Why IT Pros Are Pairing “Agent Mode” AI With Remote Access and How Devolutions Is Making It Safer

How Model Context Protocol turns AI assistants into hands-on operators inside Remote Desktop Manager, without blowing up security or governance.


Key Takeaways:

  • MCP turns AI into an operator, not just an advisor.
  • Security depends on connection management, not secret sharing.
  • Devolutions’ named-pipe approach keeps MCP interactions scoped to the active user session.

IT pros are already using AI to write scripts, summarize logs, and troubleshoot issues. The catch? Most workflows still involve a clumsy relay race: copy text from a remote session, paste it into a chatbot, copy the answer back, then repeat, all while hoping you don’t leak something sensitive into chat history.

Marc-André Moreau, CTO at Devolutions, thinks that friction is exactly where IT productivity is being lost and where Model Context Protocol (MCP) can change the game. “What MCP does is basically an RPC for LLMs,” Moreau said. “You have your application, you have your chatbot, now they can talk to each other.”

Devolutions is bringing that idea to Remote Desktop Manager (RDM) by adding an MCP server that lets a Large Language Model (LLM) interact with RDM’s capabilities without turning your environment into an AI free-for-all.
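To make the “RPC for LLMs” idea concrete: MCP messages are JSON-RPC 2.0, and a tool invocation is just a `tools/call` request. The sketch below builds one with Python’s standard library; the tool name `open_connection` and its arguments are illustrative placeholders, not RDM’s actual API.

```python
import json

# A hypothetical MCP "tools/call" request: the chatbot (client) asks the
# application's MCP server to run a named tool with structured arguments.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "open_connection",          # illustrative tool name
        "arguments": {"entry": "prod-web-01"},
    },
}

wire = json.dumps(request)    # what the client writes to the transport
decoded = json.loads(wire)    # what the server reads on the other end

assert decoded["method"] == "tools/call"
assert decoded["params"]["arguments"]["entry"] == "prod-web-01"
```

The payload is deliberately boring: structured requests and structured results are what let the LLM and the application “talk to each other” without copy/paste as the transport.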

The real problem: “Copy/paste ops” don’t scale

If you’ve ever had to troubleshoot an issue across Secure Shell (SSH), Remote Desktop Protocol (RDP), PowerShell, and a half-dozen admin portals, you already know the pattern:

  1. Gather context
  2. Ask AI
  3. Try a command
  4. Capture output
  5. Ask again

Moreau’s view is that this overhead is now the bottleneck. Pre-MCP, he said, “you would select the text, copy paste it, ask it, tell it the context… paste it back in… and do this all day long and you will waste a lot of time copy pasting around.”

That inefficiency becomes acute in remote sessions, where your AI assistant typically isn’t “inside” the target system and where switching contexts is constant.

What changes with MCP in RDM (and what doesn’t)

A key point: the user experience in RDM doesn’t suddenly become a sci-fi console.

“It’s the same picture. It looks exactly the same,” Moreau said. The goal isn’t a UI overhaul; it’s exposing RDM functionality to the LLM so the assistant can work with the tool you’re already using.

In practice, think of your MCP client (for example, VS Code with GitHub Copilot) running alongside RDM. When you hit a troubleshooting wall, you can ask the LLM to diagnose, propose steps, and (with your approval) execute actions in the remote context.
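For a sense of what “running alongside” looks like, VS Code can load MCP server definitions from a workspace `.vscode/mcp.json` file. A minimal sketch, assuming that format (the server name and executable path here are placeholders, not Devolutions-published values):

```json
{
  "servers": {
    "rdm": {
      "command": "C:\\path\\to\\rdm-mcp-proxy.exe",
      "args": []
    }
  }
}
```

Once registered, the Copilot agent can discover the server’s tools and invoke them from the same editor you’re already troubleshooting in.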

Moreau described the leap this way: “Now you can have your LLM on the right hand side… it captures the output, if the command failed, it would actually know, then it can self-correct and suggest you a different command to fix the problem.”

Why this is different from “just let the AI have the password”

Security is the make-or-break issue for agentic AI in IT. Devolutions’ argument is that connection management changes the credential story.

Once the LLM has the credential, you risk the secret being retained in chat logs or transcripts.

Moreau put it bluntly: “The problem is the LLM now has the password and it gets logged in various chat logs… you don’t want them to be.”

RDM’s approach is to keep the secret out of the LLM’s hands. “We take the credentials and we inject them into the connection… The LLM doesn’t need to know the credentials. It just needs to use the connections you have.”

And Devolutions adds an extra safeguard: credential-returning capabilities in the MCP server are “disabled by default,” so the LLM can’t request them unless you explicitly enable it.
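The pattern at work is “connection by reference”: the LLM-facing tool accepts only an entry name, and the secret is resolved and injected server-side. A hypothetical sketch (all names invented for illustration):

```python
# Stand-in for an encrypted vault; in RDM this would be the data source.
SECRETS = {"prod-web-01": "s3cr3t"}

def open_connection(entry: str) -> dict:
    """LLM-callable tool: opens a session, returns only non-sensitive metadata."""
    password = SECRETS[entry]  # credential injected here, server-side
    # Pretend we established the session with the injected credential.
    session = {"entry": entry, "state": "connected"}
    # Note what is NOT in the result: the password never crosses the MCP
    # boundary, so it can't end up in chat logs or transcripts.
    return session

result = open_connection("prod-web-01")
assert "password" not in result
```

The LLM gets everything it needs to act (a live connection and its status) and nothing it shouldn’t retain.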

The architecture choices that matter for governance

Many MCP implementations lean on network-accessible services, which can raise thorny questions about isolation in shared environments. Devolutions instead implemented MCP connectivity through a subprocess transport and a local proxy that talks to RDM via a named pipe, which can be restricted to the user session.

“The MCP server in RDM actually uses a name pipe and that name pipe can actually be correctly restricted to the same user session,” Moreau said. “So we do not break user isolation.”
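RDM’s actual mechanism is a Windows named pipe with session-scoped access control. As a rough cross-platform analogy only (not Devolutions’ implementation), the sketch below creates a local Unix-domain socket and tightens its file permissions so only the owning user can connect:

```python
import os
import socket
import stat
import tempfile

# Local endpoint in a private temp directory; analogous to a per-user pipe.
path = os.path.join(tempfile.mkdtemp(), "mcp.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(path)
os.chmod(path, 0o600)  # owner-only: other local users can't reach the endpoint
srv.listen(1)

mode = stat.S_IMODE(os.stat(path).st_mode)
assert mode == 0o600
srv.close()
```

The principle is the same on either platform: the transport itself enforces user isolation, rather than trusting every local process to behave.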

On top of that, RDM prompts the user to approve the connection. “When the client connects, you get a confirmation prompt inside RDM… to ensure that you don’t get a malicious script… that connects to RDM… without you even knowing about it.”

And for command execution, the workflow is designed to keep a human in the loop: “It will generate the code snippet. You can inspect what the code snippet is… You just click confirm.”
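That inspect-then-confirm flow reduces to a simple gate: nothing executes without an explicit approval. A minimal sketch (function names are illustrative, and the `confirm` callback stands in for the UI prompt):

```python
def run_with_approval(snippet: str, confirm) -> str:
    """Show the proposed command, then execute only on explicit approval."""
    print(f"Proposed command:\n  {snippet}")
    if not confirm(snippet):
        return "skipped"
    # In a real integration this would run in the remote session context.
    return f"executed: {snippet}"

# Auto-approve/deny for the demo; in practice `confirm` prompts the operator.
assert run_with_approval("Get-Service -Name Spooler", lambda s: True).startswith("executed")
assert run_with_approval("Remove-Item -Recurse C:\\Temp", lambda s: False) == "skipped"
```

Keeping the gate outside the LLM’s control is the point: the model proposes, the human disposes.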

For audit and compliance, Devolutions’ position is that actions remain governed by the same logging and tooling you already use with RDM and Devolutions Server/Hub. In other words, MCP shouldn’t become an unlogged side channel.

Actionable steps: How to start without creating new risk

If you want to test MCP-driven automation without scaring your security team, here’s a practical adoption path:

1) Start with a “known good” MCP client

Moreau recommends VS Code as a starting point: “VS code with GitHub Copilot is a very popular one… It’s also one of the best MCP clients out there.”

2) Treat your LLM provider like a vendor security review

Even if the tooling is local, your prompts and context may not be. Moreau’s advice: “Always look at the terms and conditions. Where is the AI model hosted? How do they handle the training clause? Are you opted out of training?”

3) Keep credentials non-negotiable

Adopt a hard rule: the assistant opens connections; it never retrieves passwords. RDM’s design supports this by injecting credentials without exposing them to the LLM.

4) Use human approval for execution (at least initially)

Especially during rollout, require explicit confirmation for tool calls and command execution. The goal is to accelerate work, not automate mistakes.

5) Pilot on “high-friction” work first

Early feedback Devolutions heard included using LLMs to reorganize “thousands of… entries” in large data sources, a task admins wouldn’t attempt manually. Start where the pain is obvious: repetitive troubleshooting, configuration drift fixes, and bulk cleanup.

The bigger trend: IT pros becoming AI “power users”

Moreau’s take on job displacement anxiety is pragmatic: learn the tools, or someone else will. Early adopters can become internal champions, reshaping who leads operational excellence.

And perhaps the best framing for MCP in RDM is this: it doesn’t replace your admin skills. It removes the busywork between your intent and execution. When the assistant can see context, run the right commands, and iterate safely with you in control, “you will feel so powerful,” Moreau said.