POST https://api.anthropic.com/v1/environments/bridge
The *payload* includes information about your machine:
```json
{
  "machine_name": "Fernando's MacBook-Pro",
  "directory": "/Users/fernando/code/tokamak",
  "branch": "main",
  "git_repo_url": "https://git.frr.dev/frr/tokamak.git"
}
```
And the response gives you an environment_id, an environment_secret, and your organization_uuid. Essentially, it’s a record: “this computer, in this directory, with this session, is available for remote control.”
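The response body itself isn't reproduced here, but given the three fields it carries, it presumably looks something like this (values are placeholders):

```json
{
  "environment_id": "...",
  "environment_secret": "...",
  "organization_uuid": "..."
}
```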
2. The WebSocket (not polling)
Here’s where it gets interesting. The communication isn’t HTTP polling. It’s a bidirectional WebSocket:
wss://api.anthropic.com/v1/session_ingress/ws/{session_id}
The CLI opens this WebSocket and keeps it running. When you type something into claude.ai on your phone, the message travels through the WebSocket to your local CLI. The CLI processes it (executes tools, reads files, whatever) and sends the result back through the same channel.
It’s not magic, but it’s elegant. The WebSocket maintains a persistent connection, so latency is minimal. No need to keep asking, “anything new?” every five seconds.
Internally, Anthropic calls this HybridTransport: HTTPS for registering and heavy operations, WebSocket for real-time communication.
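To make the flow concrete, here's a minimal sketch of what the CLI's side of that channel might look like. Only the URL pattern comes from the traces; the message format, the handler, and the rest of the flow are my assumptions, not Anthropic's implementation:

```python
# Hypothetical sketch of a client holding the session_ingress WebSocket
# open. Names and message shapes are assumptions.
import asyncio
import json

INGRESS_BASE = "wss://api.anthropic.com/v1/session_ingress/ws"  # from the debug log

def ingress_url(session_id: str) -> str:
    """Build the per-session WebSocket URL observed in the traces."""
    return f"{INGRESS_BASE}/{session_id}"

async def run_bridge(session_id: str, handle_message) -> None:
    # Requires the third-party `websockets` package (pip install websockets).
    import websockets
    async with websockets.connect(ingress_url(session_id)) as ws:
        async for raw in ws:                          # messages pushed from claude.ai
            result = handle_message(json.loads(raw))  # e.g. run a tool locally
            await ws.send(json.dumps(result))         # reply on the same channel
```

The point is the shape of the loop: one long-lived connection, messages pushed down, results pushed back up, no polling in the happy path.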
3. The work polling
In addition to the WebSocket, there’s a polling endpoint for when the WebSocket connection drops:
GET https://api.anthropic.com/v1/environments/{env_id}/work/poll
This is the fallback. If the WebSocket disconnects (shaky WiFi, laptop goes to sleep), the CLI can check back for work using plain old HTTP. Belt and suspenders, as they say in America.
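A sketch of what that fallback could look like from a client's perspective. Only the endpoint path is taken from the traces; the auth header and response shape are assumptions:

```python
# Hedged sketch of the HTTP polling fallback. The Bearer auth scheme and
# JSON response body are assumptions, not observed behavior.
import json
import urllib.request

API_BASE = "https://api.anthropic.com"

def poll_url(env_id: str) -> str:
    return f"{API_BASE}/v1/environments/{env_id}/work/poll"

def poll_for_work(env_id: str, secret: str) -> dict:
    """One GET against the fallback endpoint, for when the WebSocket is down."""
    req = urllib.request.Request(
        poll_url(env_id),
        headers={"Authorization": f"Bearer {secret}"},  # auth scheme assumed
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```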
4. The on-disk traces
And here’s what intrigued me most as a toolbuilder. Every /rc session leaves traces in three places:
In the session JSONL (the file Tokamak already parses), a new event appears carrying remote-control metadata.
The slug field is particularly cool: a randomly generated three-word name to give the session a human-readable ID. “rapid-frozen-basilisk.” Sounds like a CIA operation name.
In telemetry files, a flag appears:
is_claude_code_remote: true
And in the debug log (~/.claude/debug/{sessionId}.txt), you get EVERYTHING:
Bridge API request: POST /v1/environments/bridge
Bridge API response: 200 { environment_id: "...", ... }
WebSocket connected: wss://api.anthropic.com/v1/session_ingress/ws/...
It’s a complete dump of every interaction with the Bridge API. URLs, payloads, responses, WebSocket states. A reverse engineering dream.
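Since the debug log is plain text with consistent prefixes, it's trivial to mine. A small sketch, matching only the line formats shown above (anything beyond those prefixes is an assumption):

```python
# Pull the Bridge interactions out of a ~/.claude/debug/{sessionId}.txt dump.
import re

LINE_RE = re.compile(
    r"^(Bridge API request|Bridge API response|WebSocket connected): (.+)$"
)

def parse_debug_log(text: str) -> list[tuple[str, str]]:
    """Return (kind, detail) pairs for every Bridge-related line."""
    return [m.groups() for line in text.splitlines() if (m := LINE_RE.match(line))]
```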
Why it doesn’t work (yet)
Now for the bitter part. Everything I just described, I found while running /rc in my terminal. The CLI registered with the Bridge API without a hitch. The WebSocket connected. The debug log filled with traces.
But when I tried connecting through my phone, claude.ai said no.
I looked on GitHub. Dozens of issues reporting the exact same thing. “Remote Control not available in my organization.” Max accounts. Paid accounts. All with the same problem.
Anthropic’s answer is that it’s on a “gradual rollout.” In plain language: they’re enabling it for some accounts and not others. And as of February 2026, most Max accounts still don’t have access.
It’s like being shown a restaurant menu with gorgeous photos, ordering the signature dish, and then the waiter tells you, “Oh, that one’s not available yet.”
What you can do without Remote Control working
Here’s the constructive part. Even if you can’t use Remote Control from your phone, the mere existence of the API and its trace data opens up interesting possibilities for third-party tools.
Detect remote sessions
The slug field in the JSONL and the is_claude_code_remote flag in telemetry make it possible to distinguish local sessions from remote ones. If you’re building a dashboard (like Tokamak), you could display: “this session was controlled from a phone between 2:00 PM and 3:30 PM.”
Bridge data as context
Registration with the Bridge API includes machine_name, directory, branch, and git_repo_url. If you have multiple machines or sessions, the Bridge gives you a full inventory of where each instance of Claude Code is running. A free Fleet View.
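As a sketch of that fleet view, here's how you might aggregate registrations. The four fields come straight from the registration payload; the grouping is just one obvious way to slice it:

```python
# A "fleet view" sketch: group Bridge registrations by machine to see
# every directory where a Claude Code instance is running.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class BridgeRegistration:
    machine_name: str
    directory: str
    branch: str
    git_repo_url: str

def fleet_view(regs: list[BridgeRegistration]) -> dict[str, list[str]]:
    """Map each machine name to the directories registered from it."""
    view: dict[str, list[str]] = defaultdict(list)
    for r in regs:
        view[r.machine_name].append(r.directory)
    return dict(view)
```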
WebSocket as an observation channel
The WebSocket is bidirectional. Today Anthropic uses it to link claude.ai to your CLI. But a third-party client could, in theory, also connect to the same channel as an observer, watching messages in real time without interfering. A live session replay.
I say “in theory,” because you’d need the environment_secret provided by the Bridge API, and Anthropic could (and probably will) restrict who can connect. But the infrastructure is there.
What this says about Anthropic
Two things stand out to me.
First: the technical quality of the Bridge API is high. WebSocket with a polling fallback, rotatable secrets, environment registration with git metadata, human-friendly session names. This isn’t a hack job. They built it to scale.
Second: the rollout strategy is… questionable. Announcing a feature on the official blog, having the CLI support it, having it leave traces and generate functional URLs… but making it unavailable to most paid users. That’s frustrating. The dozens of GitHub issues confirm it.
This is a recurring pattern in the AI world: lots of announcements, little rollout. As if companies need to generate hype before their infrastructure is ready to serve everyone.
In summary
Claude Code’s Remote Control is a real, well-designed feature with an elegant API under the hood. The Bridge API, the WebSocket, the on-disk traces — everything suggests this was built to last.
But as of today, February 27, 2026, most Max users can’t use it. The data is there. The API responds. Your CLI registers and connects. And then claude.ai tells you you can’t pass.
If you have a Max account and it works for you: congratulations, you’re one of the lucky ones. If not: hang tight. Or, like me, spend an afternoon reverse-engineering an API you can’t actually use. At least you’ll learn something.
And if you’re building tools on top of Claude Code, take note of the endpoints and traces. When this opens up to everyone, you’ll want to be ready.