How Cloud Development Environments Work
Platform engineering workflow: Terraform templates, Kubernetes pods, remote dev protocols, AI agent workspaces, and IDE connections
Platform Engineering Team Creates Workspace Templates
Platform engineers and DevOps teams define workspace templates using infrastructure as code (IaC). Terraform, DevContainers, or Kubernetes YAML templates specify everything a workspace needs. In 2026, these templates also define AI agent capabilities, resource tiers for GPU-accelerated workloads, and API-driven provisioning hooks that let external systems - including AI orchestrators - spin up workspaces programmatically.
- Compute: CPU cores, RAM, GPU allocation (NVIDIA T4/A10G for AI workloads)
- Operating system: Ubuntu 24.04, Debian, Windows, or custom golden images
- Toolchain: languages, SDKs, CLIs, AI coding agents (Claude Code, Copilot)
- Editor tooling: linters, formatters, AI assistant plugins, MCP servers
```hcl
# Example Terraform template for Coder (2026)
resource "coder_agent" "main" {
  os   = "linux"
  arch = "amd64"

  startup_script = <<-EOT
    # Install AI coding tools
    curl -fsSL https://cli.anthropic.com | sh
    pip install aider-chat
  EOT
}

resource "kubernetes_pod" "workspace" {
  metadata {
    name = "workspace"
  }
  spec {
    container {
      name  = "dev"
      image = "python:3.13-bookworm"
      resources {
        limits = {
          cpu    = "8"
          memory = "16Gi"
        }
      }
    }
  }
}
```
Developer or AI Agent Requests a Workspace
A developer logs into the CDE dashboard and browses available templates, selecting one that matches their project requirements. Alternatively, an AI agent or CI/CD pipeline can request a workspace programmatically through the platform's REST API - no human interaction needed. This API-driven provisioning model is central to how modern teams run autonomous coding agents at scale: the agent requests its own sandboxed workspace, performs its task, and the workspace is destroyed when done.
API-Driven Agent Workflow
```shell
# AI agent requests a workspace via REST API
curl -X POST https://coder.company.com/api/v2/workspaces \
  -H "Authorization: Bearer $AGENT_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "template_id": "ai-agent-sandbox",
    "name": "agent-task-4821",
    "rich_parameters": {
      "repo": "github.com/org/backend",
      "branch": "feature/auth-refactor",
      "ttl_hours": "4"
    }
  }'
```
Platforms like Coder, Ona (formerly Gitpod), and Daytona expose REST APIs that allow AI orchestration systems to provision, monitor, and tear down workspaces without human involvement. This is how teams run dozens or hundreds of AI agents in parallel - each in its own isolated workspace with time-to-live limits and scoped permissions.
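At scale, an orchestrator fans these API calls out, one workspace per task. A minimal Python sketch of that fan-out, building one request payload per agent in the shape of the curl example above (the endpoint and field names are copied from that example; a real client would POST each payload with an HTTP library rather than just constructing it):

```python
import json
from concurrent.futures import ThreadPoolExecutor

# Endpoint from the curl example above; payload shape follows the same example.
API_URL = "https://coder.company.com/api/v2/workspaces"

def build_request(task_id: str, repo: str, branch: str, ttl_hours: int) -> dict:
    """Build one provisioning payload per agent task."""
    return {
        "template_id": "ai-agent-sandbox",
        "name": f"agent-task-{task_id}",
        "rich_parameters": {
            "repo": repo,
            "branch": branch,
            "ttl_hours": str(ttl_hours),
        },
    }

def provision_all(tasks: list[dict]) -> list[dict]:
    """Fan out one workspace request per task; a real client would POST each payload."""
    with ThreadPoolExecutor(max_workers=16) as pool:
        return list(pool.map(lambda t: build_request(**t), tasks))

payloads = provision_all([
    {"task_id": "4821", "repo": "github.com/org/backend",
     "branch": "feature/auth-refactor", "ttl_hours": 4},
    {"task_id": "4822", "repo": "github.com/org/frontend",
     "branch": "feature/login-ui", "ttl_hours": 2},
])
print(json.dumps(payloads[0], indent=2))
```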
Infrastructure Provisions Resources
The CDE platform automatically provisions the requested resources. Depending on the platform and configuration, the workspace runs as a VM, container, or microVM. Most enterprise deployments in 2026 use Kubernetes-orchestrated containers for speed and density, while security-sensitive workloads and AI agent sandboxes increasingly use Firecracker microVMs for hardware-level isolation.
Provisioning speed matters. Leading CDE platforms deliver ready-to-code workspaces in under 30 seconds. Coder and Ona use prebuilt container images and volume snapshots to avoid slow cold starts. For AI agents that spin up and tear down hundreds of workspaces per day, sub-minute provisioning is a hard requirement.
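Prebuilds trade storage for startup time: the expensive image pull and dependency install happen ahead of demand, so provisioning only has to clone a ready snapshot. A toy Python illustration of that cache logic (purely illustrative - real platforms snapshot container images and volumes, not dictionaries):

```python
import time

prebuilt = {}  # template name -> ready snapshot (stands in for a volume snapshot)

def build_from_scratch(template: str) -> str:
    """The slow path: stands in for image pull + dependency install."""
    time.sleep(0.2)
    return f"{template}-image"

def provision(template: str) -> str:
    """Cold start builds the environment; warm start reuses the prebuilt snapshot."""
    if template not in prebuilt:
        prebuilt[template] = build_from_scratch(template)  # done ahead of demand
    return prebuilt[template]                              # fast path

provision("python-backend")                # prebuild: slow, happens once
t0 = time.perf_counter()
provision("python-backend")                # warm start: no rebuild
print(f"warm start took {time.perf_counter() - t0:.3f}s")
```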
Developer Connects via Remote Dev Protocols
Once the workspace is ready, the developer connects using their preferred IDE and protocol. Every connection method relies on an underlying remote development protocol - most commonly SSH, WebSocket, or a proprietary thin-client protocol - to bridge the gap between the developer's local machine and the remote workspace. Understanding these protocols helps platform engineers optimize latency, enforce security policies, and troubleshoot connection issues.
VS Code Remote SSH
Local VS Code connects to the remote workspace over SSH. The VS Code Server runs on the workspace and handles extensions, file operations, and terminal sessions. ProxyCommand support lets CDE platforms inject custom authentication and routing without modifying the developer's SSH config directly.
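Coder, for example, can generate such entries with its `coder config-ssh` command; the result looks roughly like this (host name and options here are illustrative):

```
# ~/.ssh/config entry a CDE CLI might generate (host and options illustrative)
Host coder.my-workspace
  ProxyCommand coder ssh --stdio my-workspace  # platform CLI brokers auth and routing
  User coder
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
```

Because the platform CLI sits in the ProxyCommand slot, token refresh and workspace routing stay out of the developer's hands entirely.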
JetBrains Gateway
Thin local client connects to a full JetBrains IDE backend running on the workspace. Supports IntelliJ IDEA, PyCharm, WebStorm, GoLand, and other JetBrains IDEs. Uses a proprietary protocol optimized for low-bandwidth scenarios with intelligent UI diff compression.
Browser-Based IDE
VS Code runs entirely in the browser via code-server or OpenVSCode Server. Zero local installation required. WebSocket connections stream editor state between the browser and workspace. Ideal for quick edits, onboarding new developers, or accessing workspaces from any device.
SSH Terminal
Standard SSH access for terminal-based workflows, Vim/Neovim users, and headless AI agent connections. Most CDE platforms provide built-in SSH gateways with certificate-based authentication, eliminating the need to manage individual SSH keys across workspaces.
VS Code Tunnels
A newer connection method that creates a secure tunnel between local VS Code and the remote workspace without requiring direct SSH access or open ports. Tunnels are brokered through Microsoft's relay service, making them useful in restrictive network environments where SSH is blocked.
Work, Stop, Resume
The developer works as if everything were local. When done:
- Workspace can be stopped to save costs (auto-stop after idle time)
- State is preserved - resume exactly where you left off
- Workspace can be deleted when project is complete
- New workspace from same template = same environment, fresh start
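The stop/resume lifecycle above can be modeled as a small state machine (a sketch; real platforms persist a home volume and reattach it on resume rather than holding files in memory):

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """Minimal model of the CDE lifecycle: running -> stopped -> running, or deleted."""
    template: str
    state: str = "running"
    files: dict = field(default_factory=dict)  # stands in for the persistent volume

    def stop(self):                  # auto-stop after idle time:
        self.state = "stopped"       # compute released, volume kept

    def resume(self):
        assert self.state == "stopped"
        self.state = "running"       # same volume reattached: state preserved

    def delete(self):
        self.state = "deleted"
        self.files = {}              # volume reclaimed

ws = Workspace(template="python-backend")
ws.files["main.py"] = "print('wip')"
ws.stop()
ws.resume()
assert ws.files["main.py"] == "print('wip')"  # resume exactly where you left off

fresh = Workspace(template="python-backend")  # same template, fresh start
assert fresh.files == {}
```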
AI Agents Use Workspaces Autonomously
In 2026, CDEs are not just for human developers. AI coding agents like Claude Code, GitHub Copilot Workspace, and Devin use the same workspace infrastructure to execute tasks autonomously. The CDE provides the sandboxed environment that agents need to safely clone repositories, install dependencies, run builds, execute tests, and submit pull requests - all without access to production systems or other developers' workspaces.
- Isolation: each agent runs in its own workspace with scoped permissions, network policies, and time-to-live limits. No lateral movement between workspaces.
- Ephemeral by design: agent workspaces are created for a single task and automatically destroyed when done. No state accumulates, no drift, no cleanup required.
- API-first: AI orchestration platforms call the CDE API to provision workspaces on demand. No dashboards, no clicks - pure programmatic control over the workspace lifecycle.
- Parallel scale: run 10, 50, or 200 agents simultaneously, each with its own workspace. CDE platforms handle scheduling, resource allocation, and cleanup automatically.
```shell
# Typical AI agent workspace lifecycle
# 1. Orchestrator provisions workspace via API
# 2. Agent clones repo, creates branch
# 3. Agent writes code, runs tests, iterates
# 4. Agent opens pull request
# 5. Workspace auto-destroyed after TTL expires
```
```http
# Agent workspace with 4-hour time-to-live
POST /api/v2/workspaces
{
  "template": "ai-agent-sandbox",
  "ttl": "4h",
  "parameters": {
    "repo": "github.com/org/api-service",
    "task": "Refactor auth middleware to use JWT",
    "agent": "claude-code"
  }
}
```
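The `ttl` field in the request above is what drives automatic teardown. A sketch of the bookkeeping an orchestrator might do to know when a workspace expires (the `parse_ttl` helper and its duration format are assumptions extrapolated from the `"4h"` value shown; a real system would then call the platform's delete endpoint at the deadline):

```python
from datetime import datetime, timedelta, timezone

def parse_ttl(ttl: str) -> timedelta:
    """Parse a TTL string like '4h', '30m', or '90s' (assumed format)."""
    unit = {"h": "hours", "m": "minutes", "s": "seconds"}[ttl[-1]]
    return timedelta(**{unit: int(ttl[:-1])})

def expires_at(created: datetime, ttl: str) -> datetime:
    """Deadline after which the orchestrator destroys the workspace."""
    return created + parse_ttl(ttl)

created = datetime(2026, 1, 15, 9, 0, tzinfo=timezone.utc)
deadline = expires_at(created, "4h")
print(deadline.isoformat())  # the workspace is reclaimed at this instant
```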
Remote Development Protocols Under the Hood
Every CDE connection depends on a reliable protocol layer. Here is how the most common protocols compare for remote development workloads.
| Protocol | Used By | Latency Target | Best For |
|---|---|---|---|
| SSH | VS Code Remote, terminal, AI agents | Under 100ms RTT | Universal access, scripting, automation |
| WebSocket | Browser IDEs, code-server | Under 150ms RTT | Browser-based editing, zero install |
| JetBrains RD | JetBrains Gateway | Under 200ms RTT | Full IDE experience, low bandwidth |
| VS Code Tunnel | VS Code (relay mode) | Under 200ms RTT | Restrictive networks, no open ports |
| RDP/VNC | Windows desktops, GUI apps | Under 50ms RTT | Full desktop, visual/GUI testing |
AI agents primarily use SSH. When an autonomous agent connects to a CDE workspace, it almost always uses SSH or the platform's CLI tool (which wraps SSH). There is no UI to render - agents just need a shell, file system access, and the ability to run commands. This is why SSH remains the foundational protocol for CDE infrastructure, even as browser-based and proprietary protocols serve human developers.
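A headless agent session is just a sequence of shell commands executed over that SSH channel. A minimal sketch of the sequence an agent might run in its workspace (the repo and branch names echo the earlier example; `pytest -q` is an illustrative test command, and the push would normally be followed by opening a pull request via the forge's API):

```python
def agent_commands(repo: str, branch: str, test_cmd: str) -> list[str]:
    """Commands a headless agent runs over SSH: clone, branch, test, push (sketch)."""
    return [
        f"git clone https://{repo}.git work && cd work",
        f"git checkout -b {branch}",
        test_cmd,                        # validate the change before pushing
        f"git push -u origin {branch}",  # a PR is then opened via the forge API
    ]

cmds = agent_commands("github.com/org/backend", "feature/auth-refactor", "pytest -q")
print("\n".join(cmds))
```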
CDE Platforms That Support This Workflow
The template-to-workspace workflow described above is supported by all major CDE platforms in 2026, each with different strengths.
Coder
Open-source, self-hosted. Terraform-native templates with full API for agent provisioning. Runs on any infrastructure - AWS, Azure, GCP, bare metal, or air-gapped.
Ona
Cloud-native CDE platform (rebranded from Gitpod in 2025). Strong DevContainer support, Kubernetes-first architecture, and built-in prebuilds for fast workspace startup.
GitHub Codespaces
Fully managed by GitHub. Deep integration with GitHub repos and Actions. DevContainer-based configuration. Simplest onramp for teams already on GitHub.
DevPod
Open-source, client-side CDE tool. Runs DevContainers on any backend - local Docker, Kubernetes, cloud VMs. No server component required, keeping infrastructure costs minimal.
Daytona
Open-source CDE manager with a focus on API-first provisioning and AI agent support. Standardized workspace API that AI orchestration platforms can integrate with directly.
Microsoft Dev Box
Cloud-powered Windows desktops from Azure. Ideal for .NET, Visual Studio, and Windows-native development. Integrates with Intune for device management and security policies.