CDE Network Security
Comprehensive guide to securing network architecture, implementing zero-trust principles, controlling AI agent and LLM API egress, and protecting your Cloud Development Environment infrastructure.
Network Architecture Overview
Secure network topology for enterprise CDE deployments including AI agent workloads
Reference Architecture
Public Subnet
CIDR: 10.0.1.0/24
Private - Control Plane
CIDR: 10.0.10.0/24
Private - Workspaces
CIDR: 10.0.20.0/22
Defense in Depth
Multiple security layers protect against breaches
Least Privilege
Minimum access needed for each component
Segmentation
Isolate workloads to contain breaches
Encryption
All data encrypted in transit and at rest
VPC Design & Configuration
Cloud provider-specific VPC configurations
AWS VPC Configuration
# Terraform - AWS VPC for CDE
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "cde-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-east-1a", "us-east-1b", "us-east-1c"]
  private_subnets = ["10.0.10.0/24", "10.0.11.0/24", "10.0.12.0/24"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]

  # Workspace subnets - larger CIDR for many workspaces
  intra_subnets = ["10.0.20.0/22", "10.0.24.0/22", "10.0.28.0/22"]

  enable_nat_gateway = true
  single_nat_gateway = false # HA: one per AZ
  enable_vpn_gateway = false

  enable_dns_hostnames = true
  enable_dns_support   = true

  # VPC Flow Logs for security monitoring
  enable_flow_log                      = true
  create_flow_log_cloudwatch_log_group = true
  create_flow_log_cloudwatch_iam_role  = true
  flow_log_max_aggregation_interval    = 60

  tags = {
    Environment = "production"
    Project     = "cde"
  }
}

# Private endpoints for AWS services (no internet required)
resource "aws_vpc_endpoint" "s3" {
  vpc_id            = module.vpc.vpc_id
  service_name      = "com.amazonaws.us-east-1.s3"
  vpc_endpoint_type = "Gateway"
  route_table_ids   = module.vpc.private_route_table_ids
}

resource "aws_vpc_endpoint" "ecr_api" {
  vpc_id              = module.vpc.vpc_id
  service_name        = "com.amazonaws.us-east-1.ecr.api"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = module.vpc.private_subnets
  security_group_ids  = [aws_security_group.vpc_endpoints.id]
  private_dns_enabled = true
}

resource "aws_vpc_endpoint" "ecr_dkr" {
  vpc_id              = module.vpc.vpc_id
  service_name        = "com.amazonaws.us-east-1.ecr.dkr"
  vpc_endpoint_type   = "Interface"
  subnet_ids          = module.vpc.private_subnets
  security_group_ids  = [aws_security_group.vpc_endpoints.id]
  private_dns_enabled = true
}
Azure VNet Configuration
# Terraform - Azure VNet for CDE
resource "azurerm_virtual_network" "cde" {
  name                = "cde-vnet"
  location            = azurerm_resource_group.cde.location
  resource_group_name = azurerm_resource_group.cde.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "control_plane" {
  name                 = "control-plane-subnet"
  resource_group_name  = azurerm_resource_group.cde.name
  virtual_network_name = azurerm_virtual_network.cde.name
  address_prefixes     = ["10.0.10.0/24"]

  # Enable private endpoint support
  private_endpoint_network_policies_enabled = true
}

resource "azurerm_subnet" "workspaces" {
  name                 = "workspaces-subnet"
  resource_group_name  = azurerm_resource_group.cde.name
  virtual_network_name = azurerm_virtual_network.cde.name
  address_prefixes     = ["10.0.20.0/22"]

  # AKS node pool subnet
  service_endpoints = ["Microsoft.Storage", "Microsoft.ContainerRegistry"]
}

resource "azurerm_subnet" "aks" {
  name                 = "aks-subnet"
  resource_group_name  = azurerm_resource_group.cde.name
  virtual_network_name = azurerm_virtual_network.cde.name
  address_prefixes     = ["10.0.32.0/20"]
}

# Network Security Group for workspaces
resource "azurerm_network_security_group" "workspaces" {
  name                = "workspaces-nsg"
  location            = azurerm_resource_group.cde.location
  resource_group_name = azurerm_resource_group.cde.name

  # Lower priority number is evaluated first: Azure service traffic is
  # allowed (priority 90) before the broad internet deny (priority 100).
  security_rule {
    name                       = "AllowAzureServicesOutbound"
    priority                   = 90
    direction                  = "Outbound"
    access                     = "Allow"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "AzureCloud"
  }

  security_rule {
    name                       = "DenyInternetOutbound"
    priority                   = 100
    direction                  = "Outbound"
    access                     = "Deny"
    protocol                   = "*"
    source_port_range          = "*"
    destination_port_range     = "*"
    source_address_prefix      = "*"
    destination_address_prefix = "Internet"
  }
}
GCP VPC Configuration
# Terraform - GCP VPC for CDE
resource "google_compute_network" "cde" {
  name                    = "cde-vpc"
  auto_create_subnetworks = false
  routing_mode            = "REGIONAL"
}

resource "google_compute_subnetwork" "control_plane" {
  name                     = "control-plane-subnet"
  ip_cidr_range            = "10.0.10.0/24"
  region                   = "us-central1"
  network                  = google_compute_network.cde.id
  private_ip_google_access = true

  log_config {
    aggregation_interval = "INTERVAL_5_SEC"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}

resource "google_compute_subnetwork" "workspaces" {
  name                     = "workspaces-subnet"
  ip_cidr_range            = "10.0.20.0/22"
  region                   = "us-central1"
  network                  = google_compute_network.cde.id
  private_ip_google_access = true

  secondary_ip_range {
    range_name    = "pods"
    ip_cidr_range = "10.1.0.0/16"
  }

  secondary_ip_range {
    range_name    = "services"
    ip_cidr_range = "10.2.0.0/20"
  }
}

# Cloud NAT for outbound internet
resource "google_compute_router" "cde" {
  name    = "cde-router"
  region  = "us-central1"
  network = google_compute_network.cde.id
}

resource "google_compute_router_nat" "cde" {
  name                               = "cde-nat"
  router                             = google_compute_router.cde.name
  region                             = "us-central1"
  nat_ip_allocate_option             = "AUTO_ONLY"
  source_subnetwork_ip_ranges_to_nat = "ALL_SUBNETWORKS_ALL_IP_RANGES"

  log_config {
    enable = true
    filter = "ERRORS_ONLY"
  }
}
Zero Trust Architecture
Never trust, always verify - especially AI agents and their API calls
Verify Explicitly
- Authenticate all users via SSO
- Validate device posture
- Check user location/context
- Require MFA for all access
- Verify AI agent identity per request
Least Privilege Access
- JIT (Just-In-Time) access
- Time-bound permissions
- Role-based access control
- Micro-segmentation
- Scope AI agent LLM API access per task
Assume Breach
- Encrypt all traffic (mTLS)
- Continuous monitoring
- Network segmentation
- Blast radius minimization
- Log all AI agent API calls for audit
Service Mesh Implementation (Istio)
Implement mTLS between all services using a service mesh like Istio for zero-trust within the cluster. CDE platforms like Coder and Ona (formerly Gitpod) run on Kubernetes, making service mesh policies essential for isolating developer and AI agent workspaces.
# Istio PeerAuthentication - Require mTLS
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: coder-system
spec:
  mtls:
    mode: STRICT
---
# Istio AuthorizationPolicy - Control plane access
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: coder-control-plane
  namespace: coder-system
spec:
  selector:
    matchLabels:
      app: coder
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
    to:
    - operation:
        methods: ["GET", "POST", "PUT", "DELETE"]
        paths: ["/api/*"]
  - from:
    - source:
        namespaces: ["workspaces"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/api/v2/workspaces/*/agent"]
---
# Istio AuthorizationPolicy - Workspace isolation
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: workspace-isolation
  namespace: workspaces
spec:
  action: DENY
  rules:
  - from:
    - source:
        notNamespaces: ["coder-system", "istio-system"]
    to:
    - operation:
        notPorts: ["22", "13337"] # SSH and coder agent only
Firewall Rules & Security Groups
Network access control configurations
Security Group Rules Matrix
| Source | Destination | Port | Protocol | Purpose |
|---|---|---|---|---|
| 0.0.0.0/0 | Load Balancer | 443 | HTTPS | User access to CDE |
| Load Balancer | Control Plane | 8080 | HTTP | CDE API/Dashboard |
| Control Plane | Database | 5432 | PostgreSQL | CDE state storage |
| Control Plane | Workspaces | 13337 | TCP | Coder agent |
| Workspaces | Control Plane | 443 | HTTPS | Agent registration |
| Workspaces | NAT Gateway | 443 | HTTPS | Package downloads |
| AI Agent Workspace | LLM Gateway Proxy | 443 | HTTPS | LLM API calls (proxied) |
| LLM Gateway Proxy | LLM Provider APIs | 443 | HTTPS | OpenAI, Anthropic, etc. |
| Workspace A | Workspace B | * | * | DENY - Isolated |
| AI Agent Workspace | LLM Provider APIs (direct) | * | * | DENY - Must use gateway |
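The matrix above is effectively a default-deny policy with a short explicit allowlist. A minimal Python sketch of that evaluation logic (the endpoint names are illustrative, not real identifiers from any provisioning tool) can be useful for validating generated security group rules in CI:

```python
# Illustrative flow rules mirroring the matrix above (names are hypothetical).
RULES = [
    # (source, destination, port)
    ("internet", "load-balancer", 443),
    ("load-balancer", "control-plane", 8080),
    ("control-plane", "database", 5432),
    ("control-plane", "workspace", 13337),
    ("workspace", "control-plane", 443),
    ("ai-agent-workspace", "llm-gateway", 443),
    ("llm-gateway", "llm-provider", 443),
]

def is_allowed(source: str, destination: str, port: int) -> bool:
    """Default-deny: a flow is permitted only if an explicit rule matches."""
    return (source, destination, port) in RULES
```

Workspace-to-workspace traffic and direct workspace-to-LLM-provider traffic match no rule, so they fall through to the implicit deny, matching the DENY rows in the matrix.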
Kubernetes Network Policies
# Default deny all in workspace namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: workspaces
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# Allow workspace to reach control plane
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-coder-agent
  namespace: workspaces
spec:
  podSelector:
    matchLabels:
      coder.com/workspace: "true"
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: coder-system
    ports:
    - protocol: TCP
      port: 443
---
# Allow workspace to reach DNS
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: workspaces
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - protocol: UDP
      port: 53
---
# Allow workspace egress to approved registries
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-registries
  namespace: workspaces
spec:
  podSelector:
    matchLabels:
      coder.com/workspace: "true"
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.10.0/24 # ECR VPC endpoint
    ports:
    - protocol: TCP
      port: 443
---
# Allow AI agent workspaces to reach LLM gateway only
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-llm-gateway
  namespace: workspaces
spec:
  podSelector:
    matchLabels:
      coder.com/workspace: "true"
      workspace-type: "ai-agent"
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: llm-gateway
      namespaceSelector:
        matchLabels:
          name: coder-system
    ports:
    - protocol: TCP
      port: 443
AI Agent & LLM Egress Control
Securing outbound traffic from AI coding agents to LLM inference endpoints
Why AI Egress Needs Separate Controls
AI coding agents like Claude Code, GitHub Copilot, Cursor, and Windsurf send code context to LLM API endpoints with every request. Unlike traditional package downloads or Git operations, these API calls embed source code directly in request payloads. Without a dedicated LLM gateway proxy, your proprietary code, environment variables, and configuration files can flow to third-party inference endpoints with no visibility or filtering. CDE platforms like Coder and Ona provide workspace-level network controls, but LLM egress requires an additional control layer.
LLM Gateway Proxy Pattern
Route all LLM API traffic through a centralized gateway proxy that provides logging, rate limiting, content filtering, and token budget enforcement. Never allow direct workspace-to-LLM-provider connections.
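On the client side, this usually means pointing the agent's API base URL at the gateway rather than the provider. A minimal sketch, assuming the gateway routes on `x-llm-provider` and rate-limits on `x-workspace-id` (the gateway hostname and environment variable are hypothetical):

```python
import os
import urllib.request

# Hypothetical in-cluster gateway address; a real deployment would use the
# llm-gateway Service DNS name with TLS.
GATEWAY_URL = os.environ.get(
    "LLM_GATEWAY_URL", "https://llm-gateway.coder-system.svc:443"
)

def gateway_request(provider: str, workspace_id: str,
                    path: str, body: bytes) -> urllib.request.Request:
    """Build a provider-agnostic request routed via the gateway.

    x-llm-provider selects the upstream provider cluster;
    x-workspace-id feeds per-workspace rate limiting.
    """
    return urllib.request.Request(
        url=f"{GATEWAY_URL}{path}",
        data=body,
        headers={
            "x-llm-provider": provider,
            "x-workspace-id": workspace_id,
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = gateway_request("anthropic", "ws-1234", "/v1/messages", b"{}")
```

Because the workspace network policy only permits egress to the gateway, an agent that ignores the base URL override and dials the provider directly simply cannot connect.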
Prompt Content Filtering
Scan outbound LLM requests for sensitive data before they leave your network. Block or redact prompts that contain secrets, credentials, or regulated data.
Secret Detection
Block prompts containing API keys, tokens, passwords, and connection strings from being sent to LLM endpoints
PII Redaction
Detect and strip personally identifiable information (emails, phone numbers, SSNs) from code context sent to inference APIs
File Path Filtering
Prevent agents from sending content of restricted files (e.g., .env, credentials.json, private keys) as LLM context
Context Size Limits
Enforce maximum prompt size to prevent agents from sending entire codebases in a single request
Data Residency Enforcement
Route requests to region-specific LLM endpoints to comply with GDPR, data sovereignty, and industry regulations
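The filtering rules above reduce, at their core, to a scan-and-redact pass over each outbound prompt. A minimal sketch with a few illustrative patterns (production filters use dedicated secret scanners with much larger rulesets, not a handful of regexes):

```python
import re

# Illustrative detection patterns only; real deployments use full rulesets.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer_token":   re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
    "email":          re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Restricted files whose content must never be sent as LLM context.
BLOCKED_FILES = (".env", "credentials.json", "id_rsa")

def filter_prompt(prompt: str, file_path: str = "") -> tuple[bool, str]:
    """Return (allowed, redacted_prompt) for an outbound LLM request."""
    if file_path.endswith(BLOCKED_FILES):
        return False, ""  # block the request outright
    redacted = prompt
    for name, pattern in PATTERNS.items():
        redacted = pattern.sub(f"[REDACTED:{name}]", redacted)
    return True, redacted
```

In the gateway deployment shown later in this section, logic like this runs in the content-filter sidecar that Envoy consults via ext_authz before forwarding any request upstream.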
LLM Gateway Proxy - Envoy Configuration
Deploy an Envoy-based gateway proxy that intercepts all LLM API traffic. This configuration allowlists approved LLM provider endpoints, enforces rate limits per workspace, and logs all request metadata for audit.
# Kubernetes Deployment - LLM Gateway Proxy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: llm-gateway
  namespace: coder-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: llm-gateway
  template:
    metadata:
      labels:
        app: llm-gateway
    spec:
      containers:
      - name: envoy
        image: envoyproxy/envoy:v1.32-latest
        ports:
        - containerPort: 443
        volumeMounts:
        - name: config
          mountPath: /etc/envoy
      - name: content-filter
        image: your-registry/llm-content-filter:latest
        ports:
        - containerPort: 8081
        env:
        - name: BLOCK_SECRETS
          value: "true"
        - name: BLOCK_PII
          value: "true"
        - name: MAX_PROMPT_TOKENS
          value: "128000"
        - name: LOG_PROMPTS
          value: "metadata_only" # or "full" for compliance
      volumes:
      - name: config
        configMap:
          name: llm-gateway-config
---
# Envoy configuration for LLM endpoint routing
apiVersion: v1
kind: ConfigMap
metadata:
  name: llm-gateway-config
  namespace: coder-system
data:
  envoy.yaml: |
    static_resources:
      listeners:
      - name: llm_listener
        address:
          socket_address:
            address: 0.0.0.0
            port_value: 443
        filter_chains:
        - filters:
          - name: envoy.filters.network.http_connection_manager
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
              stat_prefix: llm_gateway
              route_config:
                virtual_hosts:
                - name: llm_providers
                  domains: ["*"]
                  routes:
                  # Anthropic API
                  - match:
                      prefix: "/" # any path; routing is by provider header
                      headers:
                      - name: x-llm-provider
                        exact_match: anthropic
                    route:
                      cluster: anthropic_api
                      rate_limits:
                      - actions:
                        - request_headers:
                            header_name: x-workspace-id
                            descriptor_key: workspace_id
                  # OpenAI API
                  - match:
                      prefix: "/"
                      headers:
                      - name: x-llm-provider
                        exact_match: openai
                    route:
                      cluster: openai_api
                      rate_limits:
                      - actions:
                        - request_headers:
                            header_name: x-workspace-id
                            descriptor_key: workspace_id
              http_filters:
              # Content filter sidecar
              - name: envoy.filters.http.ext_authz
                typed_config:
                  "@type": type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz
                  http_service:
                    server_uri:
                      uri: http://127.0.0.1:8081
                      cluster: content_filter
                      timeout: 5s
              - name: envoy.filters.http.router
      clusters:
      - name: anthropic_api
        connect_timeout: 10s
        type: STRICT_DNS
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: anthropic_api
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: api.anthropic.com
                    port_value: 443
      - name: openai_api
        connect_timeout: 10s
        type: STRICT_DNS
        lb_policy: ROUND_ROBIN
        load_assignment:
          cluster_name: openai_api
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: api.openai.com
                    port_value: 443
      - name: content_filter
        connect_timeout: 2s
        type: STATIC
        load_assignment:
          cluster_name: content_filter
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 127.0.0.1
                    port_value: 8081
LLM Provider Endpoint Allowlist
Only these endpoints should be reachable from AI agent workspaces, and only via the LLM gateway proxy. Block all direct connections from workspaces to these domains.
LLM Inference APIs
api.anthropic.com
api.openai.com
generativelanguage.googleapis.com
*.api.mistral.ai
api.groq.com
Cloud-Hosted LLMs
bedrock-runtime.*.amazonaws.com
*.openai.azure.com
*.aiplatform.googleapis.com
*.sagemaker.*.amazonaws.com
AI Coding Tools
copilot-proxy.githubusercontent.com
api.githubcopilot.com
*.cursor.sh
*.codeium.com
Rate Limiting
Per-workspace and per-team request limits prevent runaway agents from consuming excessive LLM resources or exfiltrating data at scale
Token Budgets
Enforce daily and monthly token spending caps per agent, per project, and per team to maintain cost control and limit blast radius
Usage Observability
Stream token usage metrics to Prometheus/Grafana for real-time dashboards, anomaly detection, and cost allocation reporting
Egress Control & Data Loss Prevention
Control outbound traffic from developer and AI agent workspaces to prevent data exfiltration
Egress Proxy
Squid proxy for controlled internet access
# squid.conf - Allowlist approach
acl allowed_domains dstdomain .github.com
acl allowed_domains dstdomain .npmjs.org
acl allowed_domains dstdomain .pypi.org
acl allowed_domains dstdomain .docker.io
acl allowed_domains dstdomain .gcr.io
acl allowed_domains dstdomain .amazonaws.com
# AI/LLM APIs - route through LLM gateway, not direct
# These should be blocked here; agents use the gateway proxy
acl llm_direct dstdomain api.openai.com
acl llm_direct dstdomain api.anthropic.com
acl llm_direct dstdomain generativelanguage.googleapis.com
# Block uploads to file sharing sites
# (deny rules must precede "allow allowed_domains" / "deny all",
# since squid evaluates http_access lines in order)
acl upload_sites dstdomain .dropbox.com
acl upload_sites dstdomain .wetransfer.com
acl upload_sites dstdomain .pastebin.com
http_access deny llm_direct
http_access deny upload_sites
http_access allow allowed_domains
http_access deny all
All workspace internet traffic is routed through the proxy for logging and filtering.
Data Loss Prevention
Prevent sensitive data exfiltration
Block Git Push to External
Only allow push to approved Git remotes
Block SSH Tunneling
Prevent reverse tunnels to external hosts
Monitor Large Transfers
Alert on uploads > 10MB
Block Clipboard to Internet
Prevent copy-paste to external sites
Block Direct LLM API Access
Force all AI agent LLM calls through the gateway proxy for logging and filtering
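One way to implement "Block Git Push to External" is a pre-push hook that rejects unapproved remotes. A sketch, assuming a hypothetical organization allowlist (the host names are placeholders):

```python
import subprocess
from urllib.parse import urlparse

# Assumed allowlist of approved Git hosts - placeholder values.
APPROVED_HOSTS = {"github.com", "git.internal.example.com"}

def remote_allowed(remote_url: str) -> bool:
    """Accept only approved hosts; handles URL and scp-style remotes."""
    if remote_url.startswith(("http://", "https://", "ssh://")):
        host = urlparse(remote_url).hostname or ""
    elif "@" in remote_url and ":" in remote_url:
        # scp-style syntax: git@host:org/repo.git
        host = remote_url.split("@", 1)[1].split(":", 1)[0]
    else:
        return False
    return host in APPROVED_HOSTS

def check_push(remote_name: str) -> bool:
    """Resolve a remote's push URL via git and validate it."""
    url = subprocess.run(
        ["git", "remote", "get-url", "--push", remote_name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return remote_allowed(url)
```

Hooks alone are advisory, since a developer or agent can bypass them with `--no-verify`; the egress proxy's domain allowlist is the enforcement layer, and the hook provides a faster, friendlier failure.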
Recommended Egress Allowlist
Package Registries
registry.npmjs.org
pypi.org
rubygems.org
proxy.golang.org
crates.io
maven.org
Container Registries
*.docker.io
ghcr.io
*.gcr.io
*.azurecr.io
*.ecr.*.amazonaws.com
Development Tools
*.github.com
*.gitlab.com
update.code.visualstudio.com
*.jetbrains.com
plugins.gradle.org
LLM APIs (OpenAI, Anthropic, etc.) should route through the LLM gateway proxy, not the general egress proxy.
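Whatever enforces this allowlist needs consistent wildcard semantics. A small sketch using fnmatch-style matching, under the assumption (common in proxy ACLs) that `*.` entries match subdomains but not the bare apex:

```python
from fnmatch import fnmatch

# Subset of the recommended allowlist above, in fnmatch syntax.
ALLOWLIST = [
    "registry.npmjs.org",
    "pypi.org",
    "ghcr.io",
    "*.docker.io",
    "*.github.com",
    "*.ecr.*.amazonaws.com",
]

def egress_allowed(host: str) -> bool:
    """True if the host matches any allowlist pattern."""
    return any(fnmatch(host, pattern) for pattern in ALLOWLIST)
```

Note that with fnmatch semantics `*.docker.io` does not match `docker.io` itself; if the apex domain should also be reachable, list it explicitly rather than relying on the wildcard.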
Network Security Checklist
Infrastructure
- VPC with private subnets for workspaces
- VPC flow logs enabled
- NAT gateway for controlled egress
- Private endpoints for cloud services
- WAF in front of load balancer
Access Control
- Kubernetes network policies enforced
- Workspace-to-workspace traffic blocked
- Service mesh with mTLS
- Egress proxy configured
- DLP policies implemented
AI Agent Network Security
- LLM gateway proxy deployed and enforced
- Direct LLM API access blocked from workspaces
- Prompt content filtering for secrets and PII
- Per-workspace token budgets and rate limits
- LLM API request audit logging enabled
- Data residency verified for LLM endpoints
