CDE Governance and Policies
Establish robust governance frameworks for cloud development environments. Define policies for human developers and AI agents alike, enforce standards, and maintain control at scale while empowering autonomy.
What is CDE Governance?
CDE governance defines the policies, standards, and controls that ensure cloud development environments are secure, cost-effective, compliant, and aligned with organizational objectives - whether those environments serve human developers, AI coding agents, or both.
Definition and Scope
CDE governance encompasses the frameworks, policies, and processes that control how development environments are provisioned, configured, accessed, and managed. It includes access control, resource allocation, security standards, compliance requirements, cost management, lifecycle policies, and - increasingly in 2026 - AI agent permissions and LLM usage policies. Governance ensures that while developers and autonomous agents have self-service capabilities, organizations maintain visibility, security, and control over their development infrastructure.
Why Policies Matter at Scale
As organizations scale from dozens to hundreds or thousands of developers - and deploy AI agents that can spin up workspaces autonomously - informal practices break down. Without governance, teams face security vulnerabilities from inconsistent configurations, cost overruns from unmanaged resources, compliance violations from unaudited environments, uncontrolled AI agent behavior, and productivity loss from environment drift. Formal policies provide guardrails that prevent these issues while maintaining the flexibility developers and agents need to move quickly.
Relationship to Platform Engineering
CDE governance is a core responsibility of platform engineering teams. While platform engineers build and maintain the CDE infrastructure, governance defines how that infrastructure can be used. Platform teams implement governance through technical controls (RBAC, quotas, templates), automated enforcement (policy engines, admission controllers), and self-service interfaces (portals, CLIs, agent APIs) that make compliant choices the default. Platforms like Coder and Ona (formerly Gitpod) provide built-in governance features such as workspace templates, resource limits, and audit logging that platform teams configure to enforce organizational policies.
The platform team serves as the enforcement mechanism for governance policies, translating business requirements into technical guardrails, monitoring compliance, and providing visibility into usage patterns. In 2026, this responsibility extends to governing AI agent workspaces - defining what agents can access, how long they can run, what resources they consume, and what LLM providers they connect to. This partnership between governance and platform engineering ensures policies are not just documented but actively enforced and continuously improved based on real-world usage.
Governance Framework Components
A comprehensive CDE governance framework addresses access control, resource management, standardization, AI agent policies, and lifecycle management to ensure environments remain secure, cost-effective, and compliant.
Access Policies
Define who can create, access, and manage workspaces. Implement role-based access control, multi-factor authentication requirements, IP allowlists, and approval workflows for privileged access.
Resource Quotas
Set limits on CPU, memory, storage, and GPU resources per user, team, agent, or project. Prevent resource hoarding and ensure fair allocation across the organization.
Workspace Lifecycle
Establish policies for workspace creation, idle timeout, automatic shutdown, retention periods, and deletion. Prevent abandoned workspaces from consuming resources indefinitely.
Approved Tool Lists
Maintain catalogs of approved base images, development tools, IDE extensions, LLM providers, and third-party services. Ensure all tools meet security and compliance requirements before deployment.
Template Standards
Define requirements for workspace templates including mandatory security tools, logging configurations, network policies, and environment variables. Enforce consistency across teams.
AI Agent Policies
Govern how AI coding agents provision workspaces, what permissions they receive, which LLM providers they can use, and what actions they can take autonomously versus requiring human approval.
Usage Monitoring
Track workspace utilization, resource consumption, LLM token usage, cost attribution, and compliance metrics. Generate reports for stakeholders and identify optimization opportunities.
Policy Categories
CDE governance spans multiple policy domains, each addressing specific organizational requirements and risk areas - from traditional security and compliance to AI-specific concerns.
Security Policies
Security policies protect sensitive code, data, and credentials from unauthorized access, leakage, or compromise. These include network segmentation (VPC isolation, private subnets), secret management (vault integration, rotation requirements), vulnerability scanning (container scanning, dependency checks), encryption at rest and in transit, authentication methods (SSO, MFA), privileged access controls (just-in-time access, approval workflows), and AI-specific controls like LLM data exfiltration prevention and agent sandbox isolation.
Example Policies
- All workspaces must use approved base images
- Secrets must be stored in HashiCorp Vault
- Production access requires approval
- Containers scanned for vulnerabilities before deployment
- AI agents cannot access production credentials
Enforcement Methods
- Admission controllers block non-compliant workspaces
- Network policies enforce segmentation
- Image scanning in CI/CD pipeline
- SSO integration with IdP
- Agent workspace isolation via microVMs
Cost Policies
Cost policies prevent runaway spending and ensure efficient resource utilization. These include resource quotas per user or team, idle timeout enforcement, workspace size limits, GPU allocation policies, storage quotas, LLM API token budgets, and cost allocation tags. Organizations implement tiered resource access, automatic downsizing of idle workspaces, and chargeback models to align costs with usage. AI agent workspaces require special attention since agents can spin up many short-lived environments rapidly, making cost controls essential.
Example Policies
- Workspaces auto-stop after 4 hours of inactivity
- Maximum 32 vCPU per workspace without approval
- GPU access requires justification and time limit
- Storage limited to 100GB per user
- LLM token budgets capped per team per month
Cost Controls
- Budget alerts at 80% and 100% thresholds
- Showback reports per team and project
- Automatic scaling for cost optimization
- Agent workspace auto-delete after task completion
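The threshold alerting described above is straightforward to implement. The sketch below shows the logic for the 80% and 100% budget alerts; the function name and return shape are illustrative, not a specific platform's API.

```python
def budget_alerts(spend: float, budget: float) -> list[str]:
    """Return the alert messages triggered at the 80% and 100% thresholds."""
    alerts = []
    ratio = spend / budget
    if ratio >= 0.8:
        alerts.append("80% threshold reached")
    if ratio >= 1.0:
        alerts.append("budget exceeded")
    return alerts
```

In practice this check would run on a schedule against cost data tagged per team, feeding notifications to the team's chat channel or the platform dashboard.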
Compliance Policies
Compliance policies ensure CDEs meet regulatory, contractual, and industry requirements. These cover data residency (geographic restrictions on data processing), audit logging (comprehensive activity trails), data classification (handling of sensitive data), retention requirements, access controls for regulated data, and certification requirements (SOC 2, HITRUST, ISO 27001). With AI integration, organizations must also address EU AI Act requirements, LLM data processing agreements, and AI-specific audit trails that log what models were used, what code they generated, and what actions agents performed.
Example Policies
- PHI workspaces must run in US regions only
- All access logged and retained for 7 years
- PCI environments require dedicated infrastructure
- AI-generated code must pass the same review as human code
Compliance Controls
- Automated compliance scanning and reporting
- Data classification tags enforced
- LLM provider data processing agreements validated
- Quarterly access reviews and recertification
Developer Policies
Developer policies balance productivity with standards and best practices. These include permitted IDEs and extensions, Git workflow requirements (branch protection, code review), testing requirements before merge, documentation standards, approved AI coding assistants, and approved technology stacks. Policies should enable developer autonomy within guardrails rather than restricting innovation. Organizations often maintain tiered policies with strict requirements for production code but more flexibility for experimental projects.
Example Policies
- All code changes require peer review
- Tests must pass before merge to main branch
- Only approved AI coding assistants permitted
- API documentation required for all public endpoints
Developer Enablement
- Self-service workspace provisioning portal
- Pre-configured templates for common stacks
- Integrated AI coding assistants with guardrails
- Documentation and onboarding resources
AI Agent Governance
As AI coding agents become standard development tools in 2026, organizations need dedicated governance frameworks that define agent permissions, LLM usage policies, and human oversight requirements within CDE platforms.
Agent Permission Models
AI agents operating in CDE workspaces need clearly defined permission boundaries. Unlike human developers who exercise judgment, agents follow their programming - making explicit permission scoping essential to prevent unintended actions.
Permission Tiers
- Read-Only Agent: Can read code, analyze patterns, suggest changes - but cannot modify files or run commands
- Sandboxed Agent: Can modify files and run commands within an isolated workspace, no network access outside the CDE
- Standard Agent: Full workspace access with network, can create PRs, run tests - but cannot merge or deploy
- Trusted Agent: Can merge approved PRs, trigger CI/CD pipelines, and perform limited deployment actions with human approval gates
Platforms like Coder and Ona provide workspace-level isolation that naturally enforces agent boundaries. Define agent permissions in workspace templates so every agent session starts with the correct constraints.
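One way to make the tiers enforceable is to model them as an ordered enum and map each action to the minimum tier allowed to perform it. The action names below are illustrative, and unknown actions are denied by default:

```python
from enum import IntEnum

class AgentTier(IntEnum):
    """Ordered permission tiers; a higher tier includes all lower-tier rights."""
    READ_ONLY = 0
    SANDBOXED = 1
    STANDARD = 2
    TRUSTED = 3

# Hypothetical mapping of actions to the minimum tier that may perform them.
MIN_TIER = {
    "read_code": AgentTier.READ_ONLY,
    "modify_files": AgentTier.SANDBOXED,
    "run_commands": AgentTier.SANDBOXED,
    "open_pr": AgentTier.STANDARD,
    "merge_pr": AgentTier.TRUSTED,
}

def is_allowed(tier: AgentTier, action: str) -> bool:
    """An action is allowed when the agent's tier meets the action's minimum."""
    required = MIN_TIER.get(action)
    if required is None:
        return False  # deny-by-default for actions not in the catalog
    return tier >= required
```

Encoding the tiers as data rather than scattered conditionals makes them easy to review, test, and embed in a workspace template.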
LLM Usage Policies
Organizations must govern which LLM providers developers and agents can use, what data can be sent to them, and how token consumption is tracked and budgeted.
LLM Governance Areas
- Approved providers: Maintain an allowlist of LLM APIs (OpenAI, Anthropic, self-hosted models) that meet security and compliance requirements
- Data classification: Define what code and data can be sent to external LLMs vs. requiring self-hosted models for sensitive repositories
- Token budgets: Set per-user and per-team monthly token limits to control costs and prevent runaway API spending
- Prompt logging: Log LLM interactions for audit purposes, especially in regulated environments
- Model version pinning: Pin to specific model versions in production workflows to ensure reproducible outputs
Use API gateways or LLM proxy services to enforce provider restrictions, log usage, and apply rate limits centrally rather than relying on individual developer compliance.
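A proxy's pre-flight check combines the allowlist and budget rules above. This is a minimal sketch; the provider names, team ID, and budget figures are hypothetical, and a real proxy would read them from configuration and a usage store.

```python
# Illustrative allowlist and per-team monthly token budgets.
APPROVED_PROVIDERS = {"openai", "anthropic", "self-hosted"}
TEAM_BUDGETS = {"payments": 5_000_000}
usage = {"payments": 4_100_000}  # tokens already consumed this month

def check_request(team: str, provider: str, est_tokens: int) -> tuple[bool, str]:
    """Decide whether to forward an LLM request, with a reason on denial."""
    if provider not in APPROVED_PROVIDERS:
        return False, f"provider '{provider}' is not on the allowlist"
    budget = TEAM_BUDGETS.get(team, 0)
    if usage.get(team, 0) + est_tokens > budget:
        return False, "monthly token budget exceeded"
    return True, "ok"
```

Centralizing this decision in the proxy means every workspace and agent gets identical enforcement without per-developer configuration.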
Human-in-the-Loop Controls
Define when AI agents require human approval before proceeding. Not every action needs oversight, but certain operations - especially those affecting production systems, security configurations, or sensitive data - demand a human checkpoint.
Approval Gates
- Always autonomous: Code formatting, test execution, linting, documentation generation
- Review required: Code changes to shared libraries, dependency updates, configuration changes
- Approval required: Database schema migrations, infrastructure changes, security policy modifications
- Prohibited: Production deployments without CI/CD, direct database writes, credential rotation
Configure approval gates in your CDE platform's workspace templates. Coder's workspace templates and Ona's environment definitions can enforce these boundaries at the infrastructure level.
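The gate categories above reduce to a lookup from action to gate, with a safe default for anything unclassified. The action names here are illustrative:

```python
# Map agent actions to the approval gates described above.
GATES = {
    "format_code": "autonomous",
    "run_tests": "autonomous",
    "update_dependency": "review",
    "migrate_schema": "approval",
    "deploy_production": "prohibited",
}

def gate_for(action: str) -> str:
    # Unlisted actions default to human review rather than autonomy,
    # so a new agent capability never bypasses oversight silently.
    return GATES.get(action, "review")
```

The defaulting rule matters most: as agents gain new capabilities, each one starts gated until someone explicitly classifies it.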
Agent Network and Data Controls
AI agents in CDE workspaces need carefully scoped network access. Agents that can reach arbitrary external endpoints pose data exfiltration risks that go beyond typical developer access concerns.
Network Policy Framework
- Egress allowlists: Agent workspaces can only reach approved endpoints (package registries, LLM APIs, Git remotes)
- DNS filtering: Block resolution of unauthorized domains from agent workspaces
- Data loss prevention: Inspect outbound traffic for sensitive patterns (API keys, PII, proprietary code)
- Internal service access: Limit which internal APIs and databases agents can reach based on task scope
- Audit all connections: Log every outbound request from agent workspaces for security review
Kubernetes network policies and CDE platform-level controls provide the enforcement mechanisms. Ona's workspace definitions and Coder's templates both support network policy configuration as part of the workspace spec.
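At the application layer, an egress check like the one below can complement network-level enforcement. The hosts and suffixes are illustrative placeholders for a real allowlist:

```python
from urllib.parse import urlparse

# Hypothetical egress allowlist: exact hosts plus domain suffixes
# (the suffix entry admits any subdomain of pypi.org).
ALLOWED_HOSTS = {"api.anthropic.com", "github.com"}
ALLOWED_SUFFIXES = (".pypi.org",)

def egress_allowed(url: str) -> bool:
    """Allow only connections to approved hosts; deny everything else."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS or host.endswith(ALLOWED_SUFFIXES)
```

A denied request should also be logged with the destination host, since repeated attempts to reach unapproved endpoints are exactly the anomaly signal the audit section below describes.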
Start with Restrictive Defaults, Expand Deliberately
When introducing AI agents into your CDE platform, start with the most restrictive permission model and expand as you build confidence. Begin with read-only agents that can analyze code and suggest changes. Progress to sandboxed agents that can modify files in isolated workspaces. Only grant broader permissions after establishing monitoring, audit trails, and incident response procedures for agent-initiated actions. The cost of under-permissioning an agent is developer inconvenience; the cost of over-permissioning is a potential security incident.
AI Governance Policy Checklist
Agent Identity and Access
- Unique identity per agent instance
- Separate RBAC roles for agents vs. humans
- Agent API key rotation schedule
- Session time limits for agent workspaces
- Agent action audit trail enabled
LLM Provider Management
- Approved LLM provider list documented
- Data processing agreements in place
- Token budget per team defined
- Sensitive data classification rules
- Self-hosted model option for regulated code
Operational Controls
- Human approval gates defined
- Agent workspace resource limits set
- Network egress policies configured
- Kill switch for agent workspaces
- Incident response plan for agent failures
RBAC and Access Control
Role-based access control provides granular permissions management for both human users and AI agents, ensuring appropriate access levels based on responsibilities and organizational requirements.
Role Definitions
Define clear roles with specific permissions and responsibilities. Common roles include:
- Developer: Create and manage own workspaces, standard resource limits
- Team Lead: Manage team workspaces, approve resource requests, view team usage
- Platform Admin: Manage templates, configure policies, view all workspaces
- Security Admin: Audit logs, enforce security policies, manage compliance
- AI Agent: API-only access, scoped to assigned repositories, time-limited sessions
- Read-Only: View dashboards and reports, no workspace creation
Permission Matrices
Document permissions for each role across actions and resources:
| Action | Dev | Lead | Agent | Admin |
|---|---|---|---|---|
| Create workspace | Yes | Yes | Yes | Yes |
| View team usage | No | Yes | No | Yes |
| Modify templates | No | No | No | Yes |
| Access prod data | Approval | Approval | No | Approval |
| Use external LLMs | Yes | Yes | Policy | Yes |
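A permission matrix is most useful when it lives as data that both documentation and enforcement code read. The sketch below encodes a subset of the matrix; role keys, action names, and the "approval" marker for conditional access are illustrative:

```python
# Permission matrix as data: True/False for unconditional access,
# "approval" for access gated behind an approval workflow.
MATRIX = {
    "create_workspace": {"dev": True, "lead": True, "agent": True, "admin": True},
    "view_team_usage":  {"dev": False, "lead": True, "agent": False, "admin": True},
    "modify_templates": {"dev": False, "lead": False, "agent": False, "admin": True},
    "access_prod_data": {"dev": "approval", "lead": "approval", "agent": False, "admin": "approval"},
}

def can(role: str, action: str):
    """Look up a role's access; unknown roles or actions are denied."""
    return MATRIX.get(action, {}).get(role, False)
```

Storing the matrix this way lets a CI check verify that the documented table and the enforced policy never drift apart.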
Approval Workflows
Implement approval processes for sensitive or resource-intensive requests:
- GPU Access: Request with justification - Manager approval - Auto-expiry after 7 days
- Production Access: Just-in-time request - Security team approval - Time-limited token - Audit log
- Agent Elevation: Agent requests expanded permissions - Team lead approval - Scoped to specific task - Auto-revoke on completion
- Large Resources: Workspaces >64 vCPU require team lead approval and business justification
Access Control Best Practices
Principle of Least Privilege
Grant users and agents the minimum permissions necessary to perform their functions. Start with restrictive defaults and expand as needed. Regularly review and revoke unnecessary permissions. Use temporary elevated access for privileged operations rather than permanent admin rights. This principle applies doubly to AI agents, which should never have broader access than the human developer who initiated them.
Separation of Duties
Prevent conflicts of interest by separating critical functions across roles. Development teams should not have production deployment permissions. Security teams should not manage their own audit logs. AI agents should not approve their own code changes or merge their own PRs. Use approval workflows to enforce checks and balances for sensitive operations.
Resource Governance
Control resource allocation and utilization to optimize costs, prevent abuse, and ensure fair access across teams and AI agents while maintaining developer productivity.
Workspace Size Limits
Define default and maximum resource allocations for workspaces. Implement tiered limits based on user role, project requirements, and budget constraints.
Example Tier Structure
- Small: 2 vCPU, 8 GB RAM, 50 GB storage - Default for general development
- Medium: 8 vCPU, 32 GB RAM, 100 GB storage - For backend services, data processing
- Large: 16 vCPU, 64 GB RAM, 200 GB storage - Requires approval, for ML training, builds
- Agent: 2 vCPU, 4 GB RAM, 20 GB storage - Default for AI agent workspaces, ephemeral
- GPU: 4 vCPU, 16 GB RAM, 1x GPU - Requires approval and justification, time-limited
Platform teams can override limits for justified use cases while maintaining visibility and approval workflows. Auto-scaling policies can adjust resources based on actual usage patterns.
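An admission check against the tier table above might look like the following sketch, where the limits mirror the example tiers and the approval rule for the Large tier:

```python
# Tier limits from the example structure above: (vCPU, RAM GB, storage GB).
TIERS = {
    "small":  (2, 8, 50),
    "medium": (8, 32, 100),
    "large":  (16, 64, 200),
    "agent":  (2, 4, 20),
}

def needs_approval(tier: str, cpu: int, ram_gb: int, disk_gb: int) -> bool:
    """A request needs approval if it asks for the approval-gated Large tier
    or exceeds any limit of its own tier."""
    if tier == "large":
        return True
    max_cpu, max_ram, max_disk = TIERS[tier]
    return cpu > max_cpu or ram_gb > max_ram or disk_gb > max_disk
```

The same table can drive template defaults, so developers see the limits before they hit them.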
GPU and LLM Resource Policies
GPUs and LLM API tokens are expensive resources requiring careful governance. Implement allocation strategies that balance accessibility with cost control.
Resource Governance Strategies
- GPU request-based: Submit justification, get time-limited allocation
- GPU queue system: Fair scheduling when GPUs fully utilized
- Auto-release: Reclaim GPUs from idle workspaces after 30 minutes
- LLM token pools: Team-level monthly token budgets with alerts at 80%
- Model routing: Route simple tasks to cheaper models, reserve premium models for complex work
Monitor GPU utilization and LLM token consumption. Provide dashboards showing spend per team, per agent, and per project. Consider LLM proxy gateways for centralized cost tracking.
Storage Quotas
Storage costs accumulate quickly without governance. Implement quotas, lifecycle policies, and monitoring to prevent unbounded growth.
Storage Management Policies
- Per-workspace quotas: 50-200 GB depending on project needs
- User home directory: 10 GB for personal files and configuration
- Shared project storage: Team-level quotas with access controls
- Temporary storage: Auto-cleanup of build artifacts after 7 days
- Agent workspace storage: Minimal allocation, auto-cleanup on task completion
Alert users when approaching quota limits. Provide self-service tools to analyze storage usage and identify cleanup opportunities. Integrate with artifact repositories for build outputs rather than storing in workspaces.
Idle Timeout Enforcement
Idle workspaces waste resources and money. Implement automatic shutdown policies that balance cost savings with developer convenience.
Idle Detection Strategies
- Activity monitoring: Track IDE connections, terminal sessions, process CPU usage
- Graceful shutdown: 15-minute warning before auto-stop, allow extension
- Scheduled stops: Auto-stop after business hours unless opted out
- Agent timeouts: Agent workspaces auto-terminate after task completion or max runtime
- Max runtime: Absolute limit (e.g., 12 hours for humans, 2 hours for agents) regardless of activity
Persist workspace state so restarts are seamless. Allow users to extend or override idle timeout for long-running tasks with justification. Report idle time savings to demonstrate governance value.
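The warn-then-stop flow above reduces to a simple decision on elapsed idle time. The sketch below uses this page's example values (a 4-hour idle limit with a 15-minute warning window); production systems would feed it real activity timestamps:

```python
from datetime import datetime, timedelta

IDLE_LIMIT = timedelta(hours=4)
WARNING_WINDOW = timedelta(minutes=15)

def idle_action(last_activity: datetime, now: datetime) -> str:
    """Return 'keep', 'warn' (inside the warning window), or 'stop'."""
    idle = now - last_activity
    if idle >= IDLE_LIMIT:
        return "stop"
    if idle >= IDLE_LIMIT - WARNING_WINDOW:
        return "warn"
    return "keep"
```

Agent workspaces would run the same logic with a much shorter limit, consistent with the 2-hour agent maximum mentioned above.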
Balancing Control with Productivity
Resource governance should optimize costs and prevent abuse without frustrating developers. Start with generous defaults and tighten based on actual usage data. Provide transparency into resource consumption and costs. Make it easy to request additional resources with clear approval criteria. For AI agents, use shorter default timeouts and smaller resource allocations since agent workspaces are typically ephemeral and task-focused. The goal is efficient resource utilization, not punishment for legitimate use.
Template Governance
Standardize workspace configurations through governed templates that embed security controls, compliance requirements, and best practices into every environment - for both human developers and AI agents.
Approved Base Images
Maintain a catalog of vetted, security-hardened base images for common development scenarios including agent-specific images.
- Vulnerability-scanned and patched
- Compliance controls pre-configured
- Standard tooling and dependencies
- Agent-optimized minimal images available
- Version pinning for reproducibility
Required Security Tools
Embed mandatory security tooling into all templates to prevent configuration drift and ensure consistent protection.
- Secret scanning (git-secrets, truffleHog)
- Dependency scanning (Snyk, Dependabot)
- Static analysis (SonarQube, Semgrep)
- Audit logging agents
- LLM proxy for API traffic inspection
Mandatory Configurations
Enforce organizational standards through template configurations that cannot be overridden by individual developers or agents.
- Network policies and firewall rules
- TLS certificates and encryption settings
- Git configuration (signed commits)
- LLM provider allowlist and API routing
- Logging and monitoring agents
Template Lifecycle Management
Development Process
1. Proposal: Template owner submits request with justification, security review
2. Development: Build template in isolated environment, include all required components
3. Testing: Automated security scanning, compliance validation, functional testing
4. Approval: Security team sign-off, platform team review, publish to catalog
5. Maintenance: Monthly security updates, quarterly feature updates, deprecation notices
Governance Controls
Version Control
All templates stored in Git with versioning, change logs, and rollback capability. Semantic versioning indicates breaking changes. Pin workspaces to specific template versions.
Automated Testing
CI/CD pipeline validates templates on every commit. Security scans, policy checks, build tests must pass before promotion to production catalog.
Usage Tracking
Monitor which templates are used, by whom (human or agent), and for what projects. Deprecate unused templates. Identify opportunities for consolidation and standardization.
Template Customization vs. Standardization
Balance flexibility with control. Provide base templates with mandatory security and compliance components, but allow customization of development tools, language versions, and project-specific dependencies. Use inheritance models where specialized templates extend approved base images. For AI agent templates, standardize on minimal images with strict network policies and short timeouts since agents do not need IDE tooling, personalization, or long-lived sessions. Document customization boundaries clearly and enforce through automated validation.
Audit and Reporting
Comprehensive visibility into CDE usage, compliance status, AI agent activity, and resource consumption enables informed decision-making, cost optimization, and regulatory compliance.
Usage Dashboards
Real-time dashboards provide visibility into platform health, resource utilization, and user activity.
Platform Overview
Active workspaces, total users, resource utilization, cost trends, system health metrics
Team Dashboards
Team-specific usage, resource allocation, cost attribution, workspace inventory, compliance status
AI Agent Activity
Agent workspace count, LLM token consumption, task completion rates, permission escalation requests, error rates
Cost Attribution
Accurate cost tracking and allocation enables chargeback models, budget management, and optimization opportunities.
Tagging Strategy
Tag all resources with team, project, cost center, environment, and human-vs-agent origin for granular cost tracking
Showback/Chargeback
Generate monthly cost reports per team including LLM API costs. Option for chargeback to department budgets to incentivize efficiency
Cost Optimization
Identify idle resources, oversized workspaces, inefficient LLM model selection, and over-provisioned agent workspaces
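A showback report is essentially a group-by over tagged resources. The sketch below rolls up cost by team and human-vs-agent origin; the team names, tags, and cost figures are illustrative:

```python
# Resources tagged per the tagging strategy above (team + origin + cost).
resources = [
    {"team": "payments", "origin": "human", "cost": 120.0},
    {"team": "payments", "origin": "agent", "cost": 30.0},
    {"team": "search",   "origin": "human", "cost": 80.0},
]

def showback(items):
    """Sum cost per (team, origin) pair for a monthly showback report."""
    totals = {}
    for r in items:
        key = (r["team"], r["origin"])
        totals[key] = totals.get(key, 0.0) + r["cost"]
    return totals
```

Separating agent cost from human cost in the rollup is what makes agent-specific controls, like auto-delete after task completion, measurable.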
Compliance Reports
Automated compliance reporting demonstrates adherence to policies and regulatory requirements for auditors.
Policy Compliance
Track adherence to security policies, access controls, resource limits, and AI governance rules across all workspaces
Regulatory Reporting
Generate SOC 2, HIPAA, PCI DSS, GDPR, and EU AI Act compliance reports with evidence of controls and remediation
Audit Trails
Comprehensive logs of all human and agent access, changes, approvals with tamper-proof storage and retention policies
Anomaly Detection
Proactive monitoring detects unusual patterns that may indicate security incidents, policy violations, agent misbehavior, or resource abuse.
Security Anomalies
Unusual access patterns, failed auth attempts, privilege escalation, data exfiltration indicators, agent attempting restricted actions
Cost Anomalies
Unexpected spend increases, LLM token spikes, resource spikes, quota violations, cryptocurrency mining detection
Agent Behavior Anomalies
Agents creating excessive workspaces, unusual file access patterns, attempts to bypass network restrictions, looping behaviors
Key Metrics to Track
Adoption Metrics
- Active users vs. total users
- Workspace creation rate
- Daily/weekly active workspaces
- Template usage distribution
- Time to first workspace
Efficiency Metrics
- Resource utilization rates
- Idle time percentage
- Cost per developer per month
- Workspace provisioning time
- Developer satisfaction scores
Compliance Metrics
- Policy compliance rate
- Security vulnerabilities detected
- Mean time to remediation
- Failed access attempts
- Audit findings and status
AI Agent Metrics
- Agent workspace count and duration
- LLM tokens consumed per team
- Agent task success/failure rate
- Human approval gate hit rate
- Agent cost per completed task
Policy as Code
Codify governance policies to enable automated enforcement, version control, testing, and continuous compliance at scale - including AI-specific rules for agent behavior and LLM usage.
OPA and Rego Policies
Open Policy Agent (OPA) provides a declarative policy language (Rego) for expressing complex rules that can be enforced across your CDE platform, including AI agent-specific constraints.
Example Rego Policy
package workspace.admission

deny[msg] {
    input.request.kind.kind == "Workspace"
    not input.request.object.spec.template.approved
    msg := "Workspace must use approved template"
}

deny[msg] {
    input.request.kind.kind == "Workspace"
    input.request.object.spec.agent == true
    not input.request.object.spec.network.egressAllowlist
    msg := "Agent workspaces require explicit egress allowlist"
}

deny[msg] {
    input.request.kind.kind == "Workspace"
    cpu := input.request.object.spec.resources.cpu
    cpu > 32
    not is_admin
    msg := "CPU limit exceeds maximum (32) for non-admins"
}

# The admin group name is illustrative; match it to your IdP's group claims.
is_admin {
    input.request.userInfo.groups[_] == "platform-admins"
}

OPA policies can validate workspace configurations, enforce resource limits, check compliance requirements, restrict agent permissions, and integrate with admission controllers to prevent non-compliant deployments.
Automated Enforcement
Integrate policy engines into CDE infrastructure to automatically enforce governance without manual intervention - essential when AI agents operate at machine speed.
Admission Control
Block workspace creation that violates policies. OPA admission controllers in Kubernetes reject non-compliant manifests before deployment. Especially critical for agent-initiated workspaces that spin up without human oversight.
Continuous Compliance
Periodically scan existing workspaces for compliance drift. Automatically remediate violations when possible or notify owners. Generate compliance reports and audit trails covering both human and agent activity.
Runtime Enforcement
Monitor workspace activity and enforce behavior policies. Auto-stop idle workspaces, block unauthorized network access, enforce resource quotas dynamically, and terminate agent workspaces that exceed runtime or cost limits.
GitOps for Policy Management
Store policies in Git repositories to enable version control, peer review, automated testing, and deployment pipelines for governance changes.
Repository Structure
policies/
  admission/
    workspace-limits.rego
    template-validation.rego
    security-controls.rego
    agent-permissions.rego
  runtime/
    idle-timeout.rego
    network-policies.rego
    agent-runtime-limits.rego
  ai-governance/
    llm-provider-allowlist.rego
    token-budget-limits.rego
    data-classification.rego
  compliance/
    soc2-controls.rego
    hipaa-controls.rego
    eu-ai-act-controls.rego
  tests/
    workspace-limits_test.rego
    agent-permissions_test.rego
  docs/
    policy-guide.md
    change-log.md
Deployment Pipeline
1. Policy changes submitted via pull request with description and impact analysis
2. Automated tests validate policy syntax and expected behavior
3. Security and platform teams review for unintended consequences
4. Deploy to staging environment for validation with test workspaces
5. Promote to production with monitoring for policy violations and alerts
Benefits
- Consistent enforcement
- Version control for policies
- Automated testing and validation
- Audit trail of policy changes
- Agent behavior governance at scale
Challenges
- Learning curve for Rego
- Policy complexity management
- Performance at scale
- AI-specific edge cases
- Balancing strictness and flexibility
Best Practices
- Start simple, add complexity gradually
- Write comprehensive policy tests
- Document intent and rationale
- Separate human and agent policies
- Provide clear violation messages
Frequently Asked Questions
How do you balance governance controls with developer autonomy?
Effective governance enables autonomy within guardrails rather than restricting it. Provide self-service capabilities with clear boundaries, automate policy enforcement to reduce friction, offer pre-approved templates and tools, and make the compliant path the easiest path. Reserve manual approvals for truly exceptional cases. Engage developers in policy design to ensure rules are practical and don't impede legitimate work. Monitor actual usage patterns and adjust policies based on real-world needs. The goal is security and compliance without developer frustration.
How should organizations govern AI coding agents in CDE workspaces?
Start by treating AI agents as a distinct identity class with their own RBAC roles, separate from human developers. Define explicit permission tiers - from read-only analysis agents to sandboxed agents that can modify code within isolated workspaces. Enforce network egress restrictions so agents can only reach approved endpoints (LLM APIs, package registries, Git remotes). Set hard runtime limits and resource caps on agent workspaces since agents do not self-regulate. Require human approval gates for high-impact actions like merging code, modifying infrastructure, or accessing production data. Log all agent actions for audit purposes. Platforms like Coder and Ona provide workspace-level isolation that naturally supports these controls through template definitions.
What LLM usage policies should organizations implement for CDEs?
Organizations should maintain an approved list of LLM providers that meet security, privacy, and compliance requirements. Classify code repositories by sensitivity - public/open-source code may use any approved external LLM, while proprietary or regulated code may require self-hosted models or providers with data processing agreements that guarantee no training on your data. Set per-team and per-user monthly token budgets to control costs, with alerts at 80% thresholds. Route all LLM API calls through a centralized proxy for logging, rate limiting, and data loss prevention scanning. Pin model versions in production workflows for reproducibility, and maintain audit trails of all LLM interactions for compliance reporting.
What is the difference between governance policies and platform engineering?
Governance defines what is allowed and required, while platform engineering implements the technical infrastructure that enforces those rules. Governance is the "what" and "why" - the business requirements, security standards, compliance obligations, AI usage boundaries, and cost constraints. Platform engineering is the "how" - building the CDE platform, implementing access controls, creating templates, configuring policy engines, setting up LLM proxies, and providing self-service tools. Platform engineers translate governance requirements into technical guardrails and automation. Both must work together: governance without enforcement is ineffective, and engineering without clear requirements leads to misalignment.
How often should governance policies be reviewed and updated?
Review governance policies quarterly for effectiveness and annually for comprehensive updates. In the fast-evolving AI landscape of 2026, AI-specific policies may need more frequent review as new agent capabilities, LLM providers, and regulatory requirements emerge. Trigger ad-hoc reviews when introducing new CDE platforms, deploying new AI agent frameworks, experiencing security incidents, facing compliance changes, onboarding new teams, or receiving consistent developer feedback about policy friction. Track metrics like policy violation rates, approval workflow bottlenecks, cost trends, and developer satisfaction to identify areas for improvement. Establish a governance committee with representation from security, platform engineering, AI/ML teams, development teams, and management to ensure balanced decision-making.
What tools are available for implementing policy as code?
Open Policy Agent (OPA) with Rego is the most popular choice for declarative policy enforcement across cloud-native environments. Kubernetes admission controllers (Gatekeeper, Kyverno) integrate OPA into the cluster control plane. Cloud providers offer native policy services like AWS Config Rules, Azure Policy, and GCP Organization Policy. HashiCorp Sentinel integrates with Terraform for infrastructure policies. For AI-specific governance, LLM gateway products like LiteLLM Proxy and Portkey provide centralized LLM access control, rate limiting, and audit logging. CDE platforms like Coder offer built-in template governance, and Ona provides environment-level policy enforcement. Choose tools based on your CDE platform architecture, existing tooling, team expertise, and integration requirements.
Continue Learning
Explore related topics to deepen your understanding of CDE security, compliance, AI agent management, access management, and cost optimization.
AI Agent Security
Secure AI coding agents with sandbox isolation, permission boundaries, and runtime monitoring in CDE workspaces.
CDE Compliance
Navigate regulatory requirements and compliance frameworks for cloud development environments across industries.
CDE Security
Secure cloud development environments with network isolation, secret management, and vulnerability scanning.
FinOps for CDEs
Optimize cloud development costs with resource quotas, idle detection, LLM token budgets, and chargeback models.
