Secrets Management for Cloud Development Environments
Secure credentials, API keys, and sensitive data in your CDE infrastructure using HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager
Secrets Are the Most Critical Attack Surface
In cloud development environments, secrets management is not optional. Developers need access to databases, APIs, cloud resources, and production credentials. Without proper secrets management, credentials end up hardcoded in repositories, stored in plain text, or shared through insecure channels.
Cloud Development Environments centralize development infrastructure, which means credentials and secrets must be managed at scale. A single compromised secret can expose production databases, cloud accounts, or third-party services across your entire engineering organization.
This guide shows you how to implement enterprise-grade secrets management using industry-standard tools integrated with your CDE platform.
Why Secrets Management Matters
Security Risks Without Secrets Management
- Credential Leakage: Secrets committed to Git repositories, exposed in CI/CD logs, or stored in plaintext configuration files
- Long-Lived Credentials: API keys and passwords that never expire, providing attackers unlimited time to exploit them
- Shared Credentials: Multiple developers using the same database password, making access revocation impossible
- No Audit Trail: Impossible to know who accessed which secrets and when
- Manual Rotation: Secrets rarely rotated due to difficulty in updating all locations
Compliance Requirements
- HITRUST CSF: Requires encryption of credentials at rest and in transit (Control 10.k)
- SOC 2 Type II: Mandates centralized secrets management with access logging (CC6.1)
- PCI DSS: Requires encryption of authentication credentials and key management (Requirement 8.2.1)
- GDPR: Pseudonymization and encryption of personal data (Article 32)
- FedRAMP: FIPS 140-2 validated cryptographic modules for secrets storage
HashiCorp Vault Integration
HashiCorp Vault is the industry standard for secrets management in cloud-native environments. It provides dynamic secrets, encryption as a service, and fine-grained access control.
Setup and Configuration
1. Deploy Vault in High Availability Mode
# vault-config.hcl
ui = true
api_addr = "https://vault.company.com:8200"
cluster_addr = "https://vault.company.com:8201"
storage "raft" {
path = "/vault/data"
node_id = "vault-node-1"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_cert_file = "/vault/tls/vault.crt"
tls_key_file = "/vault/tls/vault.key"
tls_min_version = "tls12"
}
seal "awskms" {
region = "us-east-1"
kms_key_id = "arn:aws:kms:us-east-1:123456789:key/abc-123"
}
# Deploy the Vault cluster (apply your rendered StatefulSet/Helm manifest)
kubectl apply -f vault-cluster.yaml
2. Enable Kubernetes Authentication
# Enable Kubernetes auth backend
vault auth enable kubernetes
# Configure Kubernetes authentication
vault write auth/kubernetes/config \
kubernetes_host="https://kubernetes.default.svc:443" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
token_reviewer_jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token
# Create policy for developer workspaces
vault policy write developer-workspace - <<EOF
# Example policy: allow workspaces to request dynamic DB credentials
path "database/creds/dev-readonly" {
  capabilities = ["read"]
}
# Allow reading team-scoped static secrets
path "secret/data/dev/*" {
  capabilities = ["read", "list"]
}
EOF
Dynamic Secrets Configuration
What Are Dynamic Secrets?
Dynamic secrets are generated on-demand with short TTLs (time-to-live). Instead of sharing a static database password, Vault creates a unique PostgreSQL user for each workspace that expires after 24 hours. This eliminates shared credentials and limits blast radius.
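Clients are expected to renew (or re-request) a lease before its TTL runs out. A minimal sketch of that check, with an illustrative renewal threshold (the function name and 75% cutoff are assumptions, not part of the Vault API):

```python
from datetime import datetime

def needs_renewal(issued_at, lease_duration_s, now, threshold=0.75):
    """Return True once a dynamic-secret lease has consumed `threshold`
    of its TTL, so clients renew or re-request before expiry."""
    elapsed = (now - issued_at).total_seconds()
    return elapsed >= threshold * lease_duration_s

issued = datetime(2025, 1, 15, 8, 0)
# With a 24h lease, renewal kicks in after 18 hours by default
```

A workspace agent would run this check periodically and call Vault's lease-renew endpoint when it fires.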
PostgreSQL Dynamic Credentials
# Enable database secrets engine
vault secrets enable database
# Configure PostgreSQL connection
vault write database/config/postgres \
plugin_name=postgresql-database-plugin \
allowed_roles="dev-readonly,dev-readwrite" \
connection_url="postgresql://{{username}}:{{password}}@postgres.prod.svc:5432/mydb" \
username="vault-admin" \
password="SuperSecretPassword123"
# Create read-only role
vault write database/roles/dev-readonly \
db_name=postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' IN ROLE readonly;" \
default_ttl="24h" \
max_ttl="72h"
# Create read-write role (for trusted developers)
vault write database/roles/dev-readwrite \
db_name=postgres \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}' IN ROLE readwrite;" \
default_ttl="8h" \
max_ttl="24h"
# Developers retrieve credentials
vault read database/creds/dev-readonly
# Output:
# Key Value
# lease_id database/creds/dev-readonly/abc123
# lease_duration 24h
# username v-k8s-dev-readonly-xyz789
# password A1Bb2Cc3Dd4Ee5Ff
AWS Dynamic Credentials
# Enable AWS secrets engine
vault secrets enable aws
# Configure AWS credentials
vault write aws/config/root \
access_key=AKIAIOSFODNN7EXAMPLE \
secret_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
region=us-east-1
# Create role for S3 read-only access
vault write aws/roles/s3-readonly \
credential_type=iam_user \
policy_document=- <<EOF
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": "*"
  }]
}
EOF
Vault Agent Sidecar for Workspaces
Inject Secrets into Coder Workspaces
# Coder template with Vault Agent sidecar
resource "kubernetes_pod" "workspace" {
metadata {
name = "coder-${data.coder_workspace.me.name}"
namespace = "coder"
annotations = {
"vault.hashicorp.com/agent-inject" = "true"
"vault.hashicorp.com/role" = "coder-workspace"
# Inject database credentials
"vault.hashicorp.com/agent-inject-secret-db" = "database/creds/dev-readonly"
"vault.hashicorp.com/agent-inject-template-db" = <<-EOT
  {{- with secret "database/creds/dev-readonly" -}}
  export DB_USERNAME="{{ .Data.username }}"
  export DB_PASSWORD="{{ .Data.password }}"
  {{- end -}}
EOT
}
}
# spec block (containers, volumes) omitted for brevity
}
AWS Secrets Manager Integration
AWS Secrets Manager provides native integration with AWS services, automatic rotation, and fine-grained IAM policies. Best for organizations already heavily invested in AWS.
Create and Retrieve Secrets
# Create secret for database credentials
aws secretsmanager create-secret \
--name dev/postgres/credentials \
--description "PostgreSQL credentials for development" \
--secret-string '{
"username": "devuser",
"password": "SuperSecretPassword123",
"host": "postgres.prod.rds.amazonaws.com",
"port": 5432,
"database": "mydb"
}'
# Create secret for API keys
aws secretsmanager create-secret \
--name dev/stripe/api-key \
--description "Stripe API key for development" \
--secret-string "sk_test_51AbCdEfGhIjKlMnOp"
# Tag secrets for organization
aws secretsmanager tag-resource \
--secret-id dev/postgres/credentials \
--tags Key=Environment,Value=Development Key=Team,Value=Platform
# Retrieve secret in workspace startup script
SECRET=$(aws secretsmanager get-secret-value \
--secret-id dev/postgres/credentials \
--query SecretString --output text)
export DB_USERNAME=$(echo "$SECRET" | jq -r .username)
export DB_PASSWORD=$(echo "$SECRET" | jq -r .password)
export DB_HOST=$(echo "$SECRET" | jq -r .host)
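The same extraction can be done without jq. This Python sketch assumes the JSON shape created above; `secret_to_env` is an illustrative helper, not part of any SDK:

```python
import json

def secret_to_env(secret_string):
    """Map a Secrets Manager SecretString (JSON) to environment-style keys.

    Assumes the same JSON layout as the dev/postgres/credentials secret above.
    """
    data = json.loads(secret_string)
    return {
        "DB_USERNAME": data["username"],
        "DB_PASSWORD": data["password"],
        "DB_HOST": data["host"],
        "DB_PORT": str(data["port"]),
        "DB_NAME": data["database"],
    }

# Example with the structure shown earlier (values are placeholders)
env = secret_to_env(json.dumps({
    "username": "devuser",
    "password": "example-password",
    "host": "postgres.prod.rds.amazonaws.com",
    "port": 5432,
    "database": "mydb",
}))
```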
Enable Automatic Rotation
# Lambda function for rotation (rotation-lambda.py)
import boto3
import json
def lambda_handler(event, context):
    secret_id = event['SecretId']
    token = event['ClientRequestToken']
    step = event['Step']
    secrets_manager = boto3.client('secretsmanager')
    rds = boto3.client('rds')
    if step == "createSecret":
        # Generate a new password and stage it as AWSPENDING
        new_password = secrets_manager.get_random_password(
            PasswordLength=32,
            ExcludeCharacters='/@"\\'
        )['RandomPassword']
        secrets_manager.put_secret_value(
            SecretId=secret_id,
            ClientRequestToken=token,
            SecretString=json.dumps({'password': new_password}),
            VersionStages=['AWSPENDING']
        )
    elif step == "setSecret":
        # Each step runs in a separate invocation, so fetch the
        # pending password before applying it to RDS
        pending = json.loads(secrets_manager.get_secret_value(
            SecretId=secret_id,
            VersionId=token,
            VersionStage='AWSPENDING'
        )['SecretString'])
        rds.modify_db_instance(
            DBInstanceIdentifier='mydb-instance',
            MasterUserPassword=pending['password'],
            ApplyImmediately=True
        )
    elif step == "testSecret":
        # Verify the new credentials work
        # (connect to the database and run a test query)
        pass
    elif step == "finishSecret":
        # Promote AWSPENDING to AWSCURRENT
        metadata = secrets_manager.describe_secret(SecretId=secret_id)
        current_version = next(
            version for version, stages
            in metadata['VersionIdsToStages'].items()
            if 'AWSCURRENT' in stages
        )
        secrets_manager.update_secret_version_stage(
            SecretId=secret_id,
            VersionStage='AWSCURRENT',
            MoveToVersionId=token,
            RemoveFromVersionId=current_version
        )
# Enable automatic rotation (every 30 days)
aws secretsmanager rotate-secret \
--secret-id dev/postgres/credentials \
--rotation-lambda-arn arn:aws:lambda:us-east-1:123456789:function:RotateRDSPassword \
--rotation-rules AutomaticallyAfterDays=30
IAM Policy for Workspace Access
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ReadDevelopmentSecrets",
"Effect": "Allow",
"Action": [
"secretsmanager:GetSecretValue",
"secretsmanager:DescribeSecret"
],
"Resource": "arn:aws:secretsmanager:us-east-1:123456789:secret:dev/*"
},
{
"Sid": "PreventDeletion",
"Effect": "Deny",
"Action": [
"secretsmanager:DeleteSecret",
"secretsmanager:PutSecretValue"
],
"Resource": "*"
},
{
"Sid": "DecryptSecretsUsingKMS",
"Effect": "Allow",
"Action": "kms:Decrypt",
"Resource": "arn:aws:kms:us-east-1:123456789:key/abc-123",
"Condition": {
"StringEquals": {
"kms:ViaService": "secretsmanager.us-east-1.amazonaws.com"
}
}
}
]
}
# Attach policy to workspace IAM role
aws iam attach-role-policy \
--role-name CoderWorkspaceRole \
--policy-arn arn:aws:iam::123456789:policy/WorkspaceSecretsReadOnly
Azure Key Vault Integration
Azure Key Vault provides secrets, keys, and certificate management with Azure Active Directory integration and managed identities for secure, passwordless authentication.
Setup Key Vault and Managed Identity
# Create Key Vault
az keyvault create \
--name company-dev-vault \
--resource-group dev-resources \
--location eastus \
--enable-rbac-authorization true
# Create managed identity for AKS workspaces
az identity create \
--name coder-workspace-identity \
--resource-group dev-resources
IDENTITY_ID=$(az identity show \
--name coder-workspace-identity \
--resource-group dev-resources \
--query principalId -o tsv)
# Grant read access to secrets
az role assignment create \
--role "Key Vault Secrets User" \
--assignee $IDENTITY_ID \
--scope /subscriptions/abc-123/resourceGroups/dev-resources/providers/Microsoft.KeyVault/vaults/company-dev-vault
# Store secrets
az keyvault secret set \
--vault-name company-dev-vault \
--name "PostgreSQL-ConnectionString" \
--value "Server=postgres.database.azure.com;Database=mydb;User Id=devuser;Password=SuperSecret123;"
az keyvault secret set \
--vault-name company-dev-vault \
--name "Stripe-ApiKey" \
--value "sk_test_51AbCdEfGhIjKlMnOp"
Retrieve Secrets Using Azure SDK
# Install Azure SDK
pip install azure-identity azure-keyvault-secrets
# Python code to retrieve secrets using managed identity
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
# Authenticate using managed identity (no credentials needed!)
credential = DefaultAzureCredential()
vault_url = "https://company-dev-vault.vault.azure.net"
client = SecretClient(vault_url=vault_url, credential=credential)
# Retrieve secrets
db_connection = client.get_secret("PostgreSQL-ConnectionString").value
stripe_key = client.get_secret("Stripe-ApiKey").value
print(f"Database: {db_connection}")
print(f"Stripe Key: {stripe_key[:10]}...")
# Bash script to retrieve secrets via Azure CLI
az login --identity
DB_SECRET=$(az keyvault secret show \
--vault-name company-dev-vault \
--name PostgreSQL-ConnectionString \
--query value -o tsv)
export DATABASE_URL=$DB_SECRET
Access Policies and Network Restrictions
# Restrict Key Vault access to specific virtual network
az keyvault network-rule add \
--vault-name company-dev-vault \
--subnet /subscriptions/abc-123/resourceGroups/dev-resources/providers/Microsoft.Network/virtualNetworks/dev-vnet/subnets/aks-subnet
# Disable public network access
az keyvault update \
--name company-dev-vault \
--public-network-access Disabled
# Enable Key Vault firewall
az keyvault network-rule add \
--vault-name company-dev-vault \
--ip-address 203.0.113.0/24 # Office IP range
# Enable audit logging to Log Analytics
az monitor diagnostic-settings create \
--name KeyVaultAuditLogs \
--resource /subscriptions/abc-123/resourceGroups/dev-resources/providers/Microsoft.KeyVault/vaults/company-dev-vault \
--logs '[{"category": "AuditEvent", "enabled": true}]' \
--workspace /subscriptions/abc-123/resourceGroups/monitoring/providers/Microsoft.OperationalInsights/workspaces/company-logs
GCP Secret Manager Integration
Google Cloud Secret Manager provides automatic encryption, versioning, and IAM-based access control with native integration across GCP services.
Create Secrets with Versioning
# Enable Secret Manager API
gcloud services enable secretmanager.googleapis.com
# Create secret
echo -n "postgres://devuser:<password>@<db-host>:5432/mydb" | \
gcloud secrets create postgres-connection-string \
--replication-policy="automatic" \
--data-file=-
# Add labels for organization
gcloud secrets update postgres-connection-string \
--update-labels=environment=development,team=platform
# Add a new version to an existing secret
echo -n "sk_test_51AbCdEfGhIjKlMnOp" | \
gcloud secrets versions add stripe-api-key \
--data-file=-
# List all versions
gcloud secrets versions list postgres-connection-string
# Disable old version
gcloud secrets versions disable 1 --secret=postgres-connection-string
Access Secrets in Workspaces
# Retrieve secret via gcloud CLI
DB_URL=$(gcloud secrets versions access latest \
--secret=postgres-connection-string)
export DATABASE_URL=$DB_URL
# Python code to retrieve secrets
from google.cloud import secretmanager

def access_secret(project_id, secret_id, version_id="latest"):
    client = secretmanager.SecretManagerServiceClient()
    name = f"projects/{project_id}/secrets/{secret_id}/versions/{version_id}"
    response = client.access_secret_version(request={"name": name})
    return response.payload.data.decode('UTF-8')
# Retrieve secrets using Workload Identity (no service account key needed!)
project_id = "my-gcp-project"
db_connection = access_secret(project_id, "postgres-connection-string")
stripe_key = access_secret(project_id, "stripe-api-key")
print(f"Database URL: {db_connection}")
print(f"Stripe Key: {stripe_key[:10]}...")
# Node.js example
const {SecretManagerServiceClient} = require('@google-cloud/secret-manager');
const client = new SecretManagerServiceClient();
async function getSecret(name) {
  const [version] = await client.accessSecretVersion({
    name: `projects/my-gcp-project/secrets/${name}/versions/latest`,
  });
  return version.payload.data.toString('utf8');
}

// await is only valid inside an async function in CommonJS
(async () => {
  const dbUrl = await getSecret('postgres-connection-string');
  const stripeKey = await getSecret('stripe-api-key');
})();
IAM Permissions for Workspaces
# Grant secret access to GKE service account
gcloud secrets add-iam-policy-binding postgres-connection-string \
--member="serviceAccount:coder-workspace@my-gcp-project.iam.gserviceaccount.com" \
--role="roles/secretmanager.secretAccessor"
# Grant access to multiple secrets using wildcards (custom role)
gcloud iam roles create workspaceSecretsReader \
--project=my-gcp-project \
--title="Workspace Secrets Reader" \
--description="Read access to development secrets" \
--permissions=secretmanager.versions.access,secretmanager.secrets.get \
--stage=GA
# Bind custom role to service account
gcloud projects add-iam-policy-binding my-gcp-project \
--member="serviceAccount:coder-workspace@my-gcp-project.iam.gserviceaccount.com" \
--role="projects/my-gcp-project/roles/workspaceSecretsReader" \
--condition='resource.name.startsWith("projects/my-gcp-project/secrets/dev-")'
# Enable audit logging for secret access
gcloud projects set-iam-policy my-gcp-project policy.yaml
# policy.yaml
auditConfigs:
- auditLogConfigs:
  - logType: DATA_READ
  - logType: DATA_WRITE
  service: secretmanager.googleapis.com
Secrets Rotation Policies
Automatic secrets rotation reduces the window of opportunity for compromised credentials. Implement rotation for all secrets with TTL policies.
Rotation Frequency
- Production DB Passwords: 90 days
- Development DB Passwords: 30 days
- API Keys: 180 days
- SSH Keys: 365 days
- TLS Certificates: Before expiration
- Cloud Provider Keys: 60 days
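These intervals can be encoded as a small policy table so tooling can flag overdue secrets. A sketch with illustrative names (`ROTATION_DAYS`, `next_rotation`); the day counts come from the list above:

```python
from datetime import date, timedelta

# Rotation intervals from the policy above (days)
ROTATION_DAYS = {
    "prod-db-password": 90,
    "dev-db-password": 30,
    "api-key": 180,
    "ssh-key": 365,
    "cloud-provider-key": 60,
}

def next_rotation(secret_type, last_rotated):
    """Return the date a secret of the given type is next due for rotation."""
    return last_rotated + timedelta(days=ROTATION_DAYS[secret_type])

def is_overdue(secret_type, last_rotated, today):
    """True if the secret should already have been rotated."""
    return today >= next_rotation(secret_type, last_rotated)
```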
Automation Strategy
- Vault Auto-Rotation: Built-in lease renewal
- AWS Lambda: Scheduled rotation functions
- Kubernetes CronJob: Periodic rotation tasks
- Zero-Downtime: Dual-credential overlap period
- Rollback: Keep previous version for 24h
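The dual-credential overlap item deserves a concrete sketch: after rotation, the previous credential stays valid for a grace window so in-flight jobs do not fail. The function name and 24-hour window below are illustrative assumptions:

```python
from datetime import datetime, timedelta

OVERLAP = timedelta(hours=24)  # keep the previous credential valid this long

def active_credentials(rotated_at, now, old_cred, new_cred):
    """During the overlap window both credentials work; new clients pick up
    the new one immediately, while jobs holding the old one still succeed."""
    if now < rotated_at + OVERLAP:
        return {"use": new_cred, "still_valid": [new_cred, old_cred]}
    return {"use": new_cred, "still_valid": [new_cred]}
```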
Notification Workflows
- Slack Alert: Notify on successful rotation
- PagerDuty: Alert on rotation failure
- Email: 7-day expiration warnings
- Audit Log: CloudTrail/Log Analytics
- Metrics: Track rotation success rate
Kubernetes CronJob for Secret Rotation
apiVersion: batch/v1
kind: CronJob
metadata:
  name: rotate-database-secrets
  namespace: platform
spec:
  schedule: "0 2 * * 0"  # Every Sunday at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: secret-rotator
          containers:
          - name: rotator
            image: company/secret-rotator:latest
            env:
            - name: VAULT_ADDR
              value: "https://vault.company.com"
            - name: SLACK_WEBHOOK
              valueFrom:
                secretKeyRef:
                  name: slack-webhook
                  key: url
            command:
            - /bin/bash
            - -c
            - |
              # Fetch the current password and generate a replacement
              OLD_PASSWORD=$(vault kv get -field=password secret/prod/postgres)
              NEW_PASSWORD=$(openssl rand -base64 24)
              # Update the password in the database
              psql "postgresql://admin:$OLD_PASSWORD@postgres:5432/mydb" \
                -c "ALTER USER admin WITH PASSWORD '$NEW_PASSWORD';"
              # Store the new password in Vault
              vault kv put secret/prod/postgres password=$NEW_PASSWORD
              # Notify the team
              curl -X POST "$SLACK_WEBHOOK" -d '{"text": "Database password rotated successfully"}'
          restartPolicy: OnFailure
Temporary Credentials and Just-In-Time Access
Instead of granting permanent access, issue short-lived credentials that expire automatically. This implements the principle of least privilege with time-bound access.
Short-Lived AWS Credentials via STS
# Assume role for 1-hour session
aws sts assume-role \
--role-arn arn:aws:iam::123456789:role/DevS3Access \
--role-session-name developer-workspace-$(date +%s) \
--duration-seconds 3600
# Export temporary credentials
export AWS_ACCESS_KEY_ID=ASIATEMP...
export AWS_SECRET_ACCESS_KEY=tempSecret...
export AWS_SESSION_TOKEN=FwoGZXIv...
# Credentials automatically expire after 1 hour
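Rather than copying values by hand, a startup script can parse the assume-role response. This sketch assumes the JSON shape the AWS CLI returns; `sts_to_env` is an illustrative helper:

```python
import json

def sts_to_env(assume_role_json):
    """Convert an `aws sts assume-role` JSON response into the three
    environment variables exported above."""
    creds = json.loads(assume_role_json)["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

# Placeholder response in the documented shape
response = json.dumps({"Credentials": {
    "AccessKeyId": "ASIAEXAMPLE",
    "SecretAccessKey": "tempSecret",
    "SessionToken": "FwoGZXIvExample",
    "Expiration": "2025-01-15T15:32:10Z",
}})
env = sts_to_env(response)
```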
Time-Bound Database Access
-- Create a PostgreSQL user that expires at a fixed timestamp
-- (VALID UNTIL takes a literal; compute "now + 8 hours" client-side)
CREATE ROLE temp_dev_user
WITH LOGIN PASSWORD 'TempPass123'
VALID UNTIL '2025-01-15 22:00:00+00'
IN ROLE readonly;
-- The role can no longer log in after that time;
-- no manual cleanup is required
Best Practice: Workspace-Scoped Credentials
Each developer workspace should receive unique, short-lived credentials that expire when the workspace stops. This ensures:
- No shared credentials between developers
- Automatic cleanup when workspaces are destroyed
- Audit trail linking actions to specific workspaces
- Limited blast radius if credentials are compromised
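A sketch of minting such a workspace-scoped credential. In practice Vault's database secrets engine does this for you, so the helper and naming scheme below are purely illustrative:

```python
import secrets
import string

def workspace_credential(workspace_id, ttl_hours=8):
    """Issue a unique, short-lived username/password pair scoped to one
    workspace. The v-ws- prefix and TTL are illustrative conventions."""
    suffix = secrets.token_hex(4)  # disambiguates restarts of the same workspace
    password = ''.join(secrets.choice(string.ascii_letters + string.digits)
                       for _ in range(32))
    return {
        "username": f"v-ws-{workspace_id}-{suffix}",
        "password": password,
        "ttl_hours": ttl_hours,
    }

cred = workspace_credential("alice-api")
```

Because the username embeds the workspace identifier, database audit logs tie every query back to a specific workspace.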
Preventing Credential Leakage
Even with secrets management in place, credentials can still leak through Git commits, logs, and error messages. Implement multiple layers of defense.
Log Scrubbing Configuration
# Fluent Bit configuration for log redaction
# Fluent Bit log redaction: one modify filter per secret pattern
# (a filter's conditions gate all of its actions, so each rule gets its own block)

# Redact AWS keys
[FILTER]
    Name      modify
    Match     *
    Condition Key_value_matches log (?i)(AKIA[0-9A-Z]{16})
    Set       log [REDACTED_AWS_KEY]

# Redact passwords from database URLs
[FILTER]
    Name      modify
    Match     *
    Condition Key_value_matches log (?i)://.+:(.+)@.+
    Set       log ://username:[REDACTED]@host:port

# Redact API keys
[FILTER]
    Name      modify
    Match     *
    Condition Key_value_matches log (?i)(sk_live_[a-zA-Z0-9]{24,})
    Set       log [REDACTED_API_KEY]

# Redact JWT tokens
[FILTER]
    Name      modify
    Match     *
    Condition Key_value_matches log (?i)(eyJ[a-zA-Z0-9_-]{10,})
    Set       log [REDACTED_JWT_TOKEN]
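The same redaction rules can be applied in-process before logs are ever written. This Python sketch mirrors the patterns above; `scrub` is an illustrative helper, and unlike the Fluent Bit `Set` action it redacts only the matched substring:

```python
import re

# Same secret patterns as the Fluent Bit filters above
REDACTIONS = [
    (re.compile(r'AKIA[0-9A-Z]{16}'), '[REDACTED_AWS_KEY]'),
    (re.compile(r'(://[^:/@\s]+:)[^@\s]+(@)'), r'\1[REDACTED]\2'),
    (re.compile(r'sk_live_[a-zA-Z0-9]{24,}'), '[REDACTED_API_KEY]'),
    (re.compile(r'eyJ[a-zA-Z0-9_-]{10,}'), '[REDACTED_JWT_TOKEN]'),
]

def scrub(line):
    """Redact known secret patterns from a log line before it is emitted."""
    for pattern, replacement in REDACTIONS:
        line = pattern.sub(replacement, line)
    return line
```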
Git Pre-Commit Hooks
#!/bin/bash
# .git/hooks/pre-commit
# Check for AWS keys
if git diff --cached | grep -E 'AKIA[0-9A-Z]{16}'; then
  echo "ERROR: AWS Access Key detected!"
  exit 1
fi
# Check for private keys (-e so the leading dashes aren't parsed as options)
if git diff --cached | grep -E -e '-----BEGIN.*PRIVATE KEY-----'; then
  echo "ERROR: Private key detected!"
  exit 1
fi
# Check for common secret patterns
if git diff --cached | grep -iE "(password|secret|token)\s*=\s*['\"].+['\"]"; then
  echo "WARNING: Possible hardcoded secret detected"
  echo "Use environment variables instead"
  exit 1
fi
# Use git-secrets for comprehensive scanning
git secrets --scan || exit 1
exit 0
Secrets Scanning Tools
# TruffleHog - Scan Git history for secrets
docker run --rm -v /path/to/repo:/repo \
trufflesecurity/trufflehog:latest \
filesystem --directory=/repo --json
# Gitleaks - Fast secrets detection
gitleaks detect --source=/path/to/repo --verbose
# Detect-Secrets - Baseline approach
detect-secrets scan --baseline .secrets.baseline
# Integrate with CI/CD (GitHub Actions)
name: Secret Scanning
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
      with:
        fetch-depth: 0  # Full history for deep scan
    - name: Run TruffleHog
      uses: trufflesecurity/trufflehog@main
      with:
        path: ./
        base: ${{ github.event.repository.default_branch }}
        head: HEAD
Emergency Response: Secret Leaked to Git
- Revoke immediately: Rotate the exposed credential in your secrets manager
- Remove from Git: Use git filter-branch or BFG Repo-Cleaner to purge the secret from history
- Force push: Rewrite repository history (coordinate with your team!)
- Audit access: Check logs for unauthorized use of the compromised credential
- Notify security team: Follow incident response procedures
- Update CI/CD: Ensure pipelines fetch the new credentials
Secrets Audit and Compliance
Comprehensive audit logging proves to auditors that secrets are properly managed and accessed only by authorized personnel.
Vault Audit Logging
# Enable file audit backend
vault audit enable file file_path=/vault/logs/audit.log
# Enable syslog audit backend
vault audit enable syslog
# Query audit logs for secret access
cat /vault/logs/audit.log | jq '
  select(.type == "response"
    and (.request.path | contains("secret/data"))
  ) | {
    time: .time,
    user: .auth.display_name,
    path: .request.path,
    operation: .request.operation
  }'
# Example output:
# {
# "time": "2025-01-15T14:32:10Z",
# "user": "dev@company.com",
# "path": "secret/data/dev/postgres",
# "operation": "read"
# }
CloudTrail for AWS Secrets Manager
# Query CloudTrail for secret access
aws cloudtrail lookup-events \
--lookup-attributes \
AttributeKey=ResourceName,AttributeValue=dev/postgres/credentials \
--start-time 2025-01-01 \
--max-results 100
# Athena query for comprehensive analysis
SELECT
useridentity.principalid as user,
eventtime,
eventname,
requestparameters.secretId as secret,
sourceipaddress
FROM cloudtrail_logs
WHERE eventsource = 'secretsmanager.amazonaws.com'
AND eventname IN ('GetSecretValue', 'PutSecretValue')
ORDER BY eventtime DESC
LIMIT 100;
Compliance Evidence for Auditors
HITRUST CSF Controls
- 10.k: Encryption of credentials at rest (KMS)
- 09.aa: Access control to secrets (IAM policies)
- 09.ab: User access management (SSO integration)
- 09.ac: User password management (rotation policies)
SOC 2 Trust Service Criteria
- CC6.1: Logical access controls
- CC6.7: Transmission of data protection
- CC7.2: System monitoring
- A1.2: Confidentiality of stored data
Platform-Specific Secrets Integration
Coder Parameters
# Terraform template
data "coder_parameter" "db_password" {
name = "database_password"
display_name = "Database Password"
description = "PostgreSQL password"
type = "string"
sensitive = true
mutable = true
}
resource "kubernetes_secret" "workspace" {
metadata {
name = "workspace-secrets"
}
data = {
DB_PASSWORD = data.coder_parameter.db_password.value
}
}
Gitpod Environment Variables
# Set secrets via CLI
gp env set DB_PASSWORD=SuperSecret123
# Scope to specific repository
gp env set -s GITHUB_TOKEN=ghp_abc123
# .gitpod.yml - retrieve from Vault
tasks:
- before: |
    export VAULT_ADDR=https://vault.company.com
    export VAULT_TOKEN=$(gp credential-helper get vault)
    export DB_PASSWORD=$(vault kv get -field=password secret/db)
Codespaces Secrets
# Create repository secret via GitHub CLI
gh secret set DB_PASSWORD \
--body "SuperSecret123" \
--repo myorg/myrepo
# Create organization secret
gh secret set STRIPE_API_KEY \
--body "sk_live_abc123" \
--org myorg \
--visibility selected \
--repos myorg/repo1,myorg/repo2
# Access in devcontainer.json
{
"remoteEnv": {
"DB_PASSWORD": "${localEnv:DB_PASSWORD}"
}
}
Secrets Management Best Practices Checklist
Foundation
- Deploy a centralized secrets manager (Vault, AWS, Azure, or GCP)
- Enable encryption at rest using KMS or HSM
- Configure TLS for all secrets API communication
- Implement role-based access control (RBAC)
- Enable comprehensive audit logging
Dynamic Secrets
- Configure dynamic database credentials with TTL
- Generate unique credentials per workspace
- Implement automatic credential revocation on workspace stop
- Use cloud provider temporary credentials (STS, Workload Identity)
- Set appropriate TTLs for different secret types
Rotation & Lifecycle
- Automate secret rotation for all long-lived credentials
- Test the rotation process in staging before production
- Implement dual-credential overlap during rotation
- Alert on rotation failures via PagerDuty/Slack
- Track rotation success rate in a monitoring dashboard
Prevention & Detection
- Install pre-commit hooks to block hardcoded secrets
- Scan Git history for leaked credentials (TruffleHog, Gitleaks)
- Redact secrets from application logs and error messages
- Implement network restrictions for secrets API access
- Create an incident response playbook for leaked secrets
Compliance & Audit
- Enable audit logging for all secrets access
- Stream logs to a SIEM (Splunk, DataDog, ELK)
- Document secrets management policies for auditors
- Maintain evidence of encryption (FIPS 140-2 compliance)
- Run regular access reviews (quarterly at minimum)
Workspace Integration
- Inject secrets via Vault Agent sidecar or init containers
- Use platform-native secret stores (Coder params, Gitpod env)
- Authenticate workspaces using service accounts (not user credentials)
- Never store secrets in workspace images or volumes
- Revoke all secrets when a workspace is terminated
Ready to Implement Secrets Management?
Securing credentials is one of the most impactful security improvements you can make. Start with dynamic database credentials and expand to all secrets across your infrastructure.