Pilot Program Design

Structure your CDE pilot for success with team selection criteria, success metrics, 90-day evaluation scorecards, and go/no-go decision frameworks.

Pilot Team Selection Criteria

Choose teams that will set your pilot up for success

Ideal Pilot Team Characteristics

  • Enthusiastic team lead

    Manager willing to champion the change and handle resistance

  • Modern tech stack

    Teams using containers, VS Code, or standard tooling (not legacy)

  • Team size 5-15 developers

    Large enough for valid data, small enough to manage closely

  • Stable roadmap

    Not in the middle of a critical deadline or major refactoring

  • Recent or planned hires

    Can demonstrate onboarding improvements immediately

Avoid for Initial Pilot

  • Teams under deadline pressure

    Any friction will be blamed on the CDE rather than given a fair evaluation

  • Specialized hardware needs

    GPU workloads, embedded development, iOS builds (for first pilot)

  • Known skeptics/blockers

    Don't start with teams that are vocally opposed; win them over later with demonstrated success

  • Legacy monolith teams

    Complex local dependencies increase pilot complexity

  • High-latency regions

    Teams far from your cloud region will have a poor experience

Team Selection Scorecard

Criteria                          | Weight | Team A | Team B | Team C
Manager enthusiasm (1-5)          | 3x     |        |        |
Tech stack compatibility (1-5)    | 2x     |        |        |
Roadmap stability (1-5)           | 2x     |        |        |
Team size (5-15 = 5, else lower)  | 1x     |        |        |
Onboarding needs (1-5)            | 2x     |        |        |
Weighted Total (max 50)           |        | -      | -      | -
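The weighted total can be computed by multiplying each 1-5 criterion score by its weight and summing. A minimal sketch follows; the weights come from the scorecard above, but the team scores are hypothetical examples, not data from this guide.

```python
# Team selection scorecard: weight each 1-5 criterion score and sum.
# Weights are taken from the scorecard; the team scores are hypothetical.
WEIGHTS = {
    "manager_enthusiasm": 3,
    "tech_stack_compatibility": 2,
    "roadmap_stability": 2,
    "team_size": 1,
    "onboarding_needs": 2,
}

def weighted_total(scores: dict[str, int]) -> int:
    """Sum of score * weight for each criterion (max 50 when every score is 5)."""
    return sum(WEIGHTS[criterion] * score for criterion, score in scores.items())

team_a = {"manager_enthusiasm": 5, "tech_stack_compatibility": 4,
          "roadmap_stability": 3, "team_size": 5, "onboarding_needs": 4}
print(weighted_total(team_a))  # 3*5 + 2*4 + 2*3 + 1*5 + 2*4 = 42
```

With the weights above, the maximum possible total is (3 + 2 + 2 + 1 + 2) × 5 = 50, matching the "max 50" in the scorecard.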

Pilot Success Metrics

Define what success looks like before you start

Productivity

  • Onboarding time < 4 hours
  • Workspace startup < 5 min
  • Env issues/week < 2

Experience

  • Developer NPS > +30
  • Would recommend > 70%
  • Satisfaction score > 4.0/5

Reliability

  • Platform uptime > 99.5%
  • P95 latency < 100ms
  • Support tickets < 5/week

Adoption

  • Daily active users > 80%
  • Local dev usage < 10%
  • Return to local < 5%
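The "Developer NPS > +30" target can be checked against pulse-survey data using the standard Net Promoter Score formula (percentage of promoters scoring 9-10 minus percentage of detractors scoring 0-6). A small sketch, with hypothetical survey responses:

```python
# Developer NPS from 0-10 "would you recommend" survey responses.
# Standard NPS definition: % promoters (9-10) minus % detractors (0-6).
def nps(responses: list[int]) -> float:
    promoters = sum(1 for r in responses if r >= 9)
    detractors = sum(1 for r in responses if r <= 6)
    return 100.0 * (promoters - detractors) / len(responses)

survey = [10, 9, 9, 8, 7, 9, 10, 6, 8, 9]  # hypothetical pilot responses
score = nps(survey)
print(score)        # 6 promoters, 1 detractor out of 10 -> 50.0
print(score > 30)   # meets the "NPS > +30" experience target -> True
```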

90-Day Evaluation Scorecard

Weekly checkpoints for pilot evaluation

Phase        | Week | Milestone               | Success Criteria                 | Status
Setup        | 1    | Infrastructure deployed | Platform accessible, SSO working | -
Setup        | 2    | Training complete       | 100% pilot team attended         | -
Active Pilot | 3    | First sprint on CDE     | No blockers, sprint completed    | -
Active Pilot | 4    | First pulse survey      | Satisfaction > 3.5/5             | -
Active Pilot | 5-6  | Steady state            | DAU > 70%, < 3 support tickets   | -
Active Pilot | 7-8  | New hire onboarding     | Onboarding < 4 hours             | -
Active Pilot | 9-10 | Edge case testing       | Complex workflows validated      | -
Active Pilot | 11   | Final survey            | NPS > +30, recommend > 70%       | -
Decision     | 12   | Data analysis           | All metrics compiled             | -
Decision     | 13   | Go/No-Go decision       | Executive presentation           | -

Go/No-Go Decision Framework

Objective criteria for the expansion decision

GO

All must be true

  • Developer satisfaction > 4.0/5
  • Platform uptime > 99.5%
  • No critical blockers unresolved
  • DAU > 80% of pilot team
  • Onboarding < 4 hours achieved
  • Manager recommends expansion

CONDITIONAL

Extend pilot 30 days

  • Satisfaction 3.5-4.0 (trending up)
  • Uptime 99-99.5% (fixable issues)
  • 1-2 blockers with clear fix path
  • DAU 60-80% (adoption growing)
  • Mixed manager feedback

NO-GO

Any one triggers a stop

  • Developer satisfaction < 3.5/5
  • Platform uptime < 99%
  • Security incident occurred
  • DAU < 60% (low adoption)
  • > 3 critical unfixed blockers
  • Manager recommends rollback
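The three outcomes can be expressed as a short decision function over the thresholds above. This is a simplified sketch: the `PilotResults` dataclass, its field names, and the `manager` encoding ("expand" / "mixed" / "rollback") are illustrative assumptions, not part of the guide.

```python
# Go/no-go decision sketch implementing the framework's thresholds.
from dataclasses import dataclass

@dataclass
class PilotResults:
    satisfaction: float      # average developer satisfaction, 1-5 scale
    uptime_pct: float        # platform uptime over the pilot period
    dau_pct: float           # daily active users as % of pilot team
    onboarding_hours: float  # measured new-hire onboarding time
    critical_blockers: int   # unresolved critical blockers
    security_incident: bool  # any security incident during the pilot
    manager: str             # "expand", "mixed", or "rollback"

def decide(r: PilotResults) -> str:
    # NO-GO checks run first: any single one stops the pilot.
    if (r.satisfaction < 3.5 or r.uptime_pct < 99.0 or r.dau_pct < 60
            or r.critical_blockers > 3 or r.security_incident
            or r.manager == "rollback"):
        return "NO-GO"
    # GO requires every criterion to pass.
    if (r.satisfaction > 4.0 and r.uptime_pct > 99.5 and r.dau_pct > 80
            and r.onboarding_hours < 4 and r.critical_blockers == 0
            and r.manager == "expand"):
        return "GO"
    # Everything in between is the middle band: extend the pilot 30 days.
    return "CONDITIONAL"

print(decide(PilotResults(4.3, 99.7, 85, 3.5, 0, False, "expand")))  # GO
```

Ordering the NO-GO checks first mirrors the framework's intent: any single stop condition overrides otherwise healthy metrics.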