DORA Metrics

DevOps Research and Assessment metrics for performance evaluation

GitPulse provides DORA (DevOps Research and Assessment) metrics at the project level, aggregating data from all repositories within a project. These metrics help you evaluate your team's DevOps performance across multiple codebases and understand the overall health of your project portfolio.

Overview

DORA metrics are four key performance indicators that measure the effectiveness of your DevOps practices:

  1. Deployment Frequency - How often you deploy to production across all project repositories
  2. Lead Time for Changes - How long it takes to go from code commit to production deployment
  3. Change Failure Rate - Percentage of deployments causing a failure in production
  4. Mean Time to Recovery (MTTR) - How long it takes to restore service after a production failure

GitPulse calculates all four metrics at the project level, reporting Lead Time for Changes as two variants (LT1: first commit to deployment; LT2: merge to deployment) and adding a supplementary Reliability metric, for a comprehensive view of your DevOps performance.

Performance Grades

Each metric is classified into one of four performance levels based on industry benchmarks:

🟢 Elite

  • Color: Green
  • Description: Top-tier performance, exceptional DevOps practices
  • Impact: Organizations at this level typically have the highest business outcomes

🔵 High

  • Color: Blue
  • Description: Above-average performance, good DevOps practices
  • Impact: Strong business outcomes, competitive advantage

🟡 Medium

  • Color: Yellow
  • Description: Average performance, room for improvement
  • Impact: Standard business outcomes, potential for optimization

🔴 Low

  • Color: Red
  • Description: Below-average performance, needs attention
  • Impact: May indicate bottlenecks or process issues

Metric Details

1. Deployment Frequency

What it measures: How often your team successfully deploys code to production across all repositories in the project.

Calculation: Number of production deployments across all project repositories in the selected period ÷ number of days in the period

Performance Benchmarks:

  • 🟢 Elite: ≥ 1.0 deployments/day (several times per day)
  • 🔵 High: 0.14 - 0.99 deployments/day (once per week to once per day)
  • 🟡 Medium: 0.03 - 0.13 deployments/day (once per month to once per week)
  • 🔴 Low: < 0.03 deployments/day (less than once per month)

Project-level aggregation

This metric aggregates deployment data from all repositories within the project, giving you a comprehensive view of your deployment velocity across your entire project portfolio.
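
As an illustration, here is a minimal Python sketch of the calculation above, assuming per-repository deployment counts have already been collected; the function names and data shapes are illustrative, not GitPulse's actual API:

```python
def deployment_frequency(deploys_per_repo: dict[str, int], period_days: int) -> float:
    """Project-level deployments per day: sum across repos / days in the period."""
    return sum(deploys_per_repo.values()) / period_days

def grade_deployment_frequency(per_day: float) -> str:
    """Map a deployments/day value onto the benchmark bands above."""
    if per_day >= 1.0:
        return "Elite"
    if per_day >= 0.14:
        return "High"
    if per_day >= 0.03:
        return "Medium"
    return "Low"

# Example: three repositories over a 30-day window.
freq = deployment_frequency({"api": 18, "web": 9, "worker": 4}, period_days=30)
print(f"{freq:.2f}/day -> {grade_deployment_frequency(freq)}")  # 1.03/day -> Elite
```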

2. Lead Time for Changes - LT1

What it measures: Time from the first commit to successful production deployment across all project repositories.

Calculation: Median time between the earliest commit in a pull request and the deployment that includes that PR, aggregated across all repositories.

Performance Benchmarks:

  • 🟢 Elite: < 0.042 days (< 1 hour)
  • 🔵 High: 0.042 - 1.0 days (1 hour to 1 day)
  • 🟡 Medium: 1.0 - 7.0 days (1 day to 1 week)
  • 🔴 Low: > 7.0 days (> 1 week)

Cross-repository analysis

LT1 at the project level helps identify which repositories or teams might be experiencing bottlenecks in their development process.
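
A sketch of the LT1 calculation, assuming (first commit, deployment) timestamp pairs have been pooled from every repository in the project; the helper below is hypothetical, not GitPulse's API:

```python
from datetime import datetime, timedelta
from statistics import median

def lt1_days(pairs: list[tuple[datetime, datetime]]) -> float:
    """Median days between a PR's earliest commit and the deploy containing it."""
    return median((deployed - first_commit) / timedelta(days=1)
                  for first_commit, deployed in pairs)

pairs = [
    (datetime(2024, 5, 1, 9, 0), datetime(2024, 5, 1, 15, 0)),   # 6 h  = 0.250 d
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 3, 10, 0)),  # 24 h = 1.000 d
    (datetime(2024, 5, 4, 8, 0), datetime(2024, 5, 4, 8, 30)),   # 30 m ≈ 0.021 d
]
print(f"LT1 = {lt1_days(pairs):.3f} days")  # 0.250 days -> High (1 hour to 1 day)
```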

3. Lead Time for Changes - LT2

What it measures: Time from pull request merge to successful production deployment across all project repositories.

Calculation: Median time between PR merge and the deployment that includes that PR, aggregated across all repositories.

Performance Benchmarks:

  • 🟢 Elite: < 0.042 days (< 1 hour)
  • 🔵 High: 0.042 - 0.5 days (1 hour to 12 hours)
  • 🟡 Medium: 0.5 - 2.0 days (12 hours to 2 days)
  • 🔴 Low: > 2.0 days (> 2 days)

Deployment pipeline efficiency

LT2 at the project level measures the overall efficiency of your deployment pipelines across all repositories.
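
LT2 uses the same median computation as LT1; only the starting timestamp changes, from the first commit to the PR's merge time. A sketch of grading against the thresholds above (illustrative helper, not GitPulse's API):

```python
def grade_lt2(days: float) -> str:
    """Map an LT2 value in days onto the benchmark bands above."""
    if days < 0.042:   # under one hour
        return "Elite"
    if days <= 0.5:    # up to 12 hours
        return "High"
    if days <= 2.0:    # up to 2 days
        return "Medium"
    return "Low"

# Reusing lt1_days() with (merged_at, deployed_at) pairs yields LT2.
```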

4. Change Failure Rate (CFR)

What it measures: Percentage of production deployments that caused incidents within 24 hours of deployment.

Calculation: (Number of deployments causing incidents within 24h / Total production deployments) × 100

Performance Benchmarks:

  • 🟢 Elite: 0-15% failure rate
  • 🔵 High: 16-30% failure rate
  • 🟡 Medium: 31-45% failure rate
  • 🔴 Low: > 45% failure rate

Incident correlation

GitPulse uses a 24-hour window heuristic to correlate incidents with deployments, helping identify which deployments may have introduced issues.
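
A sketch of this heuristic, assuming flat lists of deployment and incident-detection timestamps pooled across the project; the real matcher inside GitPulse may apply additional constraints:

```python
from datetime import datetime, timedelta

CORRELATION_WINDOW = timedelta(hours=24)

def change_failure_rate(deploys: list[datetime], incidents: list[datetime]) -> float:
    """Percent of deployments with an incident detected within 24 h of deploying."""
    if not deploys:
        return 0.0
    failed = sum(
        any(deploy <= detected <= deploy + CORRELATION_WINDOW for detected in incidents)
        for deploy in deploys
    )
    return 100.0 * failed / len(deploys)
```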

5. Mean Time to Recovery (MTTR)

What it measures: Average time to restore service after a production incident across all project repositories.

Calculation: Average time between incident detection and resolution, measured in minutes.

Performance Benchmarks:

  • 🟢 Elite: < 15 minutes
  • 🔵 High: 15 minutes to 1 hour
  • 🟡 Medium: 1 hour to 4 hours
  • 🔴 Low: > 4 hours

Recovery efficiency

MTTR at the project level indicates your overall ability to respond to and resolve incidents quickly across your entire project portfolio.
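
A sketch of the MTTR calculation, assuming (detected, resolved) timestamp pairs have been pooled from incidents across the project (hypothetical helper, not GitPulse's API):

```python
from datetime import datetime, timedelta

def mttr_minutes(incidents: list[tuple[datetime, datetime]]) -> float:
    """Average minutes from incident detection to resolution."""
    durations = [(resolved - detected) / timedelta(minutes=1)
                 for detected, resolved in incidents]
    return sum(durations) / len(durations)

# Two incidents resolved in 10 and 50 minutes average to 30 min -> High band.
```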

6. Reliability

What it measures: Incident frequency normalized by time period.

Calculation: Total incidents in the period ÷ number of days in the period

Performance Benchmarks:

  • 🟢 Elite: < 0.1 incidents per day
  • 🔵 High: 0.1 - 0.3 incidents per day
  • 🟡 Medium: 0.3 - 0.5 incidents per day
  • 🔴 Low: > 0.5 incidents per day

System stability

Reliability measures how often your systems experience incidents, providing insight into overall system stability.
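
The formula is a straightforward rate; a minimal sketch under the same illustrative conventions as the examples above:

```python
def reliability(incident_count: int, period_days: int) -> float:
    """Incidents per day over the selected period; lower is better."""
    return incident_count / period_days

# 4 incidents across a 30-day window -> ~0.13/day -> High band.
```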

Data Aggregation

Repository-level to Project-level

GitPulse aggregates DORA metrics from individual repositories to provide project-level insights:

  1. Deployment Frequency: Sums deployments across all repositories
  2. Lead Time Calculations: Aggregates timing data from all repositories
  3. Change Failure Rate: Correlates incidents with deployments across all repositories
  4. MTTR: Calculates recovery times from incidents across all repositories
  5. Reliability: Aggregates incident data from all repositories
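
The sketch below illustrates this aggregation pattern with a hypothetical repo-level record type (GitPulse's internal schema may differ). The key design point is that medians are taken over the pooled per-PR samples, not by averaging per-repository medians:

```python
from dataclasses import dataclass, field
from statistics import median

@dataclass
class RepoMetrics:
    """Hypothetical per-repository raw data."""
    deploy_count: int
    lead_times_days: list[float] = field(default_factory=list)

def project_deploys_per_day(repos: list[RepoMetrics], period_days: int) -> float:
    # Deployment counts simply sum across repositories.
    return sum(r.deploy_count for r in repos) / period_days

def project_lead_time_days(repos: list[RepoMetrics]) -> float:
    # Pool every sample first, then take a single project-wide median.
    return median(lt for r in repos for lt in r.lead_times_days)
```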

Production Environment Detection

GitPulse automatically identifies production deployments by looking for:

  • Environment names containing "production", "prod", "live", "main", "master"
  • GitHub Pages deployments (github-pages environment)
  • Deployments with success statuses
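
A sketch of these rules, assuming each deployment exposes an environment name and a status string; the exact matching logic inside GitPulse may differ:

```python
PRODUCTION_HINTS = ("production", "prod", "live", "main", "master", "github-pages")

def is_production_deployment(environment: str, status: str) -> bool:
    """Heuristic: case-insensitive substring match plus a success status."""
    env = environment.lower()
    return status == "success" and any(hint in env for hint in PRODUCTION_HINTS)

print(is_production_deployment("Production-EU", "success"))  # True
print(is_production_deployment("staging", "success"))        # False
```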

Interpreting Your Results

High Performance Indicators:

  • Elite grades across all metrics
  • Consistent deployment patterns across repositories
  • Short lead times with low variance
  • Low change failure rates
  • Fast incident recovery times

Areas for Improvement:

  • Low or Medium grades
  • High variance in lead times between repositories
  • Infrequent deployments
  • Long delays between merge and deployment
  • High incident rates or slow recovery times

Cross-Repository Analysis:

  1. Compare metrics across repositories to identify bottlenecks
  2. Look for patterns in deployment frequency and lead times
  3. Identify high-risk repositories with frequent incidents
  4. Spot opportunities for process improvements

Data Requirements

To calculate project-level DORA metrics, GitPulse requires:

For Deployment Frequency:

  • Production deployments with success statuses across all repositories
  • Environment information to identify production deployments

For Lead Time Calculations:

  • Pull requests with merge dates across all repositories
  • Deployment success timestamps across all repositories
  • Commit information for first-commit calculations

For Change Failure Rate:

  • Production deployments across all repositories
  • Incident data with detection times
  • 24-hour correlation window for deployment-incident matching

For MTTR:

  • Incident data with detection and resolution timestamps
  • Project-level incident aggregation

Limitations

  • Metrics are calculated over the selected time period (default: 30 days)
  • Requires sufficient data across all repositories for accurate aggregation
  • Only considers successful deployments for lead time calculations
  • Change failure rate uses a 24-hour correlation heuristic
  • May not capture all deployment types (e.g., hotfixes, emergency deployments)
