CI/CD Cost Per Build: Benchmarks and How to Measure

Craig Cook, Founder · 8 min read

A finance lead asks the obvious question: what does one CI build actually cost us? The platform team pulls last month's GitHub Actions invoice, divides by the number of workflow runs, and reports something like four cents per build. Reasonable, manageable, ignored.

That number is not wrong. It is the wrong number. Cost per build looks small on the invoice because the invoice only captures half of the bill. The other half, the half paid out of salaries instead of compute spend, is usually two orders of magnitude larger. CI/CD cost per build is compute plus developer wait time. Get the formula right and the answer changes by a factor of a hundred.

That framing sits inside the broader true cost of CI/CD: the invoice covers compute; salaries cover the rest. What follows is the cost-per-build version of the same argument. The formula, the per-minute provider rates, real numbers from a 30-day window of GitHub Actions runs, and the wait-time multiplier that turns four cents into four dollars without changing a single line of code.

The cost per build that doesn't appear on the invoice

Provider invoices itemise compute. GitHub Actions bills per runner-minute. GitLab CI bills per CI minute. CircleCI bills per credit. Bitbucket and Azure DevOps follow the same model with different units. Divide the monthly bill by the run count and you get a defensible compute-cost-per-build number. It is the number every cost dashboard surfaces today.

It is also the smaller half of the bill. An engineer at a fully loaded $75 per hour is paid that rate while they wait for a pipeline to go green, just as they are paid while writing code. The money leaves the business in both cases. Only the compute half carries an invoice; the wait half lands in payroll, where nobody attributes it to the pipeline that caused it. Reporting cost per build from the compute invoice alone consistently undercounts by roughly two orders of magnitude. The exact factor depends on the workload.

That same wait-time framing is treated in depth in developer wait time: the cost component your CFO never sees because no provider sends an invoice for it.

The formula: compute plus the wait-time multiplier

One run produces two costs. The compute cost is what the provider charges. The wait cost is what the team pays in engineer-hours blocked on the run.

cost per build = (compute per minute × pipeline duration) + (hourly rate × pipeline duration ÷ 60 × engineers blocked)

Two of the formula's inputs are infrastructure questions: the per-minute rate (set by the runner type) and the pipeline duration. The other two are team questions: the fully loaded hourly rate for an engineer, and how many engineers are blocked on each run. The first set is on the invoice. The second set is on the org chart.

"Engineers blocked" is the input most cost models drop, because it is the hardest to measure. A solo push from a developer working on a feature branch typically blocks one engineer (the pusher, waiting for the green tick to merge). A push to main on a small team where five other people have open PRs depending on it blocks five. A push to a shared service in a large estate during business hours can block dozens. Use one as the conservative baseline and scale up where you have evidence.
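In code, the formula is only a few lines. A minimal Python sketch with illustrative inputs (the $0.008 GitHub Actions Linux rate and the defaults discussed above); this is a sketch of the formula, not CI/CD Watch's implementation:

```python
# Minimal sketch of the compute-plus-wait cost-per-build formula.
# All inputs are illustrative defaults, not anyone's actual rates.
def cost_per_build(compute_per_minute, duration_minutes,
                   hourly_rate=75.0, engineers_blocked=1):
    """Return (compute_cost, wait_cost) in dollars for one run."""
    compute = compute_per_minute * duration_minutes
    wait = hourly_rate * duration_minutes / 60 * engineers_blocked
    return compute, wait

# A 5-minute build on a standard GitHub Actions Linux runner ($0.008/min):
compute, wait = cost_per_build(0.008, 5)
print(f"compute ${compute:.2f}, wait ${wait:.2f}")  # compute $0.04, wait $6.25
```

The wait term is the one that swings: the same call with engineers_blocked=5 returns $31.25 of wait against the same four cents of compute.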

What compute per build actually looks like

Across 378 GitHub Actions runs over a recent 30-day window, median compute cost per build was $0.04. p95 was $0.08. Average was also $0.04. The distribution is tight because most runs sit close to the median; a handful of long end-to-end suites push the p95.

Within that aggregate, the per-workload averages spread wider. A small static-site build averaged $0.01 per run. A medium SaaS application's test suite averaged $0.04. A larger application with a test matrix averaged $0.06. The same provider, the same per-minute rate, three different cost-per-build numbers because the underlying pipelines take 1, 5, and 7 minutes respectively. Cost per build is almost entirely a duration story once you fix the runner type.

Workload                    | Avg duration | Avg compute / build | Wait-to-compute ratio
Static-site build           | 1.4 min      | $0.01               | 156 to 1
SaaS test suite             | 4.6 min      | $0.04               | 111 to 1
Test matrix on a larger app | 7.1 min      | $0.06               | 89 to 1
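The compute column is reproducible from rate and duration alone. Assuming the GitHub Actions Linux list rate of $0.008 per minute across all three workloads:

```python
# Compute per build = per-minute rate × duration, rounded to the cent.
RATE = 0.008  # assumed GitHub Actions Linux list rate, $/runner-minute
workloads = [("static-site build", 1.4), ("SaaS test suite", 4.6), ("test matrix", 7.1)]
for name, minutes in workloads:
    print(f"{name}: ${RATE * minutes:.2f} per build")
# static-site build: $0.01 per build
# SaaS test suite: $0.04 per build
# test matrix: $0.06 per build
```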

The pattern is consistent. With one engineer blocked, wait time accrues at a constant $1.25 per wall-clock minute (at $75 per hour) against a compute rate of well under a cent per minute, so the wait side dominates at every duration. The ratio narrows where a pipeline bills more runner-minutes than wall-clock minutes, as a parallel test matrix does; those builds pay more on both sides.

Provider per-minute rates compared

Compute cost per build is per-minute rate times duration. The per-minute rates across the major hosted runners sit in a tight band. The numbers below are list rates for standard Linux runners as published by each provider; the actual figure on your invoice depends on runner type (large, GPU, ARM), prepaid commitments, and any enterprise pricing.

Provider              | Linux per-minute (list) | Notes
GitHub Actions        | $0.008                  | 2 vCPU baseline; larger runners scale up.
GitLab CI (SaaS)      | $0.005                  | Small runner; medium and large multiply.
Bitbucket Pipelines   | $0.006                  | 1× runner; 2×, 4×, 8× cost proportionally.
CircleCI              | $0.006                  | Medium resource class; small is cheaper.
Azure DevOps          | $0.005                  | Microsoft-hosted Linux; first 1,800 min/month free.
Jenkins (self-hosted) | $0.005 (illustrative)   | Cost is your own cloud or hardware spend; the per-minute number is whatever the EC2/k8s/bare-metal bill works out to.

Rates differ, but the spread is roughly $0.005 to $0.008 per minute on the standard Linux tiers. A 5-minute build therefore costs between $0.025 and $0.04 in compute wherever it runs. The scale lives in the wait component, not the per-minute rate.
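That band falls straight out of the list rates in the table; a quick sketch:

```python
# Compute cost of a 5-minute build at each provider's standard Linux list rate.
rates = {
    "GitHub Actions": 0.008,
    "GitLab CI": 0.005,
    "Bitbucket Pipelines": 0.006,
    "CircleCI": 0.006,
    "Azure DevOps": 0.005,
}
costs = {provider: rate * 5 for provider, rate in rates.items()}
print(f"${min(costs.values()):.3f} to ${max(costs.values()):.3f}")  # $0.025 to $0.040
```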

Wait time is where the real cost lives

On the same 30-day window as the compute numbers above, the wait-to-compute ratio across the three workloads ranged from 89 to 1 up to 156 to 1. At the conservative end (89 to 1), every dollar of compute on the invoice represents 89 dollars of paid engineer time somewhere in the org. At the wide end, it is 156 dollars. The wait side dominates by two orders of magnitude.

Apply that to the cost-per-build numbers. A workload averaging $0.04 in compute is, with one engineer blocked at $75 per hour, paying roughly $4 in wait time per build. A team with five engineers downstream of a typical change is paying $20. A team running a hundred builds a day at the five-engineer figure is paying $2,000 a day in wait time alone, against $4 in compute. The compute number is what Finance sees. The wait number is what Engineering pays.

The fix order follows the cost shape. Reducing compute by 20% saves the team eighty cents a day. Reducing average pipeline duration by 20%, which saves both compute and wait, saves them about $400 a day at the same volume. The ROI on duration improvements is a hundred times the ROI on compute negotiations, and most teams optimise the wrong side.
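The arithmetic in the last two paragraphs, using the article's round figures ($0.04 of compute and roughly $4 of wait per build for one blocked engineer):

```python
# Daily totals for the worked example: 100 builds/day, 5 engineers blocked.
compute_per_build = 0.04   # provider invoice
wait_per_build = 4.0       # one engineer blocked, the round figure used above
engineers_blocked = 5
builds_per_day = 100

compute_daily = compute_per_build * builds_per_day                # $4/day
wait_daily = wait_per_build * engineers_blocked * builds_per_day  # $2,000/day

# A 20% compute cut touches only the invoice; a 20% duration cut touches both.
save_compute = 0.20 * compute_daily                   # $0.80/day
save_duration = 0.20 * (compute_daily + wait_daily)   # ~$400/day
print(f"compute ${compute_daily:.0f}/day, wait ${wait_daily:,.0f}/day")
```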

How to measure your own CI/CD cost per build

Three numbers and a multiplier. Pull the per-minute rate from your provider's pricing page (or the cloud bill, for self-hosted Jenkins agents). Pull the average pipeline duration from your provider's UI or any CI dashboard that aggregates runs. Set a fully loaded hourly rate for engineering time; a defensible default is $75 per hour, though it runs higher in expensive markets and on senior teams. Decide a baseline for engineers blocked per run; one is the conservative floor.
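If no dashboard surfaces average duration, it can be derived from run timestamps. GitHub's REST API, for example, returns run_started_at and updated_at on each workflow run (GET /repos/{owner}/{repo}/actions/runs); the sketch below uses hard-coded samples in that shape rather than a live API call:

```python
# Average pipeline duration from run timestamps. The pairs below are
# stand-ins for run_started_at / updated_at fields from an API response.
from datetime import datetime

runs = [
    ("2024-05-01T10:00:00Z", "2024-05-01T10:04:30Z"),
    ("2024-05-01T11:00:00Z", "2024-05-01T11:05:30Z"),
]

FMT = "%Y-%m-%dT%H:%M:%SZ"

def duration_minutes(started, updated):
    delta = datetime.strptime(updated, FMT) - datetime.strptime(started, FMT)
    return delta.total_seconds() / 60

avg = sum(duration_minutes(s, u) for s, u in runs) / len(runs)
print(f"average duration: {avg:.1f} min")  # average duration: 5.0 min
```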

Plug the four numbers into the formula and you have a defensible cost per build, complete with the wait-time component the invoice hides. Cross-check against reality. The ratio is driven by the per-minute compute rate, the hourly rate, and the engineers-blocked count, so if your number puts wait time at less than 50 to 1 of compute, either the runners are unusually expensive (large, GPU, or heavily parallel tiers) or the engineer rate is unusually low. If the ratio comes out above 200 to 1, either the engineer rate is unusually high or the engineers-blocked count is too generous. Most well-formed estimates land between 50 and 200 to 1.
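The sanity band is quick to check in code. Assuming one engineer blocked and wait accruing over the same wall-clock minutes the runner bills, duration cancels out of the ratio entirely:

```python
# Wait-to-compute ratio implied by the rate inputs alone. With both sides
# scaling with wall-clock minutes, duration drops out of the ratio.
def wait_to_compute_ratio(hourly_rate, compute_per_minute, engineers_blocked=1):
    return (hourly_rate / 60 * engineers_blocked) / compute_per_minute

ratio = wait_to_compute_ratio(75, 0.008)  # $75/hr on GitHub Actions Linux
print(f"{ratio:.0f} to 1")                # 156 to 1
```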

With the cost per build calibrated, the next layer is the annualised view: cost per build times runs per day times working days, by workflow. That is the number worth bringing to a budget meeting, not the four-cents-per-build figure that comes off the invoice alone.
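At the article's figures, the annualised arithmetic looks like this (the 250 working days is an assumption, not a figure stated above):

```python
# Annualised view: cost per build × runs per day × working days.
cost_per_build = 0.04 + 4.0   # compute + wait, one blocked engineer
runs_per_day = 100
working_days = 250            # assumption: roughly 250 working days/year
annual = cost_per_build * runs_per_day * working_days
print(f"${annual:,.0f} per year")  # $101,000 per year
```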

How CI/CD Watch surfaces cost per build

CI/CD Watch, a CI/CD observability platform that monitors pipelines across GitHub Actions, GitLab CI, Bitbucket Pipelines, CircleCI, Azure DevOps, and Jenkins, computes the compute-plus-wait formula on every workflow run. The cost view aggregates per-workflow cost per build, splits compute and wait time as separate lines, and surfaces the total at the workflow, repo, and tenant level. Per-minute rates and the developer hourly rate are configurable in settings, so the numbers reflect your actual provider tier and team cost. How the calculation handles failed runs, cancelled runs, and chained workflows is documented in the cost-calculations reference.

The compute-and-wait cost view sits on the Team plan and above. The Free tier covers pipeline-run monitoring across your connected providers, which is enough to see run volume and average duration: the two inputs the cost-per-build formula needs from your data before you reach for the analytical view.

See what your builds actually cost

CI/CD Watch's Free tier covers pipeline-run monitoring for small teams. Connect a provider to see run volume and average duration across your workflows, the two inputs the cost-per-build formula needs. The compute-plus-wait cost view, waste-category breakdowns, and per-workflow cost trends live on the Team plan and above. Either way, the formula above is the same, and the underlying argument is the same as the broader true cost of CI/CD framing: the answer the invoice gives is two orders of magnitude too low.

CI/CD Watch is built by 3CS Technologies Ltd. It started as an internal tool for tracking pipeline health across a mixed GitHub Actions and Jenkins estate. The same engine now powers the SaaS platform.
