
Code Coverage: The Complete Guide

Everything you need to know about measuring, improving, and enforcing code coverage in your projects.

Last updated: February 2026 · 18 min read


Table of Contents

1. What Is Code Coverage?
2. Code Coverage Metrics Explained
3. Code Coverage Calculator
4. Code Coverage Tools by Language
5. Coverage Thresholds & Best Practices
6. Setting Up Code Coverage in CI/CD
7. Common Mistakes to Avoid
8. Mutation Testing: Beyond Coverage
9. Code Coverage in Pull Requests
10. FAQ


What Is Code Coverage?

Code coverage is a software testing metric that measures the percentage of your source code that is executed when your automated test suite runs. It answers a fundamental question: which parts of my code are actually being tested?

When you run your tests with a coverage tool enabled, the tool instruments your code — tracking which lines, branches, functions, and statements are executed. The result is a coverage report showing exactly what was tested and, more importantly, what was not.

Coverage Visualization (hit counts per line; 0x means not covered)

function calculateDiscount(price, membership) {   // 12x
  if (membership === "premium") {                 // 12x
    return price * 0.8;                           // 8x
  } else if (membership === "basic") {            // 4x
    return price * 0.9;                           // 4x
  } else if (membership === "trial") {            // 0x  (not covered)
    return price * 0.95;                          // 0x  (not covered)
  }                                               // 0x  (not covered)
  return price;                                   // 0x  (not covered)
}                                                 // 12x

Line: 60% · Branch: 50% · Function: 100%

In the example above, the function has been called 12 times during testing, but only the premium and basic membership paths were tested. The trial path and the default return are completely untested — a gap that line coverage alone might obscure but branch coverage would catch.
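The percentages can be recomputed directly from the hit counts; this is a minimal sketch of the arithmetic a coverage reporter performs:

```javascript
// Hit counts from the annotated listing, one entry per executable line.
const lineHits = [12, 12, 8, 4, 4, 0, 0, 0, 0, 12];

// Line coverage: lines executed at least once / total executable lines.
const lineCoverage =
  (lineHits.filter((hits) => hits > 0).length / lineHits.length) * 100; // 60

// Branch outcomes: premium true/false, basic true/false, trial true/false.
// Only premium-true, premium-false, and basic-true were ever exercised.
const branchesTaken = [true, true, true, false, false, false];
const branchCoverage =
  (branchesTaken.filter(Boolean).length / branchesTaken.length) * 100; // 50
```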

Code coverage vs test coverage

Code coverage is a technical metric measuring which code was executed. Test coverage is a broader concept measuring what percentage of requirements and user scenarios are validated by your test suite. You can have high code coverage but low test coverage if your tests execute code without verifying the correct behavior.

Coverage is typically measured using instrumentation tools specific to your programming language — Istanbul for JavaScript, coverage.py for Python, JaCoCo for Java, and built-in tooling for Go. These tools integrate with your test runner and CI/CD pipeline to produce reports automatically.


Code Coverage Metrics Explained

Not all coverage metrics are created equal. Understanding the differences helps you choose the right level of rigor for your project.

Line Coverage

Measures the percentage of executable lines of source code executed during testing. The simplest and most commonly reported metric.

Formula: (Lines Executed / Total Executable Lines) x 100

Pro: Easy to understand and widely supported

Con: A line can be "covered" without its behavior being verified by assertions

Branch Coverage

Measures whether each branch of every control structure (if/else, switch/case, ternary) has been exercised. Requires both the true and false paths to be tested.

Formula: (Branches Executed / Total Branches) x 100

Pro: Catches untested conditional paths that line coverage misses

Con: Does not test individual boolean sub-expressions in compound conditions

Function Coverage

Measures whether each function or method defined in the code has been called at least once during testing. The coarsest standard metric.

Formula: (Functions Called / Total Functions) x 100

Pro: Quick way to identify completely untested functions

Con: A function is "covered" with a single call even if most internal logic is untested

Statement Coverage

Measures whether each statement in the program has been executed at least once. Similar to line coverage but differs when a single line contains multiple statements.

Formula: (Statements Executed / Total Statements) x 100

Pro: More granular than line coverage in multi-statement lines

Con: Same fundamental weakness as line coverage: execution does not equal verification

Condition Coverage

Measures whether each boolean sub-expression in a compound condition has been independently evaluated to both true and false. The most granular standard metric.

Formula: (Conditions Evaluated / Total Condition Outcomes) x 100

Pro: Catches gaps in complex boolean logic that branch coverage misses

Con: Can be impractical for deeply nested or highly compound conditions

Metric Comparison at a Glance

| Metric | Granularity | Recommendation | Catches Branch Gaps? | Tool Support |
|---|---|---|---|---|
| Line Coverage | Low | Baseline | No | Universal |
| Statement Coverage | Low-Medium | Baseline | No | Universal |
| Branch Coverage | Medium | Recommended minimum | Yes | Most tools |
| Function Coverage | Very Low | Supplementary | No | Universal |
| Condition Coverage | High | Advanced | Yes | Limited |

Recommendation

Use branch coverage as your minimum standard. Line coverage is useful for quick overviews but can give a false sense of security. Branch coverage catches the hidden gaps that line coverage misses — and virtually all modern coverage tools support it.


Code Coverage Calculator

Code coverage percentage is simply (Lines Covered / Total Lines) x 100. Compare your result against the industry benchmarks in the next section.

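The calculation itself is one line; here is a minimal sketch (the function name is illustrative):

```javascript
// Coverage percentage, rounded to one decimal place.
function coveragePercent(coveredLines, totalLines) {
  if (totalLines <= 0) {
    throw new Error('Total lines must be greater than zero');
  }
  return Math.round((coveredLines / totalLines) * 1000) / 10;
}

coveragePercent(834, 1042); // 80, landing in the "Good" band of the threshold guide
```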


Code Coverage Tools by Language

Every major programming language has mature coverage tooling — many with built-in support. Here are the recommended tools for each language, with quick-start setup commands.

JavaScript / TypeScript

Istanbul / nyc

Open source

The de facto standard since 2012. Istanbul is the instrumentation engine; nyc is its CLI. Battle-tested and widely adopted across the JS ecosystem.

npx nyc --reporter=lcov --reporter=text npm test

Jest (built-in)

Built-in

Jest uses Istanbul under the hood. Run jest --coverage to generate reports. Supports coverage thresholds in configuration and any Istanbul reporter format.

npx jest --coverage

Vitest

Built-in

Supports two providers: @vitest/coverage-v8 (default, uses V8 native coverage) and @vitest/coverage-istanbul (higher accuracy). Configure in vitest.config.ts.

npx vitest run --coverage
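Switching providers and adding thresholds looks roughly like this in vitest.config.js (a minimal sketch; option names follow the Vitest coverage documentation, so verify them against your installed version):

```javascript
// vitest.config.js -- requires @vitest/coverage-istanbul to be installed
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    coverage: {
      provider: 'istanbul',        // 'v8' is the default
      reporter: ['text', 'lcov'],
      thresholds: { lines: 80, branches: 80 },
    },
  },
});
```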

c8

Open source

Lightweight alternative that uses V8's native code coverage directly, without instrumentation overhead. Faster than Istanbul for large codebases.

npx c8 node test.js

Python

coverage.py

Open source

The standard Python coverage tool, authored by Ned Batchelder. Supports line and branch coverage. Generates HTML, XML, JSON, and LCOV reports.

coverage run -m pytest && coverage report --show-missing

pytest-cov

Plugin

A pytest plugin wrapping coverage.py for convenient integration. Handles subprocess and parallel (xdist) testing correctly. The recommended approach for pytest users.

pytest --cov=mypackage --cov-report=term-missing

Java

JaCoCo

Open source

The most widely used Java coverage tool. Integrates with Maven and Gradle. Supports line, branch, instruction, and cyclomatic complexity metrics.

mvn verify  # with jacoco-maven-plugin configured

Cobertura

Open source

An older alternative to JaCoCo. Less actively maintained but still used in some legacy pipelines. Generates HTML and XML reports.

mvn cobertura:cobertura

Go

go test -cover

Built-in

Go has native coverage support — no third-party tools required. Supports profile modes (set, count, atomic) and HTML report generation. Since Go 1.20, integration test coverage is also supported.

go test -coverprofile=coverage.out ./...

Ruby

SimpleCov

Open source

The standard for Ruby projects. Generates HTML reports with grouping, filtering, and branch coverage. Must be required before any application code is loaded.

# Add to top of spec/spec_helper.rb
require 'simplecov'
SimpleCov.start 'rails'

C# / .NET

Coverlet

Open source

The most popular cross-platform coverage tool for .NET. Part of the .NET Foundation. Included by default in xUnit project templates.

dotnet test --collect:"XPlat Code Coverage"

dotCover

Commercial

JetBrains commercial tool with deep IDE integration in Rider and Visual Studio. Supports continuous testing and coverage visualization.

dotnet dotcover test

Cross-Language Reporting Platforms

Codecov

Cloud platform that aggregates coverage from any language or tool. PR comments, badges, and trend tracking.

Best for: Teams wanting automated PR coverage comments and trend analysis

Coveralls

Similar to Codecov — tracks coverage over time, integrates with all major CI systems, and supports most languages.

Best for: Open-source projects wanting free, simple coverage tracking

SonarQube / SonarCloud

Supports 25+ languages. Combines coverage with static analysis, code smells, security hotspots, and quality gates.

Best for: Enterprise teams wanting coverage + code quality in one platform


Code Coverage Thresholds & Best Practices

What percentage of code coverage should you aim for? The answer depends on your project's risk profile — but research and industry experience provide clear guidelines.

Coverage Threshold Guide

0-30%
Critical

Large portions of code are untested. High risk of undetected bugs.

30-60%
Low

Some testing exists but significant gaps remain. Acceptable only for legacy codebases being gradually improved.

60-75%
Acceptable

Google's "acceptable" threshold. Good starting point for most projects.

75-85%
Good

The most widely cited industry target. Balances thoroughness with practicality.

85-95%
Excellent

Google's "exemplary" tier. Recommended for business-critical and user-facing code.

95-100%
Comprehensive

Required for safety-critical systems (healthcare, aerospace, finance). Diminishing returns for most applications.

Google's coverage guidelines

Google considers 60% acceptable, 75% commendable, and 90% exemplary. They avoid broad top-down mandates, encouraging each team to select the threshold appropriate for their business context. For new code, Google recommends a per-commit coverage goal of 90%+.

Best Practices

1. Use coverage as a diagnostic, not a target

Martin Fowler: "If you make a certain level of coverage a target, people will try to attain it. The trouble is that high coverage numbers are too easy to reach with low quality testing." Use reports to find untested areas, not as a scoreboard.

2. Enforce coverage on new code, not just overall

Require newly written code to meet a high bar (e.g., 90%+) rather than demanding legacy codebases retroactively reach 80%. This prevents blocking development while steadily improving coverage.

3. Use the ratcheting approach

Set a baseline, then configure CI to fail if coverage drops below the current level. The threshold automatically increases as coverage improves but never decreases. This prevents silent regression.
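The core of a ratchet is only a few lines. Here is a sketch as a pure function; reading the current percentage from your tool's report (e.g. Istanbul's coverage-summary.json) and persisting the baseline file are left to your CI script, and all names here are illustrative:

```javascript
// Fails the build on regression; otherwise returns the new baseline,
// which only ever moves up, never down.
function ratchet(currentPct, baselinePct) {
  if (currentPct < baselinePct) {
    throw new Error(
      `Coverage ${currentPct}% fell below the ${baselinePct}% baseline`
    );
  }
  return Math.max(currentPct, baselinePct);
}
```

In CI, call this with the current run's percentage and the committed baseline, then commit the returned value back whenever it rises.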

4. Prefer branch coverage over line coverage

Line coverage can be misleading — it marks a line as covered even if only one side of a conditional is tested. Branch coverage ensures both true and false paths are exercised.

5. Exclude generated code and configuration

Coverage metrics should reflect hand-written application logic. Exclude auto-generated code, database migrations, configuration boilerplate, type definitions, and test files themselves.
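Most tools support both config-level exclusion patterns and inline markers. Istanbul's inline form, for example (the function shown is a hypothetical defensive guard):

```javascript
/* istanbul ignore next */
function assertUnreachable(value) {
  // Defensive guard: should never execute in a correct program,
  // so excluding it keeps the metric focused on reachable logic.
  throw new Error(`Unexpected value: ${String(value)}`);
}
```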

6. Combine with mutation testing

Coverage only tells you code was executed, not verified. Mutation testing introduces faults and checks if tests catch them — addressing the "assertion-free test" problem.

7. Set per-module thresholds for critical code

Not all code is equally critical. A payment processing module warrants higher coverage than a logging utility. Tools like Jest support per-glob threshold configuration.

8. Track coverage trends over time

A single snapshot is less useful than a trend. Use Codecov, Coveralls, or SonarQube to visualize coverage changes across sprints and releases.


Setting Up Code Coverage in CI/CD

Enforcing code coverage in your CI/CD pipeline is the single most effective way to prevent coverage regression. Here are complete GitHub Actions workflows for the most popular languages, using Codecov for reporting.

JavaScript (Jest)

GitHub Actions
name: Test & Coverage
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
      - run: npm ci
      - run: npx jest --coverage --coverageReporters=lcov
      - name: Upload to Codecov
        uses: codecov/codecov-action@v5
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: ./coverage/lcov.info

Enforcing a Minimum Threshold

Most coverage tools support built-in threshold enforcement. If coverage drops below the configured minimum, the test command exits with a non-zero code — failing your CI build automatically.

# Jest (package.json)
"jest": {
  "coverageThreshold": {
    "global": {
      "branches": 80,
      "functions": 80,
      "lines": 80,
      "statements": 80
    }
  }
}

# pytest-cov (pyproject.toml)
[tool.coverage.report]
fail_under = 80

# Go (shell command)
COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print substr($3, 1, length($3)-1)}')
if (( $(echo "$COVERAGE < 80" | bc -l) )); then exit 1; fi

Pro tip: PR coverage comments

Use the codecov/codecov-action or irongut/CodeCoverageSummary GitHub Action to automatically post a coverage summary as a comment on every pull request. This makes coverage changes visible during code review without requiring reviewers to check a separate dashboard.


Common Code Coverage Mistakes to Avoid

Treating coverage as a quality metric

Coverage measures execution breadth, not test quality. A test that calls every function but asserts nothing will show 100% coverage while verifying nothing.

Fix: Combine coverage with mutation testing and code review of test assertions.

Gaming coverage with low-quality tests

When coverage becomes a mandated target, developers write tests to hit numbers rather than to validate behavior. This produces brittle, hard-to-maintain tests.

Fix: Focus on meaningful assertions. Review test quality alongside coverage numbers.

Chasing 100% coverage dogmatically

The cost-benefit curve is logarithmic. Going from 80% to 100% can take as much effort as 0% to 80%, with diminishing returns. Even at 100%, only about half of faults are exposed.

Fix: Set a practical target (80%) and invest remaining effort in integration tests and mutation testing.

Ignoring branch coverage

Relying solely on line coverage provides a false sense of security. Code with if statements and no else block will show 100% line coverage with only the true path tested.

Fix: Always enable and track branch coverage alongside line coverage.

Not excluding irrelevant code

Including auto-generated code, migrations, and vendor code inflates or deflates coverage numbers meaninglessly.

Fix: Configure exclusion patterns for generated code, config files, and third-party code.

Not running coverage in CI

Coverage reports generated locally are easily forgotten. Without CI integration, coverage standards drift over time and regressions go unnoticed.

Fix: Make coverage reporting and threshold enforcement part of the automated CI pipeline.


Mutation Testing: Beyond Code Coverage

Code coverage tells you that code was executed by your tests — but not that it was verified. A test that calls a function without any assertions will still count as covered. Mutation testing addresses this fundamental limitation.

How it works: A mutation testing tool introduces small faults into your code — called mutants — such as changing > to >=, flipping true to false, or removing a function call. It then runs your test suite against each mutant. If your tests catch the mutation (fail), the mutant is "killed." If your tests still pass, the mutant "survived" — meaning your tests are not actually verifying that behavior.

Key insight

A test suite can have 100% code coverage and a 40% mutation score — meaning 60% of intentional code changes go undetected by your tests. Mutation testing reveals these "assertion-free" tests that inflate your coverage numbers without providing actual safety.
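A concrete example of a surviving mutant (function names are illustrative):

```javascript
// Original: adults are 18 and over.
function isAdult(age) {
  return age > 17;
}

// What a mutation tool would generate: > becomes >=.
function isAdultMutant(age) {
  return age >= 17;
}

// An assertion-free "test" executes the code (100% coverage)
// but passes against both versions, so the mutant survives:
isAdult(17);

// A boundary assertion kills it: isAdult(17) is false,
// while isAdultMutant(17) is true, so the suite fails on the mutant.
```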

Mutation Testing Tools

| Language | Tool |
|---|---|
| JavaScript / TypeScript | Stryker |
| Java | PIT (pitest) |
| Python | mutmut |
| C# / .NET | Stryker.NET |
| Go | go-mutesting |
| Ruby | mutant |

Code Coverage in Pull Requests

The most effective place to review code coverage is during the pull request process. When coverage reports are visible alongside the diff, reviewers can immediately see whether new code is tested — and whether changes have caused coverage to drop.

Tools like Codecov and Coveralls automatically post coverage summaries as PR comments, showing the overall coverage change, per-file impact, and whether the PR meets the configured threshold. This makes coverage a natural part of the code review conversation rather than an afterthought.

What to look for in PR coverage reports

  • Coverage delta: Did overall coverage go up or down? A large drop may indicate untested new code.
  • New file coverage: Are newly added files covered? New modules should ideally have high coverage from the start.
  • Modified file coverage: Did changes to existing files maintain or improve their coverage?
  • Branch coverage gaps: Are there untested conditional paths in the changed code?

Staying on Top of PR Activity

Coverage reports on PRs are only useful if reviews happen promptly. When a PR sits unreviewed for days, coverage gaps compound as more code is built on untested foundations.

For teams that live in Slack, a dedicated GitHub-to-Slack integration keeps pull request workflows moving. PullNotifier sends instant Slack notifications for new PRs, review requests, approvals, CI status changes, and merges — so your team knows the moment a PR is ready for review, including when coverage checks pass or fail.

When CI coverage checks are part of your branch protection rules, fast notification of check results becomes critical. A failed coverage check should trigger immediate attention, not sit undiscovered until someone manually checks GitHub.


Never Miss a Failed Coverage Check

PullNotifier sends real-time Slack notifications for every pull request event — including CI status checks like code coverage. Know instantly when a PR passes or fails its coverage threshold, so your team can act immediately.


Frequently Asked Questions

What is code coverage?

Code coverage is a software testing metric that measures the percentage of your source code executed when your test suite runs. It helps identify untested areas of your codebase. Common metrics include line coverage, branch coverage, function coverage, and statement coverage. Coverage is typically measured using instrumentation tools specific to your programming language.

What is a good code coverage percentage?

The widely cited industry target is 80%. Google considers 60% acceptable, 75% commendable, and 90% exemplary. Context matters: safety-critical systems (healthcare, finance, aerospace) should aim for 95%+, while internal tools might be fine at 70%. The most important thing is to focus on meaningful test quality over hitting a specific number — and to prevent coverage from decreasing over time.

What is the difference between code coverage and test coverage?

Code coverage is a technical metric measuring which lines, branches, and functions were executed during testing. Test coverage is a broader concept measuring what percentage of requirements, user scenarios, and specifications are validated by your test suite. Code coverage is a subset of test coverage — you can have high code coverage but low test coverage if your tests execute code without verifying the correct behavior.

Does 100% code coverage mean my code is bug-free?

No. 100% code coverage means every line was executed during testing, not that every behavior was verified. Tests can execute code without asserting correct output. Research estimates 100% code coverage exposes only about half of the faults in a system. Bugs can exist in edge cases, concurrency issues, integration points, and scenarios that line-by-line execution does not capture.

What is the difference between line coverage and branch coverage?

Line coverage measures whether each executable line of code was run at least once. Branch coverage measures whether both the true and false outcomes of every decision point (if/else, switch, ternary) were tested. Branch coverage is stricter — you can have 100% line coverage but less than 100% branch coverage if you only test one path of a conditional. Branch coverage is generally recommended as the minimum meaningful metric.

Should I enforce code coverage in CI/CD?

Yes. Enforcing coverage in CI prevents silent regression. Two recommended approaches: (1) Set a minimum threshold (e.g., 80%) and fail the build if coverage drops below it. (2) Use ratcheting, where the threshold automatically increases as coverage improves and never decreases. Focus enforcement on new and changed code to avoid blocking development on legacy codebases.

What is mutation testing and how does it relate to code coverage?

Mutation testing "tests your tests" by introducing small faults (mutants) into your code — like changing a > to >=, or flipping true to false — and checking whether your test suite catches them. It addresses the key limitation of code coverage: coverage tells you code was executed but not that it was verified. A test suite can have high coverage but a low mutation score if it lacks meaningful assertions. Popular tools include Stryker (JS/C#), PIT (Java), and mutmut (Python).

How do I increase code coverage effectively?

Start by using coverage reports to identify the largest untested areas (highest impact). Focus on testing business-critical paths and complex logic first. Write regression tests for every bug fix. Require coverage for all new code via CI. Use branch coverage to find hidden untested paths. Avoid writing tests solely to increase the number — focus on meaningful assertions that validate actual behavior.

What code should I exclude from coverage reports?

Commonly excluded: auto-generated code, database migrations, configuration and boilerplate files, third-party vendor code, test files themselves, type definitions or interfaces (in typed languages), and trivial getters/setters. Most tools support exclusion via configuration files or inline comments (e.g., /* istanbul ignore next */ in JavaScript, # pragma: no cover in Python).

How often should I review code coverage?

Coverage should be reviewed automatically on every pull request (via CI) and trended over time at the project level. Weekly or sprint-level reviews of coverage trends help identify systemic gaps. Use platforms like Codecov or SonarQube to visualize changes. The goal is to ensure coverage remains stable or improves as the codebase grows.




© 2026 PullNotifier. All rights reserved