Overview

Our Review and Collaborations report package provides a way for software teams to see the ground truth of what’s happening in the code review process. The package is split into three sets of metrics: Submit, Review, and Team Collaboration.

The four Reviewer metrics, found in the Review Collaboration report, are:

  • Reaction Time: The time it takes for the Reviewer to respond to a comment addressed to them.
  • Involvement: The percentage of Pull Requests that the Reviewer participated in.
  • Influence: The ratio of follow-on commits made after the Reviewer commented.
  • Review Coverage: The percentage of hunks commented on by the Reviewer.

These metrics are designed to promote healthy collaboration and provide prescriptive guidance to improve the productivity of the team’s code review process as a whole.

As with any data point, these metrics should be used in context. “What’s right” and “what’s normal” will vary depending on your team’s culture.

Reaction Time

Are Reviewers responding to comments in a timely manner? 

Reaction Time is the time it takes for a Reviewer to respond to a comment addressed to them. Reaction Time for the Reviewer is the same concept as Responsiveness for the Submitter.  

In practice, the goal is to drive this metric down. You generally want people to respond to each other in a timely manner, working together to find the right solution and get it to production. If someone addresses you directly, you want to respond to them within an hour or so; waiting more than eight hours is usually counterproductive.
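To make that guidance concrete, here’s a minimal sketch in Python. The timestamps are invented for illustration, and the one-hour and eight-hour buckets simply mirror the rule of thumb above; the report calculates Reaction Time for you.

```python
from datetime import datetime, timedelta

# Invented timestamps for illustration; the report derives these from real comment data.
comment_posted = datetime(2024, 5, 6, 9, 15)   # comment addressed to the Reviewer
reply_posted = datetime(2024, 5, 6, 10, 40)    # the Reviewer's response

reaction_time = reply_posted - comment_posted

# Rough buckets based on the guidance above: about an hour is great, more than eight is a problem.
if reaction_time <= timedelta(hours=1):
    print(f"{reaction_time}: responded within an hour or so")
elif reaction_time <= timedelta(hours=8):
    print(f"{reaction_time}: responded within the working day")
else:
    print(f"{reaction_time}: more than eight hours, usually counterproductive")
```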

However, like everything we do, Reaction Time is context-dependent. 

An engineer may be in the zone and shouldn’t realistically stop. In some cases, it may be inappropriate for them to stop (they’re in a meeting, working on an extremely important ticket, or handling an outage). 

But when it’s simply a matter of “my work” versus “their work,” take the time to respond to those comments as soon as you exit your flow state, such as when you break for lunch or coffee.

Involvement

Are some people more involved in Reviews than others?

Involvement is the percentage of Pull Requests that a reviewer participated in, so this number will change according to your view. If you’re looking at their home team, an individual may show 75% involvement, indicating they reviewed three out of four PRs. But if you zoom out to view the whole organization, that same individual’s involvement rate will be much lower.
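To make the arithmetic concrete, here’s a minimal sketch, assuming you have a list of Pull Requests and know who reviewed each one. The names and data are invented; the report calculates Involvement for you.

```python
# Invented data for illustration; the report calculates Involvement for you.
def involvement(prs, reviewer):
    """Percentage of the PRs in scope that the reviewer participated in."""
    reviewed = sum(1 for pr in prs if reviewer in pr["reviewers"])
    return 100 * reviewed / len(prs)

team_prs = [
    {"id": 101, "reviewers": {"alex"}},
    {"id": 102, "reviewers": {"alex", "sam"}},
    {"id": 103, "reviewers": {"sam"}},
    {"id": 104, "reviewers": {"alex"}},
]
org_prs = team_prs + [{"id": n, "reviewers": {"sam"}} for n in range(105, 113)]

print(involvement(team_prs, "alex"))  # 75.0 -- three of the home team's four PRs
print(involvement(org_prs, "alex"))   # 25.0 -- the same person, zoomed out to the org
```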

Involvement is very context-dependent. Not everyone can review everyone else’s code (imagine an HTML developer reviewing a complex query optimization). Architects and Team Leads are usually expected to have more involvement to ensure consistency.

However, you should find a Goldilocks zone for each individual and the team they’re on, and manage significant or sustained changes to their involvement.

Influence

How often do people update their code based on the Reviewer’s comments?

Influence is the ratio of follow-on commits made after a Reviewer posted a comment; in other words, it looks at whether your comments elicited a follow-on commit. It’s the sibling of the Submitter’s Receptiveness metric.

Influence doesn’t try to assign specific credit. That is to say, no one person gets the credit for being influential.  We understand that it’s the discussion itself that deserves the credit, so all participants in the discussion prior to the follow-on commit get influence credit counted toward the metric.
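Here’s a rough sketch of the idea (not the exact formula the report uses): for each participant, tally how many of the discussions they joined were followed by a commit. The data model below is invented for illustration.

```python
from collections import defaultdict

# Invented data for illustration: each discussion lists who took part in it and
# whether a follow-on commit landed afterward.
discussions = [
    {"participants": {"lead", "reviewer_a"}, "followed_by_commit": True},
    {"participants": {"reviewer_a"}, "followed_by_commit": False},
    {"participants": {"reviewer_b"}, "followed_by_commit": True},
]

joined = defaultdict(int)      # discussions each person took part in
influenced = defaultdict(int)  # of those, how many were followed by a commit

for discussion in discussions:
    for person in discussion["participants"]:
        joined[person] += 1
        if discussion["followed_by_commit"]:
            influenced[person] += 1  # everyone in the discussion shares the credit

for person in sorted(joined):
    print(person, f"{100 * influenced[person] / joined[person]:.0f}%")
# lead 100%, reviewer_a 50%, reviewer_b 100%
```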

In practice, there’s a Goldilocks zone with this metric: too low may be a signal that an individual isn’t making substantive comments, and too high may be a signal that an individual is acting as a gatekeeper or a crutch. 

Architects and Team Leads should have higher Influence metrics. Once you find the right level for each individual and team, manage significant or sustained changes, as they could indicate a shift in the team dynamic that warrants a manager’s attention.

Review Coverage

How much of each PR has been reviewed?

Review Coverage is the number of hunks in a PR that received comments, as a percentage of the total hunks in the Pull Request. A typical PR will contain multiple files and multiple edits (aka hunks) to each of those files.
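Here’s a minimal sketch of that percentage, assuming you can tell which hunks drew a comment. The PR below is invented; the report derives Review Coverage from your actual Pull Requests.

```python
# Invented PR for illustration: several files, each with one or more hunks (edits).
pr_hunks = [
    {"file": "app.py", "hunk": 1, "has_comment": True},
    {"file": "app.py", "hunk": 2, "has_comment": True},
    {"file": "models.py", "hunk": 1, "has_comment": False},
    {"file": "models.py", "hunk": 2, "has_comment": True},
    {"file": "README.md", "hunk": 1, "has_comment": False},
]

commented = sum(1 for hunk in pr_hunks if hunk["has_comment"])
coverage = 100 * commented / len(pr_hunks)
print(f"Review Coverage: {coverage:.0f}%")  # 60% -- three of the five hunks drew a comment
```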

Like a teacher who puts tick marks on every page of a term paper to indicate they read it, a good reviewer will put a comment on the majority of the edits in a PR, even if it’s a simple “LGTM”. In practice, 100% Review Coverage is overkill.

As a manager, you want to watch Review Coverage so that when coverage rises or falls, both at the individual level and at the team level, you can provide guidance. The goal is to drive this number up, encouraging team members to take the time to review each change in the code rather than skim the PR as a whole. Small changes in average Review Coverage can make a big difference.

Still have questions about the Reviewer metrics and how to use them? Email us at support@gitprime.com or click on the chat link in the bottom right corner of your screen.
