Overview

As part of the Review and Collaboration report package, the Submitter metrics make it easy to understand how PR Submitters respond to and incorporate feedback during the code review process.

The four Submitter metrics, found in the Review Collaboration report, are:

  • Responsiveness: The time it takes a Submitter to respond to a comment on their PR with either another comment or a code revision.
  • Comments Addressed: The percentage of comments to which a Submitter responds.
  • Receptiveness: The percentage of comments the Submitter accepts, as indicated by code revisions.
  • Unreviewed PRs: The percentage of PRs that have no comments.

These metrics are designed to promote healthy collaboration and provide prescriptive guidance to improve the productivity of the team’s code review process as a whole.

As with any data point, these metrics should be used in context. “What’s right” and “what’s normal” will vary depending on your team’s situation.

Responsiveness

Are people responding to feedback in a timely manner? 

Responsiveness is the average time it takes to respond to a Reviewer’s comment with either another comment or a code revision. It measures the time between the Reviewer’s last comment and the Submitter’s response.
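
To make the definition concrete, here’s a minimal sketch of the calculation in Python. It’s illustrative only, not GitPrime’s actual implementation: the event list, field names, and actor labels are invented for the example.

    from datetime import datetime
    from statistics import mean

    # Hypothetical, simplified PR timeline: (timestamp, actor, kind).
    # A real calculation would pull the full event history from your VCS host.
    events = [
        (datetime(2023, 5, 1, 9, 0), "reviewer", "comment"),
        (datetime(2023, 5, 1, 11, 30), "submitter", "commit"),
        (datetime(2023, 5, 1, 14, 0), "reviewer", "comment"),
        (datetime(2023, 5, 2, 10, 0), "submitter", "comment"),
    ]

    def responsiveness_hours(events):
        """Average hours between a Reviewer comment and the Submitter's
        next response (either a comment or a code revision)."""
        gaps = []
        pending = None  # most recent Reviewer comment awaiting a response
        for ts, actor, kind in sorted(events):
            if actor == "reviewer" and kind == "comment":
                pending = ts
            elif actor == "submitter" and pending is not None:
                gaps.append((ts - pending).total_seconds() / 3600)
                pending = None
        return mean(gaps) if gaps else None

    print(f"Responsiveness: {responsiveness_hours(events):.1f} hours")

Note that both a comment and a commit from the Submitter close the gap, which mirrors the definition above: either one counts as a response.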

In practice, the goal is to drive this metric down. It’s up to the manager to determine how quickly people should respond to comments, but depending on your deployment frequency and deadlines, you may find that under four hours is ideal, while over 24 hours is counterproductive in most circumstances, regardless of time zone.

However, like everything we do, Responsiveness is context-dependent. 

The Submitter may be in the zone and shouldn’t stop. In some cases, it may be inappropriate for them to stop (they’re in a meeting, working on an extremely important ticket, or handling an outage). 

But when weighing “my work” against “their work,” the rule of thumb is simple: as soon as you exit your flow state, whether you’re breaking for lunch or coffee, take the time to respond to those comments.

A “response” can be a comment or a code revision. If I say, “Change foo to bar,” you don’t need to reply “Okay” and also make the suggested change; the change itself is your response. Similarly, if you don’t agree with my suggestion, you can respond with a comment. Both options are valid “responses.”

Comments Addressed

Are people acknowledging feedback from their teammates? 

Comments Addressed is the percentage of Reviewer comments the PR Submitter responded to with a comment or a code revision.

This metric is different from Responsiveness because it looks at how broadly the Submitter responded to the Reviewer’s comments (instead of how quickly they responded to them). 
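
The arithmetic is a straightforward percentage. Here’s an illustrative sketch (again, not GitPrime’s actual implementation; the 'responded' flag is a stand-in for whatever signal your tooling records):

    # Illustrative sketch only: Comments Addressed reduces to a simple
    # percentage over the Reviewer's comments on a PR.
    def comments_addressed(comments):
        """comments: list of dicts with a boolean 'responded' flag, True when
        the Submitter replied with a comment or a code revision."""
        if not comments:
            return None  # no Reviewer comments on this PR
        responded = sum(1 for c in comments if c["responded"])
        return 100.0 * responded / len(comments)

    comments = [{"responded": True}, {"responded": True}, {"responded": False}]
    print(f"Comments Addressed: {comments_addressed(comments):.0f}%")  # 67%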

As a manager, you want to drive this number up. If a Reviewer thought it was worthwhile to make a comment, it’s generally worthwhile to respond to it. It’s best to use this metric as a prompt to encourage thorough responses rather than managing to an absolute target.

Receptiveness

Are people incorporating feedback from their teammates?

Receptiveness is the ratio of follow-on commits to comments. In short, this metric looks at whether the PR Submitter is taking people’s feedback and incorporating it into their code.
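
As a sketch of the ratio (illustrative only; the counts here are invented for the example):

    # Illustrative sketch only: Receptiveness as the ratio of follow-on
    # commits (code revisions made in response to feedback) to comments.
    def receptiveness(follow_on_commits, reviewer_comments):
        """Percentage of Reviewer comments answered with a code revision."""
        if reviewer_comments == 0:
            return None  # no feedback to incorporate
        return 100.0 * follow_on_commits / reviewer_comments

    # 4 follow-on commits against 10 Reviewer comments -> 40%
    print(f"Receptiveness: {receptiveness(4, 10):.0f}%")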

This is a Goldilocks metric, so you’ll want to manage the outliers; as always, context matters. A good developer is always open to improvements, but not all suggestions are worth implementing.

If Receptiveness is too low, it could be a sign that a developer is closed to any input, regardless of merit. It could also be a sign of “rubber-stamping,” a work pattern in which the Reviewer, trusting the Submitter too much, approves the PR without a thorough review.

Alternatively, if Receptiveness is too high, you may be seeing a developer who fails to stand their ground, or someone relying on the review process to shake out bugs that could easily have been caught in development.

Unreviewed PRs

Are PRs getting the proper level of review?

The Unreviewed PRs metric shows how often Pull Requests were submitted with no comments and then self-merged. In other words, it’s the percentage of PRs that didn’t get any review.
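
As a sketch of the calculation (illustrative only; the 'merged' and 'comment_count' fields are invented for the example):

    # Illustrative sketch only: the share of merged PRs with no comments.
    def unreviewed_pr_rate(prs):
        """prs: list of dicts with 'merged' and 'comment_count' keys."""
        merged = [pr for pr in prs if pr["merged"]]
        if not merged:
            return None
        unreviewed = sum(1 for pr in merged if pr["comment_count"] == 0)
        return 100.0 * unreviewed / len(merged)

    prs = [
        {"merged": True, "comment_count": 0},
        {"merged": True, "comment_count": 3},
        {"merged": True, "comment_count": 5},
    ]
    print(f"Unreviewed PRs: {unreviewed_pr_rate(prs):.0f}%")  # 1 of 3 -> 33%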

In an ideal world, PRs are never merged without being reviewed; even when they’re small or made by senior developers, unreviewed changes can be a huge source of bugs. Many organizations establish a policy and configure their system to programmatically reject unreviewed PRs.
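
On GitHub, for example, one way to enforce such a policy is branch protection with a required approving review. The sketch below uses GitHub’s branch-protection REST endpoint; the owner, repo, and token values are placeholders you would supply yourself.

    import requests

    OWNER, REPO, BRANCH = "your-org", "your-repo", "main"  # placeholders
    URL = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"

    payload = {
        # Reject merges until at least one Reviewer has approved the PR.
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "enforce_admins": True,          # apply the rule to admins, too
        "required_status_checks": None,  # leave status checks unconfigured
        "restrictions": None,            # no push restrictions
    }

    resp = requests.put(
        URL,
        json=payload,
        headers={
            "Authorization": "token YOUR_TOKEN",  # needs repo admin scope
            "Accept": "application/vnd.github+json",
        },
    )
    resp.raise_for_status()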

But for teams that don’t enforce such a policy: every time a PR goes out without being reviewed, a manager should know.

As a manager, you should drive this number to zero and take seriously the rare instance when an engineer felt compelled to ship code straight from their laptop to production without anyone ever looking at the change.

Still have questions about the Submitter metrics and how to use them? Email us at support@gitprime.com or click on the chat link in the bottom right corner of your screen.
