Code Reviews
Russell Cheung
July 25, 2024 · 12 min read

Starting with a definition
What is a code review?
- A safe space for asynchronous communication
- Centralized location for feedback on code entering a codebase
- An opportunity for mentorship and education
- A way to foster engineering visibility and collaboration
- A source of shared accountability and confidence in new code
- A chance to fail early when priorities and product potential are uncertain, while maintaining quality benchmarks
What is not a code review?
- A form that one fills out to track code changes in a codebase
- A place to argue about, assert or defend the merits of one’s choices
- The way to merge into a deployment branch
- A representation of Jira ticket status
- A place to collect as many changes as possible to deploy at once
- A guarantee of 100% stable and bug-free code
What’s the point?
As engineers, the quality of the codebase is one of our core responsibilities. Code reviews are the final line of defense before code enters a codebase.
🤔 Think of a time when you merged something and later revisited it to improve the quality, stability and maintainability of the code. I don’t know about you, but in my experience those chances are few and far between.
Once code is in place, it becomes a challenge to address issues that do not break core functionality. The natural outcome is to add a Tech Debt ticket to the backlog somewhere. These are hard to prioritize because other priorities take precedence.
At limehome we are a fully-remote agile team, and we want to take advantage of what code reviews have to offer. Rubber-stamping or bypassing code reviews is not too different from merging to your release branch or main and pushing directly to remote.
Code reviews encourage early failure, and this has a positive downstream effect for our end-users. Conversely, our customers and brand are at risk if we opt to fail later. How does this scale our organization and engineering talent?
- By sharing feedback before code is merged, it becomes a learning opportunity for the MR author
- On a broader spectrum, all engineers are empowered to review and gain visibility in other projects
- As a leader, you know who is accountable for a code change and all the context that comes with it
The anatomy of a productive code review
Code reviews should not be a burden:
- Review turnaround should be fast for the author and the reviewer
- They are not there to boost egos or to show off
- The goal is to get code merged and to unblock each other
- The outcome is confidence that something was merged that everyone feels good about
Roles
To set the stage, let’s take a look at what each actor can expect to achieve in this process:
Author
- They create an MR to get code merged
- They want this to happen quickly to unblock themselves
- Feedback is desired on the implementation
- Confidence to merge and shared accountability for the code change
- Someone else to understand the code they wrote
- Learn something new
- Share an interesting piece of work
Reviewer
- Get MR code merged
- They want this to happen quickly to unblock themselves
- Feedback to meet quality, maintainability and stability standards
- Assess the code and share accountability
- Understand the codebase and recent activities
- Learn something new
- More than one reviewer offers additional perspectives
All engineers are expected to play both parts at some point, regardless of experience and tenure. The desired outcome is for all viewpoints, experiences and knowledge to be shared no matter how old or new.
How does one author an easy-to-review MR?
Keep it short
A long MR does no favours to anyone involved. It requires more time and overhead to review due to sheer quantity. The result is a large time investment to craft the MR, false approvals, quality leakage and significantly longer turnaround on reviews. Not only will the first review take longer, subsequent rounds of comments will also take longer to review.
As a reviewer, feel empowered to request that a large MR be broken down into smaller MRs. Otherwise, the duty of a reviewer cannot reasonably be fulfilled. This is a collaborative effort, so be prepared to support the author.
Be clear in your expectations
Your MR should give the reviewer all the required context. That includes a clear description, test instructions and any caveats. Be aware that the person reviewing your MR may not be from your team. If the change is small enough, any engineer can assess the quality of your work. If that is not acceptable, be specific about who the reviewers should be.
As an author, preparing an MR with sufficient context and selecting the right reviewers will prevent questions and comments like:
- “What is this?”
- “Why are we using this?”
- “This wasn’t in the description.”
- “How do I test this?”
- “I don’t know about this project/this codebase.”
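As an illustration, an MR description that avoids these questions might cover the following; the structure is only a suggestion, not a required template:
- What and why: a short summary of the change and the ticket or goal it relates to
- How to test: the steps or commands a reviewer can use to verify the behaviour
- Caveats: known limitations, follow-ups or anything intentionally left out of scope
- Reviewers: who should look at it and why, such as domain knowledge or an affected service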
Choose your reviewers wisely
Be careful with how many and which reviewers you select

There are no hard and fast rules here. When requesting reviewers, be cognizant of the reason they are there. Perhaps they have specific knowledge that is needed, or they are part of your immediate project team.
However, more reviewers may lead to a situation with too many cooks and opinions. It is certainly not forbidden and may in fact be warranted for a critical change with a broad impact.
Am I happy with it?
🤔If you were to look at this code again in a year or if someone new joined the team and looked at this code, would it be easy to work with?
How does one give an effective review?
A review is effective when:
- Actionable feedback is shared quickly
- It is clear when feedback requires or does not require an action
- Feedback is addressed quickly
- Waiting for a review is asynchronous and does not block other work
- There is no pressure to approve
- The reviewer is comfortable with sharing constructive feedback and asking questions
- The stakeholders of impacted code are in alignment with the approach and implementation
- The stakeholders of impacted code believe that it meets our quality standards
Reject and redirect feedback fast

A review request is received.
🤨 Confused?
😐 No idea what this project is?
🤯 Never did frontend work before?
🏝️ On vacation?
🥵 Too busy?
🐣 Never seen this team before?
Options for what to do next:
- Redirect the review to someone who you know can take it on
- Comment in the MR why you cannot perform the review
- DM the MR author with your reasoning why you cannot perform the review
These actions should be taken immediately; they reduce overhead for both the author and the reviewer, and allow the author to take action and get unblocked as a result. In a way, it’s an example of actionable feedback…
Giving constructive feedback
Feedback, even outside the context of a code review, should be constructive and actionable. Without a recommendation or clear action, asynchronous communication is subject to misinterpretation. It is important to be objective. By sharing reasoning, risks and impact of a suggestion, the receiver is more likely to understand that perspective. If the feedback is ambiguous then one can expect the same on the receiving end.
let varX;
...
if (!varX) return false;
Looking at the code sample above, an example of constructive feedback might look like this:
By using a falsy condition on line 3, the expression could be evaluated as true depending on the value even if it is not null or undefined. I would suggest something like this:
if (varX == null) return false;
This would evaluate as true only if varX is null or undefined and would correctly evaluate to false with values like 0 and ''.
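To make the difference concrete, here is a quick check that can be run in any JavaScript environment; varX and the sample values are purely illustrative:
let varX = 0;               // a legitimate value, not a "missing" one
console.log(!varX);         // true  -> the falsy check would return early
console.log(varX == null);  // false -> the null check lets 0 through
varX = undefined;           // genuinely missing
console.log(!varX);         // true
console.log(varX == null);  // true  -> both checks agree here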
This is an oversimplified example, but the components of actionable feedback are in place. Do not take this as the rule for how review comments should look - they can be as wordy or concise as you like. However, the feedback and next step should be easily understood.
It does not hurt to indicate whether the comment is a nitpick (“nit”) or something that is required for approval.
Assessing code quality
A comprehensive guide is not the intention in these next few sections. Much of it comes with experience and learnings. Each team may have their own guidelines and reviewers should be realistic and pragmatic. Quality should be addressed when it impacts the goals of the engineering org or those of the team.
Typically, quality is a concern when coding style standards are not met or if code smells appear. If something is found in a review that was not addressed by linting then it should be added to the linter configuration right away, either in the same MR or in a subsequent MR. Only addressing the problem in the code being reviewed is not sufficient because it allows the issue to resurface in future code changes.
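For example, assuming ESLint is the linter in use, a convention like “always use strict equality, but allow == null for null/undefined checks” (the pattern suggested in the earlier review comment) could be encoded in the shared configuration so it never needs to be re-litigated in a review. The file name and rule choice here are only illustrative:
// .eslintrc.js
module.exports = {
  rules: {
    // Enforce === and !==, but allow == null as a shorthand
    // for "null or undefined" checks like the one suggested above.
    eqeqeq: ['error', 'always', { null: 'ignore' }],
  },
};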
Subjective stylistic preferences have a place in assessing code quality; however, make a clear distinction between opinion and fact. One’s preference does not invalidate another’s. If one option is preferable to another, the facts should support that. Otherwise, it is only an opinion and the author is free to take it or leave it.
Assessing stability and performance
When reviewing code, it is vital to pull from experience and knowledge to address potential risks and bottlenecks. In the code deployment journey, there are several points at which these types of issues can be caught, each performed by a different group: author and reviewer manual testing, automated test suites, integration tests, QA test cases and regression, and load testing.

Each type of assessment covers parts of both stability and performance. There is no prescribed set to run for every code change. As a reviewer, it is important to flag whether you believe a type of assessment is required for the code change being reviewed at that moment.
There is no one-size-fits-all approach, for good reason. Running all assessments is expensive, time-consuming and blocking. However, running none introduces significant risk. Work with the author and QA to determine a suitable balance. To illustrate with a few examples:
- For a small content change it may be excessive to do anything more than a manual test by the code author and to rely on automated test coverage
- Refactoring a couple of classes might require updating automated tests and adding to or assessing the outcome of integration tests
- Adding new functionality limited to your domain may warrant new automated tests, integration tests and manual testing of some new test cases by QA
- Adding new functionality that depends on or impacts other services may need the coverage brought by manual testing by all parties, automated test suites and a QA regression that covers all affected services
- Optimizations in a critical path might require author and reviewer manual testing, QA tests cases on the path and new/updated load testing procedures
Every example comes with a “may” or a “might” because this assessment can only come from those involved with the review and deployment process. As a reviewer, your role is not only to execute certain assessments, but also to suggest them before it’s too late.
Assessing maintainability
The goal of assessing maintainability is to give the gift of pleasant code to a future self. There are no strict rules here, however it is desirable to keep implementations as simple as they need to be. Some questions you may ask yourself or thoughts that might come up during a review:
- Will this be easy to maintain?
- Would someone new be able to work on this piece of code?
- Will automated tests pass if this code is refactored?
- This looks like the pasta I ate last night
- I don’t understand
- There are so many lines
These are signs that something might be wrong. It may turn out that nothing is, but it is probably time to dig a little deeper. It might help to have a short pairing session to clarify any complexity. Any warranted complexity should be easily explained by the author. A code change suggestion with its benefits and drawbacks is helpful and can be a source of education for the team and the collective.
In the CI process, SonarCloud is the tool of choice for static code analysis. It has markers for code complexity, maintainability and security which makes it a great place to look once in a while for such flags. Each merge request will trigger SonarCloud and a summary with the above indicators will be provided in a comment after the analysis.
By not making sure code is maintainable before it enters the codebase, we are choosing to take on technical debt.
Bringing your domain knowledge and experience
A code review is your opportunity to contribute without being the code author. It is the moment to raise any concerns and to protect yourself and those around you from future headaches. Instead of treating it as a chore or something to be rushed, take it as a chance to teach a new concept or to encourage breaking a habit. Do that constructively and all engineers will grow to do the same.
Align on goals and communication
A code review is not a one-way street. Authors spend time writing MRs and reviewers spend time reviewing them.
As a reviewer, give feedback quickly. If you are not an appropriate reviewer, indicate that to the author so another reviewer can be selected in your place. You could also redirect it yourself if you know the best person to take over. Be prepared to share reasoning and insight into your feedback. It is reasonable to expect that you may miss some context. Go into a review objectively and help the author get to the finish line.
As an author, likewise be prepared to share reasoning and insight. Reviewers may not be aware of everything, but like any communication tool it is up to the people to resolve misunderstandings and uncertainties.
For both authors and reviewers, be clear about what the goals are. Feedback can be shared without it being a requirement to address it; whether it must be addressed is determined on a case-by-case basis, with small nitpicks or subjective opinions as common examples of optional feedback. It is up to those involved to resolve feedback and merge code in a timely manner.
Other tools that can move things along include pairing sessions over an MR and quick calls to clarify text-based comments.