The Evolution of Our Release and Quality Process

Azadeh Pourpakdel

February 12, 2025 · 5 min read

When I joined the team as the first QA engineer in 2019, we were preparing to release the first version of our website. At the time, our process relied heavily on manual testing. We later implemented some automation, but it had lower priority; as the only QA for a long time, I simply didn't have the time or the team members to cover everything. Five years later, we've come a long way and made great progress in our quality process. It was a tough journey, but looking back, I believe it was worth it.

Kayaking in the sea

What did our release process look like?

We used to have long regression testing sessions before each release. I remember staying late to make sure everything ran smoothly and nothing was broken. Releases were always stressful for QA and the other engineers. We had a rotating release manager to coordinate each release, and a release call was held before every deployment. With weekly releases on a fixed schedule, QA always had to be available to catch major issues before or after the release, which made taking more than a week's vacation difficult. We wanted to be the first to know if something went wrong, not our customers.

The entire quality process was a headache, and the lack of testing at the implementation stage made things harder. QA engineers often ended up acting as quality gatekeepers, spending more time following up on fixes and validations than improving our testing strategies.

We also monitored our Change Failure Rate (the percentage of deployments causing a failure in production), which ranged between 40% and 60%. This wasn’t good enough, so we knew we needed to change something.
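As a rough illustration, Change Failure Rate is simple to compute (the numbers below are made up for the example, not our real deployment data):

```python
# A minimal sketch of the Change Failure Rate metric.

def change_failure_rate(failed_deployments: int, total_deployments: int) -> float:
    """Percentage of deployments that caused a failure in production."""
    if total_deployments == 0:
        return 0.0
    return 100.0 * failed_deployments / total_deployments

# e.g. 5 failing deployments out of 10 in a period falls right
# inside the 40-60% band we were seeing back then.
print(change_failure_rate(5, 10))  # 50.0
```

Tracking this number over time is what told us the problem wasn't getting better on its own.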

Over time, we expanded the QA team, assigning a dedicated engineer to each team. But this didn’t fix the problem, because the real issue was deeper. What we needed was a shift in mindset.


From Ice Cream Cone to Pyramid

We started highlighting the importance of the testing pyramid, frequently discussing it with the engineering team to ensure everyone was on the same page. As you can imagine, mentioning it once or twice wasn't enough, but eventually the repetition paid off. The goal was also to focus on testing at the lower levels, such as unit and integration tests, so that our pyramid wouldn't turn into the "ice cream cone" anti-pattern that leads to relying too much on manual testing. As Martin Fowler says, "If you get a failure in a high-level test, not just do you have a bug in your functional code, you also have a missing or incorrect unit test." By addressing small errors early, we can often prevent larger failures.
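To make the bottom of the pyramid concrete, here is a toy example (a hypothetical pricing helper, not from our codebase) of how a fast unit test catches a small error long before an end-to-end booking flow would surface it as a vague UI failure:

```python
# Hypothetical helpers for a stay within the same month.

def nights_between(check_in_day: int, check_out_day: int) -> int:
    """Number of nights between check-in and check-out."""
    return check_out_day - check_in_day

def total_price(check_in_day: int, check_out_day: int, nightly_rate: float) -> float:
    """Total price of the stay at a flat nightly rate."""
    return nights_between(check_in_day, check_out_day) * nightly_rate

# A unit test like this runs in milliseconds and pins down the exact
# function at fault, instead of a red end-to-end run hours later.
def test_two_night_stay():
    assert nights_between(10, 12) == 2
    assert total_price(10, 12, 80.0) == 160.0
```

A failing end-to-end test would only tell us "the booking total is wrong"; the unit test tells us which calculation broke.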

As part of this shift, we also reassessed our automation tools. Initially, we used Robot Framework, but due to its limitations and the need for more robust and scalable automation, we later switched to Cypress and eventually to Playwright for end-to-end and integration testing. This transition allowed us to achieve faster, more reliable tests and better integrate with our development workflow.

QA Testing Pyramid

While we've primarily followed the Testing Pyramid, the Swiss Cheese Model, introduced by James T. Reason, provides another interesting way to think about testing layers. In this model, each level of testing is like a layer of Swiss cheese, with holes of different sizes. These holes represent potential gaps in coverage. If the holes align across the layers, defects can slip through the entire testing process. However, if each layer is covered with sufficient tests, the chances of the holes aligning are minimized. This increases overall defense, making it much less likely for a bug to reach the end user. Although the Swiss Cheese Model is typically used in risk management for more complex systems, it, much like the Testing Pyramid, emphasizes the importance of testing at multiple levels, as no single test can catch every defect.
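The intuition behind the model can be sketched in a few lines: if each testing layer independently catches a defect with some probability, a bug escapes only when it slips through every layer, i.e. when all the holes line up. The per-layer catch rates below are invented purely for illustration:

```python
# Swiss cheese intuition: multiply the per-layer "slip through"
# probabilities to get the chance a defect escapes all layers.

def escape_probability(catch_rates: list[float]) -> float:
    """Probability a defect passes every layer, assuming independence."""
    p = 1.0
    for rate in catch_rates:
        p *= (1.0 - rate)  # the defect slips through this layer
    return p

# Three modest layers (unit, integration, end-to-end) still combine
# into a strong overall defense: only ~7.5% of defects get through.
print(round(escape_probability([0.7, 0.5, 0.5]), 3))  # 0.075
```

Even with the independence assumption being generous, the point stands: no single layer needs to be perfect for the stack as a whole to be effective.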

Swiss Cheese Model

Shift in Testing Approach and Culture

Together with the department leadership, we set the goal of adopting a shift-left approach to testing across the department. Shift-left means moving testing earlier in the development cycle and treating quality as a shared responsibility, involving everyone from QA and developers to designers and customer support. However, this shift was very challenging, because testing was widely seen as the QA team's responsibility rather than a shared effort.

To make it work, we went a step further and organized workshops and knowledge-sharing sessions within the team to show, in practice, what QA does, how others can support it, and why everyone's contribution to product quality matters. Despite a slow start, the shift gradually encouraged stronger collaboration: developers began to play a more active role in testing and took greater ownership of their work, while QAs could focus on writing end-to-end automation. By shifting testing to earlier phases, such as during development and at the integration stage, we were able to catch issues sooner, improve code quality, and ultimately reduce the time spent on manual testing later in the process. Alongside this, we adopted trunk-based development and continuous integration practices, which supported our shift-left mindset. By merging smaller, more frequent changes, we could test and deploy faster, improving agility and reducing the risk of large, complex releases.


Test Visibility and Real-Time Monitoring

As part of our continuous improvement efforts, we also integrated Datadog with our CI to monitor test reports, giving us better visibility into test performance, reliability, and flakiness. With the testing dashboard, we can now easily track key metrics such as execution time, failure rates, and overall success rate, helping us optimize testing and improve CI pipeline stability.

Datadog Test Dashboard


What does the release process look like now?

As I mentioned earlier, we used to have weekly releases. Since switching to trunk-based development, we deploy to production more than 50 times weekly. We also significantly increased our end-to-end automation coverage, allowing us to release more confidently and efficiently. When we transitioned to Playwright, we migrated everything and rebuilt our automation structure from scratch. Now we've reached nearly 80% automation coverage, a remarkable milestone.

Qase Automation Dashboard

By embracing automation and continuous deployment, we've turned releases into routine events rather than stressful ones. This shift has improved our release speed and increased confidence in every deployment.

Our journey has been long and challenging but also rewarding. We’ve built a stronger, more efficient process that benefits both QA and the team. However, this is just the beginning. We continue to refine our process to make deployments even smoother every day.




