The move to test automation at FINRA, as at many companies, didn't happen overnight. How did we go from manual testing to mostly automated tests built on multiple open source projects? We sat down with Raghu Raman, Senior Director for Development Services, to find out.
It was almost purely manual and defaulted to a waterfall process. Applications would be built and deployed to an environment; then a QC person would interact with the application to test it, much as a user might. This process could only test the system after deployment. In 2006, our first attempt at automation was twofold. The first was the use of QC-focused tools such as Mercury's WinRunner and other record-and-playback tools. The second was proprietary scripting, using shell scripts and Perl, all customized for specific apps. However, there were flaws with this approach. The scripts weren't transportable to other applications and did not scale meaningfully. We found that only one or two people on a team were capable of writing automation scripts; if that person left FINRA or moved to another team, the team would regress. The other issue was that the process was always two-phased. It was still waterfall with a long QC cycle: automation was a stage after manual testing, and it never caught up.
Starting in 2008, we moved away from proprietary tools like WinRunner and QuickTest Pro. We wanted standardized, open source tools that anyone on the team could use, rather than tools designed around the assumption that a QC professional can't code.
We also developed XCore, our own tool applicable to all browser-based testing, built on the open source Selenium WebDriver. We even open sourced some of it. At that point, multiple teams adopted automation. Today, it's spread across multiple teams and some 50-60 applications.
The major problem in this phase was that our test objectives were met through end-to-end tests of a fully deployed application. This resulted in long-running, flaky tests, and the infrastructure needed to support multiple regression runs became enormous.
In the third distinct phase, starting in 2012, we adopted the pyramid model of testing: we try to meet each test objective at as low a level as possible. For example, we focused on meeting test objectives as unit tests; if not possible there, then at an API level, and so on. We reserved end-to-end tests for a few cases where we mimic user interaction with the system.
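As a sketch of what "meeting a test objective at the lowest level" looks like, here is a minimal Python example. The validation rule and its name are hypothetical, not FINRA code: the point is that a rule that could be exercised by driving a browser through a deployed form is instead verified directly as a unit test.

```python
# Hypothetical validation rule (illustrative only, not FINRA code).
# In the pyramid model, a rule like this is verified directly at the
# unit level instead of through an end-to-end test of a deployed app.
def is_valid_crd_number(value: str) -> bool:
    """A CRD-style identifier: 1-10 digits, no leading zero."""
    return value.isdigit() and len(value) <= 10 and not value.startswith("0")

# Each check runs in milliseconds; the equivalent end-to-end test
# would need a deployed application and a live browser session.
assert is_valid_crd_number("123456")
assert not is_valid_crd_number("12a456")   # non-digit character
assert not is_valid_crd_number("0123")     # leading zero
assert not is_valid_crd_number("")         # empty string
```

The same objective met higher in the pyramid (API or UI level) would cost orders of magnitude more time per run, which is exactly what the pyramid model pushes teams to avoid.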
One of the major benefits we realized was that cross-browser testing became easier. We write a test once and run it against any WebDriver-compliant browser. This became especially valuable as users adopted multiple browsers, particularly Chrome and Firefox. An important part of this shift was the need for a majority of testers to be code-capable. We started using tools that allowed structural testing of code within the application stack; tools like Jasmine, Karma, Protractor, and REST Assured let us interact with the code in a more significant way.
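The "write once, run against any WebDriver-compliant browser" idea can be sketched as follows. This is a minimal illustration, not XCore itself; the URL and page title are made up, and a stub driver stands in for a real browser so the sketch runs anywhere. The test body depends only on the WebDriver surface (`get`, `title`), so switching browsers changes only how the driver is constructed.

```python
from dataclasses import dataclass

def login_page_shows_title(driver) -> bool:
    # Written once against the WebDriver surface (get/title); nothing
    # here names a specific browser. With real Selenium, `driver`
    # would be webdriver.Chrome(), webdriver.Firefox(), etc.
    driver.get("https://example.internal/login")  # hypothetical URL
    return "Login" in driver.title

@dataclass
class StubDriver:
    # A minimal stand-in exposing the same get/title surface, so this
    # sketch runs without launching a browser.
    title: str = ""

    def get(self, url: str) -> None:
        self.title = "Login - Example App"

# With real Selenium, the only per-browser code is driver construction,
# e.g. [webdriver.Chrome(), webdriver.Firefox()]:
for driver in [StubDriver()]:
    assert login_page_shows_title(driver)
```

In practice, the list of drivers is the only thing that varies per browser, which is what made adding Chrome and Firefox coverage cheap.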
Altogether, these changes made us more nimble and reduced our reliance on end-to-end tests. We began testing even before committing code to the repository. Quicker testing meant faster quality feedback; it demolished the wall between coding and testing, made our collaboration more fluid, and helped teams become more agile. Automation has also contributed to finishing more stories in every sprint, and QC is becoming more about automation and building the tools for it. One of the desired outcomes we realized was a vast reduction in regression test cycle time. Regression tests were onerous: they could take multiple weeks, delaying releases. Now, full regression test cycles take a few hours at most, and most of the work is automated. In the previous era, we would run regression tests before releases three or four times a year; now we can run them daily, allowing for smaller, continuous deliveries.
We expect to build on this pyramid model of testing, and we hope to raise our unit test coverage thresholds to further strengthen our continuous delivery.
Most important: improving time to market. Today, users are generally accustomed to getting features in a month or a quarter; we are rapidly moving toward delivering features at the boundary of every two-week sprint, and sometimes even earlier. The evolution of testing is a vital component of accelerating delivery.
Over the past ten years, we've seen significant change in the mindset and skill set of the test team. Part of that change has been the team's tremendous growth over the past few years: from 2012 to now, we went from almost no junior members to teams that are 30-35% junior. What we have found is that when we hire the cream of the crop from top institutions, it's a symbiotic relationship. We find people who come from a computer science background and are eager to learn; at FINRA, we provide them a vibrant learning opportunity and a challenging but collaborative environment for growth. This symbiosis leads to very little turnover, and we reap the benefit of their enthusiasm and knowledge.
A few years ago, one of our testing members, a Virginia Tech graduate, collaborated with others to create MSL. It was picked up by GTAC, and we presented it there. For us, it was like our team making it to the Super Bowl. This was his first job; three years later, he's still here.
Cover photography by Michael Scheidt