Part 1: How the process of testing is accelerating
Technology is advancing quickly, and every company wants to capture as much commercial value as possible from these advancements. Employees and customers alike expect access to “the latest” tools, so every company must decide which new digital technologies to adopt to satisfy that demand. The technology products companies develop and launch are integral to their futures, but they must also be inextricably linked to a rigorous approach to testing and Quality Assurance (QA). Capability in one without the other is potentially disastrous, so testing processes are now as high-profile and critical as the engineering processes.
In the era of Microservices, APIs, Big Data, Mobility, IoT, Blockchain and AI, let’s take a look at how and why these new movements affect testing and how Ness is addressing them.
In Part 1 of this two-part blog post series, I’ll share some observations about how the testing process itself is changing to support faster release of digital platforms and products to market:
QA is changing
In the testing domain, I see the industry becoming more focused on Quality Engineering (QE) versus just Quality Assurance.
Back in the formative years of Agile, we got used to two-week sprints for development followed by two weeks of testing the code in the next sprint. There was a real split between rudimentary “what works/doesn’t work” testing of the code and then, much later on, testing for performance, usability, and security. It was quite linear and waterfall-like, and not at all in the spirit of the Agile Manifesto, which calls for “early and continuous delivery of valuable software… with a preference for the shortest timescale.”
Quality Assurance (QA) is really a remnant of the waterfall development lifecycle era and geared toward long delivery cycles with a natural inclination to “test when done.”
With the modern urgency to release products and applications faster, traditional QA has become an impediment for product companies: the process is too slow, and QA frequently becomes the main bottleneck. Working out where to gain efficiency led to the emergence of Quality Engineering (QE).
QE is taking QA’s place
Where Quality Assurance is the overall process (the quality system) of ensuring a final product meets specified requirements, QE is the approach used to define, maintain and improve that system. It is a more ambitious, prominent, and muscular version of what has gone before, and it signifies a monumental “shift-left” where the subject of testing and quality arises much earlier in the timeline of software creation.
QE’s emergence has everything to do with the pursuit of speed in getting new features to market with confidence. Quality Engineering is all about managing testing much earlier: test often, test fast, fail fast, fix fast and automate as much as possible to speed up testing even more.
QE’s role in testing today
By testing earlier, the process has become more agile and much smarter. We now test smaller packets of completed code and can therefore make adjustments more economically than in the old days, when it felt as if we were testing everything all the time.
With modern approaches like Test-Driven Development (TDD), Acceptance Test-Driven Development (ATDD), and Behavior-Driven Development (BDD), there is increasing benefit to surfacing quality concerns much earlier in the development life cycle. QE delivers that benefit with a fixed eye on efficiency: automating tests, and isolating what has already been tested so we don’t need to test it again.
Now that we are testing as we develop, our automated test suite grows all the time. This means that at the end of every sprint, we can run the entire suite to see if anything has broken: we have automated regression testing. Automation brings greater certainty and predictability that everything still works, and it removes the frailties of manual testing, which sometimes caused confusion when changes broke features that had already passed.
A New Type of Engineer Emerges
With the new emphasis on testing (and test automation) much earlier in the cycle, there is a growing demand for a new kind of engineer: a Software Development Engineer in Test (SDET).
SDETs are developers themselves. They understand code and development practices and have the technical mindset to write, run, and automate test cases. Crucially, they also have the skills to find the reasons for failed tests and to adjust the code to fix the failure. I can tell this new role is helping Ness scrum teams move with even greater speed and efficiency.
This is a new career path for test engineers to follow. In training and building out our SDET capabilities, we are cross-training test automation engineers so they are able both to automate processes for thorough testing and to develop code on their own. They play a contributing role in the creation of all our new software. We know we are in esteemed company, because SDET was originally a Microsoft term and is now popular with Facebook, Amazon, and Google as they strive to release at speed with high quality.
As was the case with Agile and then DevOps, Ness has always been on the leading edge of implementing practices that facilitate faster, quality software releases. It’s a priority for us because we know speed and reliability create an advantage for our clients when they release new products. Over the years, we have also gotten very good at training and scaling new capabilities across our engineering teams, so we can deploy new practices quickly. In Part 2 of this blog post, I’ll share perspective on how we’re using QE approaches to support various development opportunities in IoT, open source, and service design, to name a few.
Click here for part 2