Test Automation Solves Everything!

Readers of this blog might be wondering “how could you write several blog posts about software testing, and barely even mention test automation?”  It’s a fair question, since test automation is an absolutely critical tool in any software testing strategy.  This has become accepted wisdom in the software industry, and quality engineers (and their management) spend a lot of time selecting automation frameworks, then extending and customizing them.  They hope that if they implement a good automation framework and automate “all the tests”, they can achieve maximum speed of delivery and good quality.  This is just like hoping that getting the fastest car is all you need to do to minimize the time it takes to drive from San Francisco to New York.  But other things come into play, such as making sure you’ve got access to good navigational tools (GPS, map, whatever), because going as fast as possible in the wrong direction won’t get you anywhere fast.  So let’s put on our Omniscient Tester hat and think about what it takes to get the most out of automated testing, and what the keys are to doing it well.

Let me start with a true story from my career in QA.  I once worked at a company where a software engineering team in my division, working on a different product, was very proud of its investment in test automation.  Every week in our divisional status update, they proudly reported the number of tests they had automated and their pass rate.  The number of tests grew to over 200,000, and the pass rate varied between 98.0% and 99.5%.  Very impressive!  You’re thinking this must be a really high-quality product, and the team must be able to deliver new releases very quickly, right?  Well, this was an enterprise software product, and the team always missed its scheduled release dates.  Even worse, the customer base was in active revolt, and engineers had to travel all over to customer sites to troubleshoot and fix issues.  So while the team seemed quite proud of their large collection of automated tests and their high pass rate, I used to marvel at how much they liked to advertise to the whole division how poorly their tests actually found bugs!  We knew there were lots of bugs in the software, and here they had over 200,000 automated tests that were not finding them!

By now you may have figured out that the title of this post (“Test Automation Solves Everything!”) is tongue-in-cheek.  Although you’d be crazy to try to do software testing without it, you have to do it right to be efficient and effective.  So let’s start looking at what it takes to do automated testing right.

First, let’s examine the notion of automating “all the tests”.  There are two pitfalls with this notion.  The first is that there is no such thing as “all the tests”.  Any software that is even slightly complicated requires such a large number of tests to find every conceivable bug that we might as well consider that number infinite.  When people say “all the tests”, they generally mean “all the existing test cases”.  If that’s your navigational tool for getting where you want to go, your speedy car is already getting questionable directions, depending on how good, bad, or incomplete the existing set of test cases is.  The second pitfall is that automated tests have an inherent maintenance cost.  As the product software changes, or even as the infrastructure on which it’s built is updated, the tests will also need to be updated.  So while instinctively we might think that more tests are always better, we need to make sure that every automated test we create and maintain actually has a value higher than the cost to create and maintain it.

In order to keep this post to a manageable length, I’m going to wrap up with a controversial statement, then a list of topics about efficient and effective test automation that can be explored individually in later posts.

Controversial statement: Humans are better at finding bugs than automated tests.  But wait, what about the earlier statement that automated testing is critical?  These two statements aren’t contradictory, for a very important reason.  For an automated test to find a bug, it must predict exactly what the symptoms of that bug are, and the conditions required to cause it to occur.  For new functionality, or significant changes to existing functionality, it can be nearly impossible to predict all the bugs ahead of time and build the right automated tests.  Even the Omniscient Tester only knows the bugs after they’re created!  Humans are great at recognizing that things aren’t as they should be, even when the error is unexpected.  New bugs tend to be wildly unpredictable for a very good reason: developers tend to get the predictable cases right.  It’s the more esoteric code paths and functionality that tend to get less thought and be implemented incorrectly.  On the other hand, humans tend to be lousy at reliably running tests that almost always pass.  If you had to run 1000 manual tests on every build, just to make sure that some existing functionality didn’t break, and you knew that most of the time these tests would all pass, you would quickly get weary and bored out of your mind.  Many years ago, in some companies where I worked, many people had jobs exactly like this.  The good testers used their brains and learned to “cheat” by talking to developers and making educated bets about which tests they could skip.  The bad ones just re-ran all the tests mindlessly and took random shortcuts that could easily result in missing bugs when they did occur.

So here’s a starting point for my philosophy about which tests to prioritize for the investment in automation.  Definitely automate the tests that need to be run prior to any release to demonstrate that the software is working correctly, and that will usually pass.  These tests should focus on main functional paths and be designed to find bugs that would render the software unusable in significant ways if they were to escape into production.  The next priority should be to automate tests that will find bugs that would block significant numbers of other tests (automated or manual) from being run.  We’ll delve into this more deeply later, but it’s a back-up plan for imperfections in the application of Principle 4 (write tests so they’re not easily blocked by other tests).  There are many more types of tests that can make sense to automate, but these two provide a good starting point, and from there, the decision process gets a bit more complicated.
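To make that a little more concrete, here’s a minimal sketch, assuming a pytest-based suite, of how those two priorities might be tagged with custom markers so the release-gating tests and the “unblocker” tests can be selected and run first.  The fixture, the fake client, and the test names below are hypothetical placeholders, not any real product’s API:

    import pytest

    @pytest.fixture
    def app_client():
        # Hypothetical stand-in for whatever driver your automated tests
        # use to exercise the software under test.
        class FakeClient:
            def log_in(self, user, password):
                return {"ok": True, "session_token": "abc123"}

            def schema_version(self):
                return 42

            def expected_schema_version(self):
                return 42

        return FakeClient()

    @pytest.mark.smoke
    def test_user_can_log_in(app_client):
        # Priority 1: a main functional path whose failure would make the
        # release unusable in a significant way.
        response = app_client.log_in("alice", "correct-password")
        assert response["ok"]
        assert response["session_token"]

    @pytest.mark.unblocker
    def test_schema_is_current(app_client):
        # Priority 2: a bug here would block large numbers of other tests
        # (automated or manual) from being run at all.
        assert app_client.schema_version() == app_client.expected_schema_version()

In a real suite the markers would be registered in pytest.ini (or whatever your framework’s equivalent is) so the runner recognizes them, and the release pipeline would run the smoke set first, for example with “pytest -m smoke”, before spending time on anything else.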

So without going into detail now, here are some other key subjects to explore that lead to efficient and effective automated testing:

  • Ensure the test environment where the automated tests are run sufficiently mimics the production environment.
  • Be sure to think about the three key areas of system configuration, test data, and functional behavior.
  • Validate compatibility of the versions of the software under test, the data model under test, the test data, the system configuration, and the test cases themselves.
  • Create automated tests that are easy to maintain and update.
  • Focus at least as much, if not more, on the validations (detection of bug symptoms) as on the triggering sequences of events.
  • Minimize test blockage by implementing tests as small, self-contained units as much as possible, avoiding lengthy sequences of events with lots of validations.
  • Ensure repeatable initialization of the beginning state before the steps of the test are executed (see the sketch just after this list).
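To illustrate the last three bullets, here’s another minimal sketch, again assuming pytest and using a purely hypothetical in-memory inventory service as the software under test.  Each test is a small, self-contained unit, starts from a freshly initialized state provided by a fixture, and ends with explicit validations of the symptoms we care about:

    import pytest

    class InventoryService:
        # Hypothetical stand-in for the software under test.
        def __init__(self):
            self._stock = {}

        def add_stock(self, sku, quantity):
            self._stock[sku] = self._stock.get(sku, 0) + quantity

        def reserve(self, sku, quantity):
            if self._stock.get(sku, 0) < quantity:
                raise ValueError("insufficient stock")
            self._stock[sku] -= quantity

        def available(self, sku):
            return self._stock.get(sku, 0)

    @pytest.fixture
    def inventory():
        # Repeatable initialization: every test gets a fresh service seeded
        # with known data, so no test depends on what another test left behind.
        service = InventoryService()
        service.add_stock("WIDGET-1", 10)
        return service

    def test_reserving_stock_reduces_availability(inventory):
        # A short triggering sequence...
        inventory.reserve("WIDGET-1", 3)
        # ...followed by an explicit validation of the expected result.
        assert inventory.available("WIDGET-1") == 7

    def test_overdrawing_stock_is_rejected(inventory):
        # A separate, self-contained test, so a failure here cannot block the one above.
        with pytest.raises(ValueError):
            inventory.reserve("WIDGET-1", 11)
        # Validate the symptom that matters: the failed reservation must not
        # have corrupted the stock count.
        assert inventory.available("WIDGET-1") == 10

Because each test sets up its own state and makes its own assertions, a failure in one produces a precise symptom report without blocking or distorting the others, which keeps both maintenance and debugging cheap.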

3 thoughts on “Test Automation Solves Everything!”

  1. Awesome post, Dave, and it makes a great point. Normally I would recommend testers getting their hands dirty in automation for the first time to really try as hard as they can to automate “all their test cases”, because only that way can they learn it is practically impossible.
    To me, automation should not be regarded as the “silver bullet” of software testing, and even if you are writing automated functional tests, this doesn’t mean your job will get easier. On the contrary, by automating the, let’s say, boring part of the test cases, your task as a tester actually gets even harder: figuring out more complex scenarios to test manually.
    Cheers 🙂

    • Thanks for your great comments, Viktor! I think you’ve expressed a very, very important insight about the value of automated testing. In fact, I think it’s true of any form of automation, be it software, mechanical, or whatever. By eliminating the time and effort required for people to do what I’ll call the tedious and time-consuming tasks, they can be freed up to do the more innovative tasks requiring human intelligence. I think many in the industry fall into the trap of expecting automated tests to find all the bugs. In fact, I think automated tests free up quality engineers to find more bugs.

  2. Pingback: QAshido – The path of the tester. Virtue # 1 – Technical skills. | Mr.Slavchev()
