What does it mean to talk about the Future of Software Testing without talking about "Quality" (whatever that is)? I believe that Testing is a means of helping you attain the desired quality, but that on its own it is not an indicator of what the delivered quality will be. I think it is fair to speculate on the practice of Testing in isolation from the idea of Quality, just as it is fair to speculate on the kind of vehicles you use for travel without claiming they are a clear indicator of the quality of the journey.
When it comes to predicting the future, how far ahead do we look? It used to bother me that H.G. Wells chose to go 800,000 years into the future in his famous work, The Time Machine. Some authors go 100 years ahead, or even just 5 or 10, to tell their tale. I will not be writing about what will happen in 5 years' time, and I really hope it doesn't take 100 years. I don't know how long some of these events will take to manifest. I do have some idea of 'markers' that we may see along the way, though.
When it comes to Software Testing, two thoughts jump immediately to mind: 1) Software Testing is an inseparable part of the creative Software/Solution Development process; and 2) there will be many different possible futures depending on how you do testing today. Put another way, there is no one right way to solve a problem, and creating software is a complex problem-solving activity performed in shifting organisations of human interaction so there are many ways to do it. In my opinion, the technical 'fiddly bits' like code, tests and user documentation pale in comparison to the difficulty of those two primary challenges: (1) solving the real problem, and (2) working with others to get the job done.
When we ask what is the future of software testing, we are really asking what is the future of software development. So what will Testing look like in the future? Well, what does Testing look like today? What does Development look like today?
Scenario 1: No (internal formal) Testing
For some, testing is only done by the end users. This may be a single individual, a small group of people or a larger population depending on the application. For example, if you have a highly specialised application, the best testers may be the domain experts themselves (e.g. researchers, scientists, and so on). Aside from some quick checks made during development, no formal testing is performed prior to release of the software to the end user(s) for evaluation.
Non-critical software applications that people can live without if they fail are sometimes also released without any special internal testing phases. I have found that this often happens when a company is in a new niche market and has no competitors yet. These applications may be prototypes or ways to try to figure out user/market value. If the apps stop working, people can carry on without them.
I don't see the future of this kind of software development changing much. No (internal) Testers are affected in the present or future of this kind of development shop. I believe with the advances in development technologies, we will see the quality of deliverables improve over time although the activity of determining suitability or fitness for purpose will always remain. That is, specialised software will always require the approval of the expert customers. They are the testers here.
Scenario 2: Programmer-driven-testing
I believe that "testers" are "developers" by definition, so rather than saying "developer-testing" (which would be confusing) I will say the code jockeys are the ones who own and perform the testing here. This is different from the scenario above in that formal test frameworks are in place and testing happens prior to release to the customer or users. There are no separate testers or test teams within the company to take a second look at the system to make sure it "looks right" (big air quotes here). Many Agile Development shops operate this way, especially smaller ones.
Again, as there are no separate testers to speak of here, the future won't seem very surprising. I expect things will look pretty much the same as they do today - only the tools will become more sophisticated (more on that in the next scenario).
Scenario 3: Functional System Testing
Unfortunately, this describes a large proportion of the software development companies out there with QA/Test teams - the "traditional" software testing. It depends on the context of course, however my experience has been that this activity is largely a waste of time and money and is performed more for show and to satisfy lawyers (i.e. copious test paperwork as "evidence" of due diligence) than to actually raise software quality.
Anyone today whose job is to create mountainous test documentation that slows the creative development process, creates division and mistrust between project team members, and only serves to check that what has already been built has been "built as specified" is completely wasting everyone's time and money. I believe the term here is WOMBAT - waste of money, brains and time.
The future here is easy to predict. This tester role will completely disappear in the future as it is "wasteful" (in "Lean" development terms) and provides no additional value to the development process or solution quality. Sorry kiddos - adapt or die.
What will trigger this future? Simple. Responsible development practices and smarter development tools. When you take a look at the value that these test cases (i.e. usually very narrowly-focussed functional "checks") provide, they are generally standard, straightforward things like:
1. Is the feature element (button, field, widget, text, whatever) that you said is supposed to be there actually there?
- Test-Driven Development (TDD) is a development practice (available today!) that completely eliminates the need to have separate testers do this kind of thing manually. There is simply no good reason for this testing activity to continue in a manual fashion into the future. It shouldn't even be done manually today!
- (TDD-style) Automated functional "checks" directly linked to the code are way more efficient to maintain. They also facilitate good quality deliverables that don't degrade unexpectedly over time.
- I believe that (future) advanced development tools (programming languages, compilers, etc.) will automatically include model-based testing (MBT) subroutines that will automatically scan for such trivial aspects of developed code.
- There is really nothing magical in this kind of testing activity. Yes, it finds a lot of really good bugs. The only perceived "magic" here comes from inexperienced programmers who don't know how to develop better solutions.
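To make the first point concrete, here is a minimal sketch of what such a TDD-style automated check can look like. The `LoginForm` class and its element names are invented for illustration; a real project would query its actual UI framework rather than a stand-in:

```python
# Hypothetical UI model for illustration only -- a real project would
# render and inspect its actual UI framework instead.
class LoginForm:
    """Stand-in for a rendered form."""
    def __init__(self):
        self.elements = {"username_field", "password_field", "submit_button"}

    def has_element(self, name: str) -> bool:
        return name in self.elements

# TDD-style checks, written before or alongside the feature code.
# Each one replaces a manual "is the button actually there?" pass.
def test_submit_button_is_present():
    assert LoginForm().has_element("submit_button")

def test_username_field_is_present():
    assert LoginForm().has_element("username_field")

test_submit_button_is_present()
test_username_field_is_present()
```

Because these checks live alongside the code, they run on every build, which is what keeps deliverables from degrading unexpectedly over time.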
Generally speaking, I don't know when programmers stopped taking responsibility for such basic testing and checking activities as part of their coding tasks. If you go back to the 1960's and 70's, there were no separate testers - programmers did it all. This is easy stuff. I really believe that if programmers had continued to "own" this kind of testing, it would have been part of the development tools by now.
We took one step forward in the 60's and 70's and then two steps backwards in the 90's and 2000's. It's no wonder that the "agile movement" is trying to move programmers back in the right direction. When this quality/value ownership in development becomes more widespread, I will be happy to report that we will have achieved 1960's-level of development craftsmanship. Again. sigh.
Scenario 4: Business Analysts, Personas and Suitability Testing
Testing in some companies happens with BA's or other specialists who act on the customer's behalf to check that the application or system developed (SUT) meets the expected fitness for purpose. That is, does the SUT meet the business needs (rules, SOP's, statutes, industry standards, and so on) of the customers or users? These types of internal testers generally have some kind of industry or domain experience or knowledge.
In testing jargon, this is more "validation" kind of testing or "did we build the right thing?" Sometimes this is done with the help of "typical user profiles" called "personas." In the absence of formal requirements or tests, one can ask the question "what would user X do in this situation?" This kind of testing has more of a basis in business and psychology and doesn't presently lend itself to automation very well.
I believe that in the future, these kinds of tests will be automated as well, using intelligent systems and algorithms that can calculate the percentage probability of a developed solution falling within the desired user parameters.
I believe that the Eureqa system provides us with a glimpse of what intelligent computer systems can do today. With advancements in hardware technology, I believe it will one day be practical for humans to interact with computers via natural-language voice control and have the computers do these kinds of checks for us. The "comparing system" or "oracle" will group business rules together into a model, run through the SUT and heuristically compare the data output with the desired business model. At that point it is simple mathematics to report that the SUT meets approximately 72% of the desired model, with confidence limits, and to tell you which rules the SUT fails to meet.
We're not there yet. If this describes your job, I'm pretty sure your job is safe for the next 5-10 years (if you are good at what you do). Development of this kind of "oracle" really depends on a lot of things, including how quickly we get through scenario 3 above.
Scenario 5: Life-Critical Systems
This is more than simply a variation of scenario 4 above. These kinds of systems are things like medical devices, nuclear/energy management systems, aerospace and deep-sea technologies, and so on. Basically, any system whose failure will very likely cost someone their life.
When people's lives are on the line, I believe the responsible action will be to always have people involved in the evaluation and assessment of the SUT.
Yes, I believe that advanced development tools and technologies such as what I mentioned in scenarios 3 & 4 will greatly improve the foundational quality of all systems developed - including life-critical systems. However, the role of the responsible development organisation here will be to have good, smart, skilled people own the field trials in a way that determines "fitness for use" at a level well beyond what I have already described above.
If my word processor is unavailable, I may get annoyed but I will find another way to write a message. If a pacemaker stops running after two weeks of use, you can be sure that the patient involved cares about this problem in a really big way. As will his or her family.
Scenario 6: Black-Box System Testing, Para-Functional Testing, Exploratory Testing
This is a superset of scenarios 3 & 4 and complementary to scenario 5. In this role, the tester takes a look at more than just the functionality of the system and is trained to ask questions about the SUT and user expectations that sometimes challenge the current design of the developed solution.
A good Exploratory Tester identifies assumptions and asks open questions about them. For example, questions about the user experience, flow of data, security and privacy, internationalisation, reliability and many other facets that are often skipped or ignored in "traditional" software test teams.
I do this style of testing, and I know many good people who also do it around the world. Unfortunately, I also know that we represent a small percentage of all the test teams out there.
I believe that this testing role fills an important niche in the software development "creative problem-solving" activity that is currently lacking in many companies. That is, we go back to the important question: are we solving the right problem for everyone who matters?
I see two possible futures here. First, if people are still involved in the development process, I believe that this testing role will be split in two parts. The hands-on testing part will be automated using the advanced development tools I already described above. We can simply add new MBT subroutines to the tools to account for new personas, perspectives and potential problem types. The creative, investigative people-interaction part will still need a skilled person to go around, talk to people and ask the right questions. This will lead to creating the right tests for the tools to help us answer.
In the second possible future, if people are not involved in the software development process, this whole testing activity will be automated in a fashion similar to what I described in scenario 4 above.
Scenario 7: Specialised tests - e.g. Performance, Usability
This is a variation of scenario 6 above. A specialised test is one that answers a specific class of questions. My experience in Performance Testing tells me that this kind of testing is done differently from other kinds of testing. At the end of the day, there is a set of rules and models that apply if you want to do this kind of testing properly. We will be able to program or teach these rules to an "oracle" system at some point in the future.
Human research and expertise will go into the problem-solving models that we will program into these computer systems. Depending on the complexity of the development project, the oracle system may be able to ask the appropriate questions and execute the tests without further prompting. In some cases, I expect the initial questions will come from a person and the computer system will be able to perform the test and report the results as required.
To sum up, I see the future of software development as being much more plug-and-play than anything we have today. Software testing activities will be largely automated with the possibility that humans may still be involved in asking important questions that lead to suitability or fitness for purpose. In time, I think that learning computer systems will be able to anticipate those kinds of questions and they will free us up to do different, more interesting creative work.
Don't worry, testers. Your jobs are safe as long as the programmers' jobs are safe - provided you are contributing value to the development activities. I believe our "developer" roles will disappear in close tandem. Software development and engineering is still a relatively young field/industry. We still have a lot of growing up to do.