Most recently it occurred to me while reading one of my son's chapter books. FYI: "chapter" books are short books broken into chapters for kids just beginning to read on their own, usually around 7-8 years old. These aren't Harry Potter books; most of them barely reach 80 pages.
The book that caught my attention is called "Jigsaw Jones #9: The Case of the Stinky Science Project" by James Preller. The main characters are in Grade 2 and in this particular story their teacher was giving a Science lesson:
"The world is full of mystery. Scientists try to discover the truth. They ask questions. They investigate. They try to learn facts. Scientists do this by using the scientific method."
The teacher then handed out sheets of paper which read:
THE SCIENTIFIC METHOD
1. Identify the problem. What do you want to know?
2. Gather information. What do you already know?
3. Make a prediction. What do you think will happen?
4. Test the prediction. Experiment!
5. Draw a conclusion based on what you learned. Why did the experiment work out the way it did?
Back when I used to teach High School Physics, I recall giving a set of steps very much like this one. I might have used the word "inferences" instead of "conclusion" but otherwise it's a pretty good list.
When you think about testing software, generally you run through the same process and set of questions. If you don't think about each of these questions, then you're probably not doing something right.
For example, here are some questions that come to mind when I think of the Scientific Method applied to testing software:
1. Identify the problem.
- What are the risks?
- What is the particular feature of interest?
- What is it you want/need to 'test' and 'why'?
2. Gather information.
- What references are around to tell you how something should work? (e.g. Online Help, manuals, specifications, requirements, standards, etc.)
- What inferences can you deduce (or guess) about how something should work? (i.e. based on your experiences testing similar apps, or other parts of the same system, etc.)
- What can you determine by asking other people? (e.g. customers, programmers, subject-matter experts, etc.)
3. Make a prediction.
- Design your tests.
- What is your hypothesis?
- What are the expected results?
- Think about any assumptions or biases that might influence what you observe. How can you compensate for these?
4. Test the prediction.
- Set up the environment.
- Execute the tests.
- Be creative! Make as many observations as you can.
- Collect data.
5. Draw a conclusion.
- Did you observe the expected result? Does this mean the test passed? Are you sure?
- If the test didn't turn up the predicted result, does this mean the test failed? Are you sure?
- Revise the test design and any assumptions based on what you observe.
- Do you have a better understanding of the risks that drove the test in the first place?
- Do you have any new questions or ideas of risks as a result of this test?
- If you collect a lot of data, summarise it in a chart that can help demonstrate the trend or pattern of interest.
- Write a few words to describe what these results mean to you. (You might not have all the information, but don't worry about that. Just say what you think it means.)
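To make the steps concrete, here is a minimal sketch of a single test structured around them. The `apply_discount` function and its rounding rule are hypothetical, invented purely for illustration; the point is the shape of the reasoning, not the code under test.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: reduce price by a percentage,
    rounded to 2 decimal places (an assumed spec, for illustration)."""
    return round(price * (1 - percent / 100.0), 2)

def run_experiment():
    # 1. Identify the problem: does the discount calculation round correctly?
    # 2. Gather information: the (assumed) spec says results round to 2 decimals.
    # 3. Make a prediction: 10% off 19.99 should give 17.99.
    predicted = 17.99
    # 4. Test the prediction: run the experiment and observe the result.
    observed = apply_discount(19.99, 10)
    # 5. Draw a conclusion: compare observation with prediction. Note that a
    #    match here doesn't rule out other bugs -- it only supports this one
    #    hypothesis, which is exactly the "Are you sure?" question above.
    return {"predicted": predicted, "observed": observed,
            "matches": observed == predicted}

result = run_experiment()
print(result)
```

A mismatch wouldn't automatically mean the code is broken, either: the prediction, the assumed spec, or the test setup could be at fault, which is why the "revise the test design and any assumptions" step matters.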
In general, I find the Scientific Method to be a very good guideline for beginners and experienced testers alike. Wikipedia has some entries on the Scientific Method as well as a Portal. I think it's a good read, and I'd recommend those pages to anyone serious about becoming a good tester.
If there are things on those pages that you aren't sure about, look them up! You might just learn something new about how to think about things that will help you do your job better.