I was watching a science program just now and it occurred to me that Testing is very much a science. And then I wondered about Programming.
I started in IT over 22 years ago doing programming. For me, the process of programming broke down into three parts: figuring out the algorithm to solve the problem, implementing/coding the solution, and cleaning up the code (for whatever reason - e.g. maintainability, usability of UI, etc.). It gets more complicated than that of course, but I think that about sums up the major activities as I saw them. (SIDE NOTE: I didn't write those to mirror TDD's Red-Green-Refactor, but it does align nicely that way.)
When I think back on my experiences in programming, I don't see a lot of overlap with my experiences in Science (~ 8 years studying, researching and doing Physics & Environmental Science + teaching Science on top of that). Science is about answering questions. The Scientific Method provides a framework for asking and answering questions. Programming isn't about that. Building software isn't about that. I'm having difficulty at the moment trying to see how testing and programming go together.
It occurs to me that schools and universities don't have any courses that teach students how to build software. It also occurs to me that schools and universities do provide students with opportunities to learn and develop the skills required to build software well. The schools just don't realize they're doing it, and consequently the students don't get that opportunity intentionally.
I'm not talking about learning to program. That's trivial. Building software isn't about programming.
Building software starts with an idea - an idea that someone will pay money for. School courses to watch for here include - Economics, Entrepreneurship.
Building software requires people to work together. School courses that may apply - Business, Math/Finances, Psychology.
Building software requires people to work under constraints. "Project Management" is not really taught in schools, but there are many courses available to the public. I found this really painful to learn by doing. A whole world of insights opened up when I took my first PjM course. Reinventing the wheel really is dumb here. I believe this should be formally taught in schools - in High School actually (the earlier, the better).
Building software requires people to solve difficult problems creatively.
This one is interesting. I think there are many opportunities for people to learn this skill in school. I know that we definitely covered this in Science. I also know that Engineering programs teach students how to do this. There are many more faculties and programs that this would apply to and they all have one thing in common - there's some formula or method for solving problems for some purpose.
The thing is, that purpose is different in each case. See, here's where my mind is doing flip-flops.
In Science, when we solve a problem, the outcome is usually more information. This information feeds back into the original idea to help us check the validity of our initial premise/hypothesis. Sure, there are moments of free-form exploratory investigation, but I don't believe that happens a lot. An experiment can go in an infinite number of directions if you don't have a particular question in mind when you start, so unless you are trying to intentionally waste time and money, you will have some question in mind before you begin your experiment or investigation.
In Engineering, solving a problem is different in that the outcome is usually something real, something tangible, some application or system that fills a need.
The purpose of Science is not to build things but to answer questions. The purpose of Engineering is to build things - and to do so safely, ethically, within desired parameters for intended purpose, and so on.
So where does that leave us in building software? *Building* software is definitely an Engineering task... and then some. I am temporarily over-simplifying the process of building software to focus only on the requirements gathering, design, coding and deployment phases.
"Testing" comes from and is an integral part of Science, so how does it fit in with these software engineering/development phases of requirements gathering, design, coding and deployment? Well, it doesn't. It has nothing to do with them. And yet, it has everything to do with them.
That is, from one perspective, at no time do you ever need to test anything to get through any of those phases.
That last statement, while true, kind of goes against everything I ever learned in school. Whenever I did math problems, I always checked that I got the right answers against the solutions in the back of the textbooks. Why did I do this? Why did I care?
I did it so that I could tell myself that *how* I solved the problem was correct - it got me the same answer that the textbook and my teacher cared about. I discovered there were exceptions, of course. That is, there were times when I got the correct answer but my method was wrong in some way. Dumb luck does play a role in life and that was when I first discovered the evil twins named Type I and Type II errors. (Side Note: I wonder if that's where Dr. Seuss got his idea for Thing 1 and Thing 2? Hmm..)
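That "right answer, wrong method" trap can be sketched in a few lines of Python (a hypothetical example of mine, not from the schoolwork above): a broken way of squaring a number that happens to match the back-of-the-book answer for one input.

```python
def square(x):
    """Intended to compute x squared - but implemented with the wrong method."""
    return 2 * x  # wrong: doubles instead of squares

# Checking against the single "approved" answer in the back of the book:
# for x = 2, doubling and squaring coincide, so the check passes by dumb luck.
assert square(2) == 4

# A second check exposes the faulty method:
# square(3) returns 6, not the 9 we wanted.
print(square(3))
```

One lucky check told us nothing about the method; it took a second, less convenient input to reveal the error - which is exactly why checking a single answer is such weak evidence.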
So, the process of checking answers with the "approved" solutions, and handing in assignments for grading by teachers is a feedback mechanism to tell me that I've learned how to solve certain kinds of problems in ways that provide the desired results. Let's assume for a moment that's a good thing.
Getting back to building software, you can go through requirements gathering, design, coding and deployment without ever once checking that you are producing the desired solution or results. In the end, this is a monumental waste of time and money, and is completely incongruous with the initial premise that you are building a product/service/solution that someone will pay money for. That's bad economics. That's psychotic.
So how do we fix this? What's the problem here?
Well, one problem is that we now have a question: we have doubt at the end of each of these phases. We have a desire to learn whether the end result of each stage, and of the whole process, is meeting [our/someone's] expectations. The answer at the back of the textbook here will be provided by the people who are choosing to pay you, and not your competitor, for what you produce/release/ship.
Hey! Wait a minute! Science helps us answer questions! Testing is a small part of that process. The bigger process starts with a question that stems from some research or exploration of the initial area of interest. That question, the hypothesis, is the part we really care about. The "how do we go about gathering enough information to answer this question" part is something different, and we should get people who know how to do this to either (a) do it for us, or (b) help us do it for ourselves. Then there is the analysis of that data or those observations in the context of the initial hypothesis.
But this is a different layer now! We're adding a layer of "science" on top of "engineering". That's weird. That's like trying to mix oil and water together. That is, they don't mix. If you shake them up together you only end up with some cloudy mess that eventually will separate out again.
So what does this mean for building software? We need people who are skilled at engineering solutions, and we need people who are skilled at identifying and answering questions about the solutions being engineered. I believe these are two, very separate skill sets required to be successful.
However, my experience in the Software/IT industry over the last 20 years has been that only one of those skill sets has really been identified as important or relevant -- that of the programmer or engineer in building the solutions.
This is a problem. There's a huge knowledge gap here.
Schools don't teach you how to "test" in the context of software development. Every single Testing "certification" agency I have met to date misses the mark. They don't teach you the correct skills. They "teach" you superficial documentation skills that produce information *like* the kind of information that is required. That would be like me handing out Plumbing certificates to anyone who successfully completes the Mario & Luigi video games because these characters are plumbers in the games. That's just not right. Likewise, there is no actual "science" performed by teaching people to create scientific-like reports. (Although it happens sometimes that testers learn testing skills accidentally if they pay attention to what they're really doing.)
So, where are we here? Building software requires layers of intelligent, creative effort and problem-solving abilities. These layers are complementary and require different skill sets. Just like you wouldn't hire an accountant to deploy your systems, I don't think it's wise to hire a programmer to provide valuable testing insights into the development process. It's the wrong skill set.
Oil and water, or oil and vinegar - the analogy holds for me. Testing is something completely different from Programming and building software. It's a layer on top to help you know that what you are doing is on target for what your paying customers are expecting. Some might call that value.