Quality Center Must Die

It is not a matter of "if" -- it is a matter of "when" HP's Quality Center software will die.  And you, my dear readers, will help make that happen.

"How?" you may ask? Simple. There are two things you should do: (1) think, and (2) don't put up with crap that gets in the way of delivering value to the customer and interacting intelligently with other human beings.

But I am getting ahead of myself. Let's rewind the story a bit...

Several months ago I was hired by my client to help train one of the test teams on agile and exploratory testing methods. The department has followed a mostly Waterfall development model until now and wants to move in the Agile direction. (A smart choice for them, if you ask me.) Why am I still there after all this time? That's a good question.

After attending the Problem Solving Leadership course last year, and after attending a few AYE conferences, I changed my instructional style to be more like the kind of consultant who empowers the client with whatever they need to help themselves learn and grow.  It's a bit of a slower pace, but the results are more positive and long-lasting.

I am a part of a "pilot" agile/scrum team and am working closely with one of the testers (I will call him "Patient Zero") to coach him on good testing practices to complement the agile development processes. I have done this several times now at different clients, so this is nothing new to me. One of the surprises that cropped up this time was that this development team is not an end-to-end delivery team, so when they are "done" with their work, the code moves into a Waterfall Release process and it all kind of falls apart. There are still some kinks to be worked out here, and I am happy to see some really bright, caring people tackling these problems. So that's okay.


Patient Zero and I are part of a larger test team, and the rest of the test team all work on Waterfall-style projects, use Waterfall-compatible tools, and they generally don't get how we work. :) Unfortunately, one of the tools mandated for our team's use is HP's Quality Center (HPQC).  I hadn't seen that tool in about a decade and it looked very similar to how I last remembered it.

To my agile coach/practitioner friends I should clarify that at no time during our sprint development work does anyone ever touch HPQC! However, once the code is deployed/falls into the Waterfall Release process, regression test cases are created in HPQC and it is used for defect tracking. It is mandated, and so shall it be done. I can live with that. It's just a tool at this point and the impact to our ability to deliver a good solution is eliminated by the fact that we don't touch it until after we are "done". (Communication and collaboration FTW!)

Two days ago.

Our whole test team took part in a 2-day HPQC training workshop on something HP calls "Business Process Testing" or BPT. Being naturally curious to learn something new, I wanted to know what BPT was and how it fits into the bigger testing picture. Here we go.

We were given a handout with some "test scenarios" to be used for training. The test scenarios fell into this pattern:
  • Scenario name/title
  • Requirement description
  • Test Situation (I am staring at this right now and I still don't know what this means)
  • Role (kind of system user this requirement/situation applies to)
  • Steps

That's okay information. The "Steps" are what you might typically expect to see if you have been testing for a while. Here is an example for working with a sample web app:
  1. Login to the system with (a certain type of user)
  2. Navigate to some module in the app
  3. Click "Create" button from the tool-bar
  4. Enter mandatory field values and save
  5. Search for the information you created in the previous step
  6. Logout of the system

And there were 6 of these scenarios.

I read through these tests and then I tried to follow them using the system. I quickly encountered a half-dozen bugs - some with the system, some with the test scenarios/cases, and some were open questions that I would follow up on with the Product Owner for requirement clarification.

But, woah-woah-woah-hey.. wait a minute. We only need to worry about *these* documented test scenarios! I struggled hard to keep my mouth shut about the value of time and the many different kinds of tests I would happily engage in at this point if I could leave the tool alone. But, I left that to my "inner voice" and we were now one hour into the first day's training.

At this point, we were given an overview of the HPQC modules and told to (1) enter the Requirements, (2) create BPT test scenarios, and (3) "componentize" the test scenarios into groupings of related steps. This last part required some explanation since it was new to me. Since we could see (1) and (2) on the handout sheets in front of us, we went straight to work on part (3). It was kind of fun working in a small group of 4, looking at these scenarios and trying to come up with solutions.

And then someone went and spoiled the fun.  We were "told" that we weren't supposed to do part (3) until after we had entered the information for (1) and (2) into the tool.  I was all like "really?" and then a team lead came by and repeated the exact same thing.  This may be summed up as: Enter data into the tool first, and think later.

I was kind of shocked by this comment and attitude, and in retrospect it kind of foreshadowed the rest of the training experience -- i.e. Tool and process first; think later; maybe.  Okay. I'll play along and see where this goes.

During this exercise, I began to grasp this idea of "components" as HP uses them. I think they are like Page Objects - chunks of code (or in this case, test steps) that perform a certain function, promote reuse, and reduce duplication. Although it was never described in this way, I believe the HP "BPT" module is a proprietary Do-It-Yourself DSL. Aha! I have experience with those. I get that stuff.
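
To make the Page Object analogy concrete for anyone who hasn't met it: here is a rough Ruby sketch of what I mean by a reusable "component". The LoginPage class and the Watir-style browser calls are made-up names for illustration only - the point is that related steps live in one place and every test reuses them instead of duplicating the steps.

    # A minimal Page Object sketch, assuming a Watir-style browser object.
    # All names here are hypothetical.
    class LoginPage
      def initialize(browser)
        @browser = browser
      end

      # One reusable chunk of related test steps:
      def login_as(username, password)
        @browser.text_field(id: "username").set(username)
        @browser.text_field(id: "password").set(password)
        @browser.button(id: "login").click
      end
    end

    # Every test that needs a logged-in user calls the component:
    #   LoginPage.new(browser).login_as("admin", "secret")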

So, once I got it, I started to explain the concept of YAGNI ("You Aren't Gonna Need It") to the other members of my group.  That is, let's not overthink or over-engineer these components. Let's build/write them based on the needs of the requirements in front of us. We will modify the components in the future as new requirements appear.  This idea was well received and we quickly came to an efficient solution for our small group.

When we took up the exercise as a class, I found I had to explain the YAGNI concept to the instructor/trainer (and the rest of the class/team) as he proposed that we try to abstract out these components to allow for compatibility with other features and system elements. What a waste of time! We cannot know what we will need beyond our immediate needs, so that kind of abstraction is a pointless exercise that leads to more headaches than you need.
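
If that sounds abstract, here is a toy Ruby sketch of the trade-off (both versions are made up for illustration):

    # YAGNI: write the component the current requirement needs...
    def search_for_record(term)
      puts "Searching for: #{term}"
    end

    # ...not the speculative "compatible with everything" version proposed
    # in class - extra knobs with no current caller, which someone now has
    # to understand, review, and maintain:
    def search(term, module_name: nil, operator: :contains, scope: :all,
               case_sensitive: false, save_as_filter: false)
      puts "Searching #{module_name || 'everywhere'} for: #{term}"
    end

    search_for_record("new user profile")  # all we actually needed today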

Eventually, I started to get that the point of writing the test scenarios using BPT was that these components form a basic vocabulary that may then be automated at some point in the future -- yes, BPT integrates with HP's QTP automation tools.  Now, I'm all in favour of consistency, clarity, reuse, and automating tests that humans should never have to do more than once (provided there is value in re-running the test), so I struggled to understand why it was never explained to us this way.  As long as I kept the DSL/automation model in my head, I understood what we were doing and could see the potential benefits of it.

I saw many testers struggle with the exercises and models we were presented with. End of day one.

Day two began with a quick recap and then we were introduced to the bureaucracy that is Waterfall and HPQC. That is, there are review processes and workflows to cover each requirement, BPT scenario and component. Welcome to Wasteland. (I mean the Lean Development concept of "waste" here, although other interpretations of "wasteland" may be just as valid.) Easily a third of the 2-day training was spent on the processes surrounding the management and review of the various HPQC objects. sigh.

We then moved on to the concept of "parameters" for the components we created yesterday.  Okay, I get method parameters when I am scripting with Ruby, so this was no sweat.  Given how quickly I parametrized my components compared to everyone else in the class, I think I may be one of the few who really got it.
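
For the non-Rubyists, here is the analogy I had in my head - a hedged sketch, not HP's actual implementation. The component is a method, and the parameters let one definition serve many test cases (create_user is a made-up component name):

    # A "component parameter" as I pictured it: one component definition,
    # many data variations.
    def create_user(name:, email:, country: "Canada")
      puts "Create user #{name} <#{email}> in #{country}"
      # ...the real steps would drive the application here...
    end

    create_user(name: "Alice", email: "alice@example.com")
    create_user(name: "Bob", email: "bob@example.com", country: "Germany")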

I learned a few more things about the main instructor. One was that he didn't know how to identify web page elements using commonly available browser tools. Umm, aren't these tools supposed to interact with web pages? This never came up before? You have never wondered how to find the name/id for an element on a web page? really?

The other was that he had a really bad sense of humour.  He made a reference to a "QA" joke that I shall not repeat here.  Needless to say, I found it insulting and offensive; it made my skin crawl and my blood boil.  Many unpleasant feelings and ideas arose in me, and it took all my willpower and strength not to react to the blatant stupidity of insulting the profession of the students in your class and the market that the HPQC tool represents.

The final blow of the day came when we tried to "execute" these BPT test scenarios using the HPQC tool. THEN I discovered that these "parameters" can have 2 kinds of values - fixed/hard-coded and run-time. Anyone who has done intelligent automation knows NOT to hard-code values in their scripts. Data-driven is way better, and as an exploratory tester, I may not know what value I will choose until moments before.

Here's where the HPQC tool gets stupid..er.  Regardless of which parameter type you choose, values may ONLY be defined BEFORE test execution! You cannot leave a parameter empty so that, when you get to a particular step, the tester can decide what value to input and feed it back into the tool.

What does this mean?  As a tester, let's say you want to create a new user profile in an app of some kind. The test parameters for things like Name, Email, Address, Country, and so on, must all be defined and set before the test is executed. You cannot decide while you are actually testing!

Why does this matter to me? It matters because it means that the humans who execute these BPT tests manually have no choice in what values they may input. Testing techniques like Equivalence Classes and BVA that help guide our choices to pursue interesting paths are completely cut off! It turns out that HPQC treats humans worse than their automated counterparts. In discussion with an automation "expert" at the end of the class, I learned that at least you can code some variability into the QTP automation scripts.  This is not possible with the same test scenarios executed manually by human beings.
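
To show how small a thing HP failed to support, here is a hedged Ruby sketch of the three ways a parameter could get its value. HPQC allows the first two; the third - deciding while you test - is exactly what it forbids. All names are made up for illustration:

    require "securerandom"

    # Three ways a test parameter might get its value:
    def email_value(mode, data_row = nil)
      case mode
      when :fixed    then "user@example.com"      # hard-coded before the run
      when :data_row then data_row.fetch(:email)  # data-driven, still pre-run
      when :runtime                               # chosen mid-test by a human
        print "Email to try for this step? "
        gets&.strip || "fallback-#{SecureRandom.hex(4)}@example.com"
      end
    end

    puts email_value(:fixed)
    puts email_value(:data_row, { email: "row1@example.com" })
    # email_value(:runtime)  # prompts the tester during execution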

So. Much. Wrong.

So after my first ever QC training session, here are some of my take-aways:

  • It took me 2 days to "script" 6 test scenarios in this tool and they were rotten test cases to begin with! I suspect that outside of the training environment, it will actually take longer to complete since you won't have a "reviewer" sitting next to you waiting for you to finish your piece.
  • And they weren't even automated! Who knows how long it would take to tweak the "components" to make them work with a particular automation strategy.
  • HP QC will never be a useful tool for any agile or rapid development efforts
  • (HP best practice) Put the tool, data and review processes first, before you think. Maybe instead of thinking altogether, too.
  • BPT is a DSL framework for test scripting
  • BPT component parameters cannot be customised during test execution. They may only be set/defined before you start testing. => No thinking allowed while testing.
  • The instructor didn't appear to be knowledgeable on anything outside of the tool itself. This includes how we might actually want to use the tool. No, no, no. And I quote: "Testers must change how they work to use the tool in the way it was designed."
  • As long as there are people pushing these kinds of horrible tools that suck the life and intelligence out of people, inject mountains of wasteful activities that provide no value to the customers or end users, and continue to create barriers between testers and their developer counterparts, I will always have job security in helping organisations recover from these cancers.

At the end of the day, there were several people interested in my idea of randomising tests. The automation expert in the class insisted that it couldn't be done with automation, so I called up the slides from my "Unscripted Automation" presentation from a few years ago. He said that what I proposed was not a "best practice" and that everyone, the whole industry, used the tools in the way he described.

My response was to simply say "they are all wrong."
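
For the curious: randomising inputs needs nothing exotic. Here is a minimal Ruby sketch (the input lists are made up; logging the seed keeps any interesting run reproducible):

    # Randomised input selection with a reproducible seed.
    seed = ENV.fetch("TEST_SEED", Random.new_seed).to_i
    rng  = Random.new(seed)
    puts "Seed #{seed} (re-run with TEST_SEED=#{seed} to reproduce)"

    names     = ["Alice", "A" * 255, "O'Brien", ""]   # boundary/edge cases
    countries = ["Canada", "Germany", "Japan"]

    3.times do
      puts "Try: create user #{names.sample(random: rng).inspect} " \
           "in #{countries.sample(random: rng)}"
    end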

26 comments:

  1. Great blog and a view shared wholeheartedly by Original Software.

  2. So much in your post that I found myself nodding at. I abandoned formalised scripting years ago in favour of keyword-driven testing frameworks. However, it is hard to explain the simple concept that 'it gets me testing faster' to these massive tool .... advocates.

  3. It is very difficult for someone to teach HPQC when they haven't actually used it. In your case, I can tell that you have not worked with QC much, especially since you refer to BPT as QC. Just a note: BPT is not QC.

    I understand your frustration, but BPT is just one of the components that comes with QC, and your anger should be towards BPT and not QC as a whole. I think it is wrong for you to base your entire opinion on just one part of the tool - you should say instead that BPT is really BAD, but you prefer to say that QC is. I can also tell by your comments that you have not used QC much. Believe me when I say this - you can do so much with this tool, but if you have not worked with it much - guess what - of course it won't work for you.

    I don't know why but people tend to blame the tools when they don't get it their way. Someone said once - "it is a poor workman who always blames the tool".

    I understand your point but QC is not about BPT only. BPT is just one part of it. Anyone reading your blog would think that BPT is QC and it is not.

  4. Paul, I can see your agony and sympathize with you. I can also see what an exploratory-minded tester you are, and I admire that.

    I can also feel the embarrassment and stress of the trainer, who probably faced a student with more knowledge and experience in the field than the trainer himself. Maybe it was natural for him to take an incongruent stance and speak more firmly.

    I usually prefer growable tools, in social and technical aspects. People should learn to handle complexity, with experience and time. There are some tools that seemingly decrease complexity but in reality do the opposite. Growable tools usually augment human thinking -- handling drudgery is another benefit, but it is comparatively much less important in value.

    I am more interested to hear what you did with your teams (the test team and the pilot team) after the training.

    I hope you make some more comrades at your client's site and have a happy time.

    With best regards,

    June Kim

  5. Great post! I've seen numerous occasions where a tool was prescribed on an agile team. Let the team choose what to use and how they use it. That is where team performance will improve.

  6. First of all, hats off to the manager who sent their team to an HPQC course instead of assuming that it can be picked up in testers' spare time.

    I'd also be interested in how useful the course was for the company and how much of it was then used in practice. Was it worth the money?

    If not, what would alternative courses have cost and where'd the value be?

    I'm implementing SBTM at the moment (again) after having ditched HPQC, which ran out of warranty - not that it was used much. If we all looked at what the business's needs are instead of starting with what tools have been bought, blogs like this wouldn't be necessary anymore.

    Thanks for sharing.

  7. Dear Anonymous, thank you for the feedback.

    Thank you for the clarification that BPT is not QC. I suppose you're right although the end result is the same. It is part of the same tool suite, and part of the problem.

    This blog post was mainly focussed on my recent experience, and I can assure you that I have much more experience with QC and its other modules.

    I have used the Requirements, Test Plan, Test Lab and Defects components mostly. I can tell you with absolute certainty that Quality Center is EVIL. No good testing will ever come from it.

    This comment field is but a margin; I could write pages of comments and feedback about how QC's Test features cannot align with good test design, and how the Test Lab/execution component is just plain vile.

    IMO, the single worst aspect of Quality Center is the feature I have avoided the most - the Dashboard. Collecting *metrics* on test execution time and counts of test cases as a measure of tester performance and "quality" is stupid, demoralising, and misleading. I am being generous here, and I have many colleagues who can provide many supporting examples.

    I apologise if I failed to relate my broader experience with the tool. You are correct in that "it is a poor workman who always blames the tool." In my opinion, this tool is part of a disease of mind that is negatively affecting the Information Technology Industry.

    I am a knowledge worker. I help smart, creative people break free from their mediocre routines to excel and provide awesomeness to their colleagues and customers. If I see a tool, process or idea that is holding people back, then I will happily help them to see the elephant in the room.

    QC is on my hit list.

  8. WHY is it so hard for software professionals to see the obvious (I admit, it was hard for me too):
    1. Quality and testing activities are the responsibility of the whole team, not of only the QA team
    2. The programmers who write the production code can do the test automation in a tiny fraction of the time it would take the testers to automate in any tool, especially one like QTP
    3. If the testers don't have to waste time trying to automate tests (probably creating tests that are too expensive to maintain and provide bad ROI), then they have oodles of time to collaborate w/ customers & devs to elicit examples of desired and undesired behavior, which they can then turn into test cases which are automated quickly and easily by the programmers. Then the same testers will also have oodles of time in which to do the essential exploratory testing. Everyone gets to do what they do best, everyone collaborates, high s/w quality results.

    Sigh.

  9. As an HPQC user I can identify with a lot of what you are saying. Even so, most of it seems to have more to do with incompetent training and the misguided conception that one should follow HP's standard of structured testing when using HPQC.
    I work in a company that requires me, based on audit and regulatory rules, to create testware with built-in traceability between requirements, test design, test execution and defects, and to link them to individual systems and releases. Those rules do not, however, tell me that I should do the actual creation and execution from within HPQC. And thus I use HPQC more like a repository, storing requirement summaries, short test descriptions, execution results, defects and also some manual tests. Within that context it does the job without irritating or limiting me. It actually does the job quite well, as I seem to be able to keep the auditors satisfied without much effort.
    So yes, HPQC is certainly no blessing, but it can be usable if you put some effort into it.
    Additionally, instead of criticism I would like to hear people show me alternatives.

  10. It has been my experience that HPQC requires an extensive period for prior planning before the tester can simply dive in and begin to develop tests. Even with that said, it is crucial to establish how and what types of testing will be conducted. For unit testing, I feel QC is not an efficient tool as it requires MUCH planning and coordination with CM just to get test execution activities off the ground. System, Integration, and End-to-end are a bit of a different story, however. Tests can be more structured, and test data can be organized into parametrized sets that portray good vs bad data.

    Even with that said, there is another dynamic that is not normally considered by QC users. There is a difference between front-end and back-end testing. Unit tests sort of combine the two categories, thus making QC a poor tool choice for UT. With front-end testing, the exploratory testing method is fine for user acceptance but is not efficient for system, integration, or E2E. Back-end testing is all about executing SQL code that tells you if data hit the right target table/column... This is something that QC does not execute automatically unless you use QTP to execute SQL through VB scripting. Instead, you end up having to use TOAD/SQL Dev. to execute the code while updating the status of a given test in QC.

    The new version 11 of QC now comes with the "Sprinter" application (for free), which was designed exclusively for the exploratory testing method. Indeed, it also automates manual tests to a point, but not to the degree that QTP automates functional tests.

    Your taking the Business Process Testing class was truly a waste of time; it would have made more sense if you had taken "Sprinter" training instead.

    Overall HPQC is a good tool but I feel it deserves low grades and criticism for its usability. I feel test managers really need to understand how to set up a test program and implement that program using the modular test approach that QC embraces.

    There is so much potential for QC, but the test manager needs to plan out how the tool will be used... not the freelance tester who is just looking for quick verification of a given requirement.

    Just some thoughts...

  11. @TexasPride:

    That was awesome feedback! Thank you for sharing your experiences and adding to the value of this post! I appreciate it.

    You mention several interesting points. I'll start with the most-interesting part (for me, right now) - I have never heard of "Sprinter" and have only used up to version 10 to date.

    My current client has mentioned the possibility of upgrading to version 11. If I have an opportunity to use this feature I will most certainly review it and blog about my experiences with it!

    You wrote: "HPQC requires an extensive period for prior planning before the tester can simply dive in and begin to develop tests." I would agree with this statement.

    The time spent using the tool is one of my concerns. That is time I would rather spend collaborating or testing. I like the range of the testing contexts you described - and you mentioned situations I hadn't even heard of, so that's cool.

    Your statement "test managers really need to understand how to set up a test program and implement that program using the modular test approach that QC embraces" is very similar to what I was told about how the testers need to change their process to align with how the tool works.

    See, that just doesn't work for me. I have seen how the tool works and I refuse to dumb-down my thinking & testing processes to the HPQC level. I like to be challenged and learn and grow. HPQC (as far as I have seen) sets the bar so low that I cannot conceive of a way to have good testing and quality come out of their models and approach. I have tried. Several times.

    When I teach testing, I use whiteboards, pencil & paper, and simple, readily available apps like text editors, paint programs, wikis and freeware tools. The value is in helping the testers to think, organise their thoughts, execute on their ideas, and communicate effectively with the team members. All of the implementations I have seen so far of HPQC fall short on complementing any of those activities.

    In one company, I terminated the QC license and we went back to using Word and Excel documents. Testing productivity went up and the testers felt better about what they were doing. It wasn't a perfect solution, however there was a noticeable improvement in happiness and performance once we eliminated the tool.

    I know it's not the tool's fault. It's the whole package -- the processes, the tool (stability & bugginess, design, complexity, usability), the workflows, and the monitoring & control aspects -- that, in my opinion, don't appear to support the creative enterprise that is software development.

  12. Alarm bells ring when you are told you have to change the way you work to fit in with the 'new' software.

  13. Whilst I can understand this guy's point of view on a couple of his criticisms, and I do agree with some of what he says, I would disagree with his negative attitude. It's clear he totally clashed with the instructor. Also, I don't feel the instructor really knew what he was doing from a real-life scenario point of view. You will NEVER get one tool that fits ALL testing methodologies, so he's being a little short-sighted there, I feel. The BPT add-on, I feel, is for the right project and methodology a very clever, lower-maintenance, structured way of building test scenarios. It also allows users from different teams (business areas), for example, to be writing test scenarios all at the same time, all following test scripting standards. The parameterising issue I do agree needs modifying, as in certain situations it would be nice to run on the fly.

    The summary for me, then, is that this guy has overly criticised a tool that he clearly has very little or actually NO real experience of using on a live project. So the statement 'QC will die' at the beginning of his blog I would happily put my house on never becoming a reality... certainly not in the near future anyway. I think if he'd criticised the cost of the tool he'd have had my backing more with that!

  14. Dear Anonymous, thank you for your feedback.

    You often use "this guy" in your comment. To be clear, my name is Paul. It says so at the top of this page, and it also says "Posted ... by Paul" at the top of every blog post. Please refer to me by name. You may choose to remain anonymous if you like, but I do have a name and am not hiding my identity with the thoughts, experiences and opinions that I post here. Also, if you haven't noticed, the "Paul" who replies in comments (like this one) is the *same* person who wrote the blog post. You may want to read some of those comments to add to the context of the original post.

    The instructor knew what he was doing, and all told, was quite effective in meeting the objective of the training session. I can respect that. I certainly left the workshop understanding the functionality, workflow, and expected business context in which the tool features are to be used. I was surprised at some gaps in his knowledge, but as a colleague suggested, I shouldn't have been.

    The Trainer never said that the "tool fits ALL testing methodologies" and I don't believe I did either. If you listen to the HP marketing hype, I'm pretty sure they *do* push the tools that way though. I was at the StarEast conference this past May and the HP/QC sales people used pretty much those exact words.

    The BPT functionality is interesting. Once I got it, I compared it to something like RSpec or other Domain-Specific Languages. As I said in my post above, I *like* those kinds of abstractions. I think they provide value. Anything that reduces the complexity of details and allows subject-matter experts to get more involved in writing tests is a good thing in my opinion.

    What I don't like about the "big picture" here is the Waterfall-style barriers and structures imposed by this particular implementation, and that when I used these "business tests" in the Test Lab/execution phase (of the tool - i.e. trying to use the tool to actually *test*), I still cannot do effective testing. As I mentioned above, automating these tests would be more effective than running them manually because at least computers don't think. (yet)

    You wrote: "he clearly has very little or actually NO real experience of using on a live project." I can assure you that the 2-day training course was *not* the first time I have used the tool. I have used HP Quality Center almost daily for the last 5 months. I also used QC when it was called Test Director many years ago. It hasn't changed much in the last decade. Bells and whistles can't change the spots on a leopard. (with apologies to any leopards that may be offended by being compared to QC.) QC is as bad today as it was then. I had hoped it improved, but it hasn't.

    I am currently using the tool as a glorified repository and am avoiding as much of their specialised functionality as possible. There are many reasons for this, the least of which is that the specific Testing "features" only lead to bad testing and poor quality IMO. In that case, there is no way to justify the expense for this tool.

    If we change this to an economic discussion, which I am happy to do, a cost-benefit analysis would most certainly show that the teams that focus on real-time collaboration and communication produce far greater quality and value than the ones that rely on tools for accountability and information traceability.

    The latter (a tool for accountability) keeps lawyers happy and since when did we allow them to start driving software development projects?

    Let's get rid of the tool and start fresh with people. Have conversations, build understanding, and provide value. Only then select the tools that provide value to you, the humans doing the hard creative work. Don't use tools that make slaves of the humans, eliminate thought, prevent creativity and create barriers between teams. We can do better.

  15. Paul, how much of this did you raise during the training, or feed back after the course so that the training can be improved? As a trainer, one of the very few things that really annoys me is an audience that has the opportunity to ask questions or speak their mind (and I give people plenty opportunity to do that on my own courses), but then only makes clear their feelings after the course has finished or there is too little time to deal with their points...

  16. Sounds like your trainer did not have true enterprise-level experience with the tool (I can certainly relate as I have frequently known much more than the trainer). We have integrated HPQC into our existing processes that support our SDLC/SDM (process first then tool) and have been successful at many of the challenges identified in this blog post. While some of our groups see significant value and have a vision that can be accomplished using the tool, other groups continue to struggle as the expertise simply is not there.

  17. Dear Anonymous trainer,

    As I described in the experience above, I mentioned several of these points during the training itself. So as to not be disruptive, I saved some of the points for immediately afterwards when I spoke directly with the trainer. He seemed like a good guy (poor, disrespectful sense of humour aside) and helped me understand that some of the difficulties I expressed were with the implementation and not the tool itself.

    I raised some of these points with my team lead during and immediately following the training. And some of the points I summarised in the feedback form.

    After 4 attempts to raise issues using different methods, I was never once approached to further discuss any of the concerns I raised. You know, there's only so much you can do before you move onto the next thing, right?

    I am a professional tester. I raise issues, risks and concerns in the attempt to try and improve things. From this blog post, a Product Manager from HP contacted me to discuss my thoughts on how I might help improve the BPT product and I am happy to help.

    Based on the evidence, it appears that this blog post was more successful than my previous direct attempts to try and improve the training.

    At the end of the day, QC is a tool that can be used and misused in ways that leads to poor testing that provides very little value. That is what must die. Bad testing must stop. We can do much better.

  18. As a Software Tester who has been 'forced' to use this tool for 6 years now, the blogger is RIGHT ON THE MONEY. And well stated.

    To Anonymous - if you are too insecure in yourself and this tool to actually post your name, shut up. You are offering nothing but chicken shxx opinions that have no place in the business world.

  19. I wholeheartedly agree. My exposure to QC is more limited than most of you here. However, I could never shake the feeling that it leads to "tick box" testing.

    It makes creative testing hard. Capturing the reality of an ad-hoc test that you conducted (because you saw something in a previous test that wasn't a failure case, but wasn't optimal behaviour) in a new test was a serious pain.

    Once the tests in the tool had been run, "testing was complete"? Huh? What? I'd review the tests, and be astounded at how *little* was actually in the tool.

    Give me Jira, and a testing team that reviewed the user stories/use cases, gave their insight into the user process, worked to understand the domain model (independent of the SQL model underneath), and writes tests in their tool of choice - if that's Excel, that's fine. I know they will test significantly better than any team using a HP QC driven process.

  20. David P, Quantech Systems - Apr 11, 2013

    Quite an interesting read. I do however feel that Paul has a slightly biased opinion of QC that could be unfounded.

    Like many professional testers, I do feel that QC lends itself to a narrow-minded, scripted "tick-box" style of testing, and is not good at exploratory testing. That doesn't make QC bad per se, though; it can be used as a repository in much the same way as your traditional excel/word ways of working, but it also allows you to link defects to test cases and steps in a manner that allows for traceability; something which can be hard to show with your traditional excel piles of documents or Jira stories. (I have used Jira and I like it, btw, among many other testing tools in my 16 years of QA experience.)

    Use QC as you would use excel/word to document your tests, but link your defects, and include high-level "title only" exploratory tests in your QC test catalog. By documenting your exploratory thoughts up front, you can often provoke more creative test documentation, and other team members can often add valuable extra pairs of eyes to increase coverage outside the scripted method of using QC. I have been using QC like this for years, and it works for me. I have consistently stayed away from the BPT module, and until recently was always frustrated by the lack of maturity in the test lab module, but much of that has now (eventually) been addressed.

    You can be collaborative, you can still get your team thinking outside the box and testing creatively, but you dump it all in one repository that appeases the higher level PM and execs to show that you have thought about your testing, you have covered your requirements, you have executed what you said you would execute, and what bugs have been found have been resolved (or deferred?)

    QC does have a place, but I agree that the BPT enforced flow type of tests that HP themselves advocate and try to shoehorn in do not make for a happy tester haven. I'm happy using QC for my structured tests and my unstructured tests, linking my defects to tests and even logging and linking to requirements if the need arises (although I tend to start at the test plan module and work forwards from there TBH) I am also happy letting my manager report on the test artifacts within QC to gauge how far on we are in a test cycle.

    Certainly don't use QC as your only tool, but identify where it can provide useful input to your team. If you don't have a test process in place, QC can help provide one that you can start with, but if you have a process that works, like Paul, then don't try to shoehorn QC into it.


    David P, Quantech Systems

  21. Paul, leaving aside BPT, which I have no experience of, to my mind your criticism of QC seems a little out of proportion. It would be clearer if you offered a point of comparison (even if this was only to describe how excel or paper and pen could offer a better alternative) for your points of criticism. I have some experience of QC, and this was working under an agile methodology, and it seemed no better or worse than mediocre. Yes it does become a bit of a box-ticking exercise, however this is a part of testing, and the bit the auditors like to see. I’m not sure I understand the setup complexity either: upload requirements, create test cases linked to requirements, create test plan and execute.

    It can be difficult for testers who are outside of the development circle (yep, this may be the problem, I know) to test more than the predefined scenario presented to them; QC or any other QA tool provides a decent way to get the boxes ticked. Testing within the development team can be more flexible and creative. I guess the real story is balancing the need for "box-ticking" against providing the necessary level of testing depth and breadth. QC helps facilitate the first aspect more than the second.

    For the sake of openness I am not a tester and so QC is generally someone else’s issue…maybe I am just selfish!

    Stu

  22. Here is why it sucks:

    Only works in IE, "the least used browser available"
    To be more specific (IE7 - IE8)
    Plugins only work with the 32-bit version of Office
    You must run your browser with protected mode off
    QC downloads a crap load of ActiveX files onto the user's local machine
    Uses tons of memory, bloated with tons of useless features
    QTP uses VBScript with a horribly weak IDE
    Randomly crashes from time to time

    Quality center is a steaming pile of monkey crap.

    Replies
    1. I totally agree. I write Python test frameworks that can be executed on small Linux computers (Raspberry Pi). The company I am contracting to just decided to shift the test cases to QC.... I read that ALM 11+ supports a REST interface so I had no objection.... NO it does not - it supports a tiny subset of its features. Why does any modern test tool not use open standards? And don't get me started on the actual interface.. oh my good god.. really.. Define a test as Automated... that's it, now you can't change it back... One of the features in any project I design is the ability to move a test between Manual, Automated and Augmented.. Most tests start out as Manual tests and evolve to Automation... Now not possible! You must duplicate the test, mark it Automated, and block the Manual test?? Duplication is the enemy!

  23. It's been a few years since I had to use QC (2010 was the last time, I think) so I'm not sure if it has changed, but this post brought all the bad memories rushing back :)

    Essentially, and in common with *many* test management tools, it isn't designed for testers to use in their day-to-day work; it is for management to get information on what testers are doing, or more generally on the overall status of the test phase of a waterfall project.

    In addition to the needlessly complicated setup work required before anything can start to happen, the bits that used to drive me crazy were the way that all test cases were sorted alphabetically within a folder, and modal dialogue boxes so you can't refer to your previous work when e.g. writing a new test case.

  24. My name is Jerry, and I have been doing consulting for over 18 years - in the testing arena - across many states (CA, TX, the Northeast) and globally. Kudos to you, Paul, for having the self-confidence to express your personal adverse experience with HP products. Reading your piece carefully, and the replies from what obviously seem like HP salespeople or company folks on the thread (defending HP), I totally understood the essence of your blog title and what you really meant by it. Actually, the bias was obvious to me from those who were too quick to basically categorize you as almost "loony" and "emotionally in distress". Well, fact is that I go way back....since Y2K testing, and have used QC and QTP....and I can submit 5 points with confidence:
    1) HP has spent millions on advertisement and promotions...and that accounts more for the popularity of their tools than for the quality of them.
    2) In recent years, HP has changed their business model from direct distribution, support, and sales of their products to having vendors (contracting companies) do the selling and distribution for them...this is proving to be a nightmare from a communications and support angle, and is leading to a lot of frustrations.
    3) Years ago, you could go on the HP site and clearly view their license and price options....now, they force you to sign up with a vendor rep and allow them to give sales presentations and more pitches before you can get price info...this is proving to be a time killer and a frustration for companies that are already informed about the products and simply want to move ahead with assessing purchasing decisions...so you have to wonder why HP has moved to such an "unfriendly" and "time consuming" model.
    4) Absolutely.....over the years, HP has cluttered their software with so much code that it has put a strain on performance....I'm hearing across many blogs how QC adds more and more clutter to your machine's memory as you use it, and that performance is being compromised.
    5) The biggest hurdle of all....HP has found itself in a battle with its own business model by having to charge astronomical license prices (compared to other tools) in order to offset the millions being spent on marketing and the commissions they now pay vendors (their new sales force).

    From a business angle, I think HP is slowly killing itself through these very bad marketing decisions. Regarding product knowledge...I hate to burst the bubble for those defending HP in this blog, but your experience with that trainer is today more the rule than the exception with HP.
