Reading James Bach's post about labels for test techniques was a big relief. I find discussions about what testing terms mean limiting and frustrating. Very few truly "common" terms exist in testing, and this makes it hard to talk amongst ourselves about what we do.
Names aren't bad, and they do facilitate discussion. Like them or hate them, programming patterns at least make it easy to have a conversation about common structures. Of course, this says nothing of people faking knowledge of the pattern under discussion. That is another post.
Establishing what terms mean for your team is a necessary first step. I'm going through this right now; so far we have a lot of similar shared past experience, there is plenty of overlap, and we haven't run into any serious misunderstandings or had any heated debates.
I second the recommendation of Gerald Weinberg's book. I need to read it again as a refresher on these topics.
Monday, December 31, 2007
Monday, December 24, 2007
Spend your time testing, not documenting
Every time I need to write a test plan or document test cases, I consider the role of documentation in testing. If you are in an industry segment that requires a lot of documentation, things are easy: produce the docs. If you aren't required by law to produce a certain level of documentation, it becomes harder.
The most consistent advice I've found is to document only to the extent needed to give as good an accounting of your testing as you are required to give.
When your project leadership asks what the state of the software is, can you give a good answer based on test data?
Do you know if you have addressed the risks in your dev cycle as best you can given your resources?
As I've mentioned previously, I'm building a new team and we are finding our way on processes. Documentation came up very early as something we would need to figure out. I keep spreadsheets and notebooks of my testing. Along with emails, bug reports, and a lean test plan, I feel like my work is adequately documented for our project. However, the other tester on my team wasn't documenting as much. So, when a chunk of functionality he had recently tested came back around after a major refactoring, I asked him to document his test cases in a spreadsheet: basic, lightweight test documentation I'm sure most testers are familiar with. We are using Microsoft Team Foundation Server, but I'm not yet comfortable enough with its test documentation features to use them. I want some flexibility so we can learn more about how we are going to work with our test case assets.
Anyway, a few days passed and I went to check up on the testing progress from my other tester. He hadn't gotten very far.
"How much testing and how much documenting are you doing?"
"About 50/50".
Oh.
That struck me as too much documentation for our team. 80/20 seemed better. Our job is to give timely feedback to the dev team and get as much testing done as we can. I'd like to think 80% testing and 20% documenting is a good balance. Analyzing the software and coming up with test cases is a separate activity that falls outside of this time recommendation. The 20% spent on documentation is the actual writing out of tests, filing bugs, taking notes, etc. as you test.
I don't yet know how this will work out, but for now this is the heuristic I'm going to use.
Tuesday, December 18, 2007
Nice job kid!
One of the questions I ask in my phone screens for potential new tester hires is a classic: give me the test cases to test a Coke vending machine.
http://www.5min.com/Video/How-To-Hack-a-Soda-Machine-2497
I think I know where to look for my next hire!
Monday, December 10, 2007
The Ongoing Struggle with Tool Acquisition
Once again I find myself in the situation where a new tool was adopted without first verifying that it could support the end-to-end use cases of our team. This is not an uncommon problem, but it is always frustrating.
We are using Team Foundation Server, and MSBuild is the problematic piece. I'm a big believer in a "one button build" that goes from clean machine to installation media. If any step in between fails, the entire build is considered to have failed. We are using WiX to create our installer, and between it and MSBuild we could not sign our builds or make the overall build fail when the binary or installer build failed. So we had to split things up and build the binaries and installer separately, and then delete stale binaries so that the installer build would correctly fail when the binary build did. It really lowered my confidence in the builds. And good builds that you believe in are a foundation of testing.
I've seen the same problems with 3rd party libraries. "We don't have time to build this ourselves, so we need to buy this fancy huge library package." Then it turns out you use 5-10% of the library and you would have been better off just building that small piece. Now you are tied to the vendor's release cycle, and Murphy will always make sure there is a bug right before you ship but six months away from the vendor's next release. Finding bugs in 3rd party code is my least favorite kind of bug.
I don't see the issue of tool acquisition getting any attention in the talk about what teams need to do better. Somehow we are all staggering around, grabbing things off the shelf that look about right, and suffering the consequences.
It is a lot of work to qualify a tool. It is just like testing your own applications. When has your team treated it as such and written acceptance tests for a new tool you are considering?
Monday, December 3, 2007
"It is not possible to tell the difference between high-quality code and poor testing."
For it is a dictate of common sense, that we can be under no obligation to do what it is impossible for us to do.
--Thomas Reid
"...defect removal in total has been the most expensive cost element of large software applications for more than 50 years."
--Capers Jones, Estimating Software Costs
It's fair to say that current testing theory is based on the premise that "complete" testing is impossible. With this given, the idea is to make the most intelligent and practical selection of tests to run, from all available tests, given the time and resources available to you. The testing processes and techniques we employ in our projects are all a result of this practical necessity. We spend a lot of our time thinking about, and performing, testing as the main activity to improve quality.
A new book, The Practical Guide to Defect Prevention, makes the argument for giving more attention to defect prevention. It seems to make the argument that testing is necessary and fine as far as it goes, but it doesn't go nearly as far as we think, given the amount of effort we are willing to devote to it and the results we have gotten as an industry.
The authors argue that defect detection is the first and most common level of activity that teams undertake to improve software quality. "Test quality into the software". Many defects are detected this way, but nothing is done to prevent them recurring elsewhere in the application. The next level of software quality improvement is analysis. Previous defects are analyzed to see trends and find out why they weren't detected earlier. The long term goal and highest level is prevention. At this level active measures are used to identify and eliminate potential defects. The root causes of defects are found and used to eliminate them before they are introduced.
"The ultimate goal of any quality improvement effort should be to enable the development team to invest more time in defect prevention activity."
The low efficiencies of defect removal explain why there is often a long series of defect removal steps in software projects. When unit testing, function testing, integration testing, and system testing each find less than 50% of the bugs present in an application, it is reasonable to question your quality improvement efforts and see if you are spending your time and resources most productively. I'll be giving defect prevention more consideration in my projects.
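The arithmetic behind the long series of removal steps is worth seeing once. If each stage catches only a fraction of the defects still present, stages multiply: the numbers below are illustrative assumptions (35% per stage, well under the 50% ceiling mentioned above), not figures from the book.

```python
# Illustrative only: assume each test stage independently removes 35%
# of the defects still present when it runs.
stages = {"unit": 0.35, "function": 0.35, "integration": 0.35, "system": 0.35}

remaining = 1.0
for name, efficiency in stages.items():
    remaining *= (1.0 - efficiency)  # fraction of defects that slip past

cumulative = 1.0 - remaining
print("cumulative removal efficiency: %.1f%%" % (cumulative * 100))
```

Four stages at 35% each combine to roughly 82% cumulative removal, which still lets nearly one defect in five escape to the field; this is why stacking ever more detection stages hits diminishing returns and why prevention starts to look attractive.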