Once again I find myself in the situation where a new tool was adopted without first verifying that it could support our team's end-to-end use cases. This is not an uncommon problem, but it is always frustrating.
We are using Team Foundation Server, and MSBuild is the problematic piece. I'm a big believer in a "one button build" that goes from clean machine to installation media. If any step in between fails, the entire build is considered to have failed. We are using WiX to create our installer, and between it and MSBuild we are not able to sign our builds or have the build fail when the binary or installer build fails. So we had to split things up and build the binaries and installer separately, and then delete binary builds by hand so the installer build would fail correctly. It really lowered my confidence in the builds. And good builds that you believe in are a foundation of testing.
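The fail-fast "one button build" described above can be sketched as a small driver script. This is only an illustration, not our actual build: the step commands (MSBuild, signtool, WiX) are hypothetical placeholders, and the point is simply that every step runs in order and the first nonzero exit code fails the whole build.

```python
import subprocess
import sys

# Hypothetical steps for illustration only; a real pipeline would list its
# actual build, sign, and packaging commands here.
STEPS = [
    ["msbuild", "Product.sln", "/p:Configuration=Release"],       # build binaries
    ["signtool", "sign", "/f", "cert.pfx", "bin/Product.exe"],    # sign binaries
    ["msbuild", "Installer.wixproj", "/p:Configuration=Release"], # build installer
]

def run_build(steps):
    """Run each step in order; stop and fail the whole build on the first error."""
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            print("Build failed at step: " + " ".join(step), file=sys.stderr)
            return result.returncode  # later steps never run
    return 0
```

Calling `sys.exit(run_build(STEPS))` from the build server gives the one-button property: either you get installation media, or the build is red.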
I've seen the same problems with 3rd party libraries. "We don't have time to build this ourselves, so we need to buy this fancy huge library package." Then it turns out you use 5-10% of the library, and you would have been better off just building that small piece. Now you are tied to the vendor's release cycle, and Murphy will always make sure there is a bug right before you ship but six months away from the vendor's next release. Finding bugs in 3rd party code is my least favorite kind of bug.
I don't see the issue of tool acquisition getting any attention in the talk about what teams need to do better. Somehow we are all staggering around, grabbing things off the shelf that look about right, and suffering the consequences.
It is a lot of work to qualify a tool. It is just like testing your own applications. When has your team treated it as such and written acceptance tests for a tool you are considering?