Moody’s error gave top ratings to debt products
By Sam Jones, Gillian Tett and Paul J Davies in London
Moody’s awarded incorrect triple-A ratings to billions of dollars worth of a type of complex debt product due to a bug in its computer models, a Financial Times investigation has discovered.
Internal Moody’s documents seen by the FT show that some senior staff within the credit agency knew early in 2007 that products rated the previous year had received top-notch triple-A ratings in error and that, once a computer coding bug was corrected, their ratings should have been up to four notches lower.
News of the coding error comes as ratings agencies are under pressure from regulators and governments, who see failings in the rating of complex structured debt as an integral part of the financial crisis. While coding errors do occur, there is no record of one being so significant.
Moody’s said it was “conducting a thorough review” of the rating of the constant proportion debt obligations – derivative instruments conceived at the height of the credit bubble that appeared to promise investors very high returns with little risk. Moody’s is also reviewing what disclosure of the error was made.
The products were designed for institutional investors. In the recent credit market turmoil, those who still hold the products will have suffered some paper losses, while others who bailed out have lost up to 60 per cent of their investment.
On discovering the error early in 2007, Moody’s corrected the coding glitch and instituted methodology changes. One document seen by the FT says “the impact of our code issue after those improvements in the model is then reduced”. The products remained triple A until January this year when, amid general market declines, they were downgraded several notches.
In a statement to the FT, Moody’s said: “Moody’s regularly changes its analytical models and enhances its methodologies for a variety of reasons, including to reflect changing credit conditions and outlooks. In addition, Moody’s has adjusted its analytical models on the infrequent occasions that errors have been detected.
“However, it would be inconsistent with Moody’s analytical standards and company policies to change methodologies in an effort to mask errors. The integrity of our ratings and rating methodologies is extremely important to us, and we take seriously the questions raised about European CPDOs. We are therefore conducting a thorough review of this matter.”
Credit ratings are hugely important within the financial system because many investors – such as pension funds, insurance companies and banks – use them as a yardstick either to restrict the kinds of products they buy, or to decide how much capital they need to hold against them.
The world’s other major credit agency, Standard & Poor’s, was the first to award triple-A status to CPDOs. Many investors, however, require ratings from two agencies before they invest, so Moody’s involvement supplied that crucial second rating.
S&P stood by its ratings, saying: “Our model for rating CPDOs was developed independently and, like our other ratings models, was made widely available to the market. We continue to closely monitor the performance of these securities in light of the extreme volatility in CDS prices and may make further adjustments to our assumptions and rating opinions if we think that is appropriate.”
from http://www.ft.com/cms/s/0/0c82561a-2697-11dd-9c95-000077b07658.html?nclick_check=1
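It's striking how invisible a bug like this can be from the outside. Here's a toy sketch of the general idea, not Moody's actual model: every name, number, and threshold below is invented purely to show how a one-line coding error can move a computed rating several notches.

```python
# Toy illustration only -- these thresholds, the loss model, and the
# "bug" are all invented; none of this reflects Moody's actual model.

NOTCHES = ["AAA", "AA", "A", "BBB", "BB"]
THRESHOLDS = [0.002, 0.006, 0.012, 0.025]  # max loss probability per notch

def rate(loss_probability):
    """Map a modeled loss probability to a rating notch."""
    for notch, cutoff in zip(NOTCHES, THRESHOLDS):
        if loss_probability <= cutoff:
            return notch
    return NOTCHES[-1]

def modeled_loss(spread_loss, leverage, buggy=False):
    """Toy loss model; the 'bug' forgets to multiply by leverage."""
    if buggy:
        return spread_loss            # coding error: leverage dropped
    return spread_loss * leverage     # corrected model

print(rate(modeled_loss(0.002, 15, buggy=True)))   # AAA -- the flawed rating
print(rate(modeled_loss(0.002, 15, buggy=False)))  # BB -- four notches lower
```

The point of the sketch is only that the error lives in one multiplication, yet the visible output jumps across the entire scale, which is exactly why it can go unnoticed until someone audits the code.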
Thursday, May 22, 2008
Thursday, May 1, 2008
PNSQC 2008 Last Call for Papers

World-class quality does not happen in a vacuum. Agile-inspired collaboration spans levels, disciplines, and industries. We would like to hear your ideas and experiences on…
• Collaboration between individuals
• Collaboration between teams
• Collaboration between companies
• Collaboration between industries
The Selection Committee evaluates submissions on their originality, significance, soundness, clarity, and relevance to the conference theme. Papers presented at PNSQC are peer-reviewed during the summer months.
Paper presenters receive complimentary admission to the technical program, October 14-15, 2008, and their papers are published in the conference Proceedings.
What can the software quality industry learn from quality in other industries like education, health care, manufacturing, government? What can other industries learn from us? Tell us.
Deadline for submissions is extended to May 1, 2008.
To submit or find out more visit www.pnsqc.org
Tuesday, April 29, 2008
Feynman on testing, I mean science
"That is the idea that we all hope you have learned in studying science in school -- we never say explicitly what this is, but just hope that you catch on by all the examples of scientific investigation. It is interesting, therefore, to bring it out now and speak of it explicitly. It's a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty -- a kind of leaning over backwards. For example, if you're doing an experiment, you should report everything that you think might make it invalid -- not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you've eliminated by some other experiment, and how they worked -- to make sure the other fellow can tell they have been eliminated."
I've found myself giving what I thought would be a quick synopsis of an issue I discovered, only to have it turn into what felt like a long, conditional, and ultimately not very useful summary. Reading what Feynman says makes me realize I was passing along more valuable information than I thought.
I like thinking about testing from the perspective of a scientific experiment, and this quote lends some validity to that line of thought. While providing all of this information could seem like it weakens the findings ("Well, I found this, but at the time the server was doing something funny..."), reporting everything you saw, along with your consideration of that evidence, helps anyone who has to make a decision based on your work. In the end, providing information to decision makers is what testing is about, so I'm going to stop worrying when I have a lot to report.
Monday, April 7, 2008
The First Law of Software Engineering
If you don't care about quality, everything else is trivial.
--Jerry Weinberg
Monday, March 10, 2008
Testers Evolving
James McCaffrey has a post about the software tester "prestige" issue at MS. I think the discussion mostly holds for the industry in general, setting aside the MS-specific parts of the post. The comment responding to James's points rounds out the argument and highlights some encouraging trends.
I've brought up points like this at conferences and had the whole room go silent. It is a touchy issue for testers. Are we as valuable as developers to the dev process? Are agile methods a threat to testers?
One of the points in the comments states that devs and testers with equivalent skill level in their disciplines are now closer in pay.
"While one can say that 2 - 3 years ago the average level of a tester was about 2 levels lower than that of the average developer over the past 3 years we have seen the level of testers increase and the disparity in levels between developers and testers decrease."
This is an interesting point that doesn't often make it into the discussion. I know in my recent hiring, I've had a hard time finding testers that could do more than just run through test cases and file bugs. In any case, the trend is very positive.
I have the skills to be either a developer or a tester, so I've been asked why I am a tester. "I like testing" is the only answer I can give. However, on some teams, my "dev skills" were somehow a threat to the developers. I like seeing the trend of testers with more of these skills. It makes for stronger teams.
"We are starting to see the 'self-fulfilling prophecy' of testing having lower prestige disappear. In many groups now testers are required to debug to line of code. In some groups testers are actually checking in bug fixes. In other groups testers are shipping automated tests on SDKs to customers."
With trends like this, I hope I won't have to defend my debugging skills as much.
It would be great to see more hard skill tracks like this at testing conferences if this really is the future.
Friday, February 22, 2008
Don't look!
A while back Jon Bach posted a blog entry about a local CSI detective visiting a testing meeting to talk to them and maybe inspire some new thinking with ideas from outside software testing.
I'm always interested in this kind of thing, so when I was browsing at my local technical bookstore and saw a CSI textbook, I picked it up. I can't give a full review, but from the glimpse I got, CSI seems to be mostly about violent crime. As such, it turns out a textbook on CSI can be full of unpleasant crime scene photos.
So, word to the wise:
Leave the CSI book on the shelf and just read Jon's post :)
Monday, February 4, 2008
Friday, January 11, 2008
Objectives for Program Testing
I'm feeling a bit overwhelmed with my testing schedule at the moment, and I'm in a strange enough mood that this makes me feel better :)
"In September of 1962, a news item was released stating that an $18 million rocket had been destroyed in early flight because "a single hyphen was left out of an instruction tape."... The nature of programming being what it is, there is no relationship between the "size" of the error and the problem it causes. Thus, it is difficult to formulate any objective for program testing, short of "the elimination of all errors" - an impossible job."
--Gerald M. Weinberg, "The Psychology of Computer Programming: Silver Anniversary Edition", ISBN: 0932633420, Chapter 13, page 247
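Weinberg's point that error "size" says nothing about impact is easy to demonstrate. Here's a made-up sketch, not the actual 1962 rocket code, where the only difference between the two functions is a single missing character:

```python
# Illustrative only: a single missing minus sign whose effect
# dwarfs its "size", echoing Weinberg's point above.

def correction(error):
    return -0.5 * error   # damp the error each step

def buggy_correction(error):
    return 0.5 * error    # one character missing: now it amplifies

v = 1.0
for _ in range(10):
    v += correction(v)        # halves each iteration
print(v)                      # about 0.001

v = 1.0
for _ in range(10):
    v += buggy_correction(v)  # grows by 50% each iteration
print(v)                      # about 57.7
```

Ten iterations in, the one-character bug has produced an answer tens of thousands of times too large. A diff tool would score this as the smallest possible change.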
Wednesday, January 9, 2008
Testing Objects
I have never specifically tested object oriented software from the perspective of it being OO software.
I have several books on this topic:
Testing Object-Oriented Systems: Models, Patterns, and Tools
A Practical Guide to Testing Object-Oriented Software
I don't often see this discussed at testing conferences or frequently in articles so I'm not sure it is mainstream.
I think the books make a good case for the testing they discuss, but on the face of it, OO testing looks complicated and is quicker to grasp with a coding background. That is probably why it hasn't made it into common practice: SDET-style testers are still rare in the industry overall.
By not doing this type of testing, what coverage are you missing? I haven't found many bugs in the OO software I've tested that related to OO structure issues, other than complicated OO code that, it turned out, the developer didn't fully understand.
This could be another reason we haven't been forced to deal with this perspective. Maybe there still isn't a lot of good OO code out there that would require it. Our system tests catch the issues, and OO specific techniques aren't generally required.
I need to study the issue more, so I don't have as many answers as I would like. Seeing the books on my shelf recently reminded me to take another look and consider this perspective.
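To give a flavor of what "testing the OO structure" can mean, here's my own minimal sketch (the classes are made up, and this is just one technique of the kind those books cover): run the same contract test against every subclass, so an override can't quietly break the superclass's promises.

```python
# Sketch of a subclass contract test (substitutability check).
# All class names here are invented for illustration.

import unittest

class Shape:
    def area(self):
        raise NotImplementedError
    def scaled(self, factor):
        """Contract: scaling by f must multiply area by f * f."""
        raise NotImplementedError

class Square(Shape):
    def __init__(self, side): self.side = side
    def area(self): return self.side ** 2
    def scaled(self, factor): return Square(self.side * factor)

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2
    def scaled(self, factor): return Circle(self.r * factor)

class ShapeContract:
    """Shared tests; each concrete test class supplies make()."""
    def make(self):
        raise NotImplementedError
    def test_scaling_scales_area(self):
        s = self.make()
        self.assertAlmostEqual(s.scaled(3).area(), s.area() * 9)

# Mixin first so make() resolves; TestCase supplies the assertions.
class TestSquare(ShapeContract, unittest.TestCase):
    def make(self): return Square(2)

class TestCircle(ShapeContract, unittest.TestCase):
    def make(self): return Circle(2)
```

Run it with `python -m unittest`. The interesting part is that a new subclass gets the whole contract suite for free just by adding a two-line test class, which is a very different angle of attack than black-box system testing.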