Range: What was wrong with NASA’s Challenger?

Let me share a short background story about the explosion of NASA's space shuttle Challenger. The book Range tells it in a technically friendly way, connecting the accident to a natural human tendency: people are fundamentally afraid of dropping their familiar tools.

Range by David Epstein

This story is cited from the book Range: How Generalists Triumph in a Specialized World by David Epstein. I gained a lot of practical and philosophical insight reading this "antithesis of the 10,000-hour rule from Malcolm Gladwell's Outliers". You can easily guess the content from the title: through real stories, the author explains how becoming a generalist enables people to see their blind spots. Here I present one interesting story, the Challenger disaster:

In a Harvard Business School case-study class, the students were given a racing problem (the Carter Racing case). They were split into small groups, and each group eventually had to decide either "race" or "don't race". The professor handed out some data, such as the engine operating temperatures at which failures had occurred. The temperature forecast for race day was also provided: at its coldest, it would reach 40 degrees.

Figure 1. Number of failed engine components at different operating temperatures

Naturally, each group split into two opposing camps. Those against racing argued that the projected temperature (40 degrees) was unknown territory, and that the graph above (Figure 1) showed no apparent relation between temperature and engine failure, so they did not want to take the risk. Those in favour argued from the financial side: only a 26 percent chance of success (meaning no engine failure) was needed to obtain the desired profit. They also thought the opposing camp was reading the graph with a bias: the breakage of three components at 53 degrees had produced only a single engine failure, and if you took the median temperature of 65 degrees, four engines had failed below it and three above it. They concluded that the probability of failure at 40 degrees was still below the "profitable limit" of 26 percent (three out of seven races). Eventually, this group decided to "race".
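To see how the "failures only" view can support that reading, here is a minimal sketch in Python of the median-split argument. The seven temperatures below are placeholders I made up for illustration, not the actual numbers from the case:

```python
# Illustrative sketch of the "failures only" reasoning.
# The seven temperatures are placeholders, NOT the real case data.
failure_temps = [53, 57, 63, 65, 67, 70, 75]  # races that had an engine failure

median = sorted(failure_temps)[len(failure_temps) // 2]       # 65 in this toy set
failed_at_or_below = sum(t <= median for t in failure_temps)  # 4
failed_above = sum(t > median for t in failure_temps)         # 3

# Failures sit on both sides of the median, so the pro-racing camp read the
# chart as "temperature has no obvious effect on failure".
print(f"median = {median}, at/below = {failed_at_or_below}, above = {failed_above}")
```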

The next day, each group's decision was presented in front of the whole class. Unsurprisingly, the majority had chosen to "race", with rationales similar to the one explained above. Then the professor asked a question: "How many times did I say yesterday, if you want additional information, let me know?" A question that seems simple but carries an important, subtle message. At this point the students were shown that they had failed to ask for more relevant data. The professor then put this shocking graph in front of everyone (Figure 2 below).

Figure 2. Number of failed and not failed engine components at different operating temperatures

Can you guess what this graph implies? When the data for the engines that did not fail are added, you can see that every successful run happened at an operating temperature above 65 degrees. Not a single engine survived below 65 degrees in all 24 races. The professor then concluded that the probability of engine failure at 40 degrees is 99.4 percent. He sarcastically closed the forum: "Do we have any remaining fans of racing?"
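To make the professor's point concrete, here is a minimal sketch of the kind of analysis that can produce such an estimate: fit a simple logistic regression to outcome-versus-temperature data for all races, successes included, then query it at the 40-degree forecast. The temperatures and outcomes below are placeholders I made up (keeping the pattern that nothing survived below 65 degrees), not the real case or NASA figures, so the printed number will not be exactly 99.4 percent:

```python
# Sketch: include the successes, fit a simple model, query it at 40 degrees.
# The data are illustrative placeholders, NOT the real case or NASA figures.
import numpy as np
from sklearn.linear_model import LogisticRegression

temps = np.array(
    [53, 57, 63, 65, 67, 70, 75,                                       # 7 races with a failure
     66, 67, 68, 69, 70, 70, 72, 73, 74, 75, 76, 76, 77, 78, 79, 80, 81]  # 17 clean races
).reshape(-1, 1)
failed = np.array([1] * 7 + [0] * 17)  # 1 = engine failure, 0 = clean run

model = LogisticRegression(max_iter=1000).fit(temps, failed)

# Estimated probability of an engine failure at the 40-degree forecast
p_fail_40 = model.predict_proba([[40]])[0, 1]
print(f"Estimated failure probability at 40 degrees: {p_fail_40:.1%}")
```

Once both outcomes are on the chart, the relationship the failures-only view hid becomes obvious, and that was exactly the extra data the students never asked for.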

The other plot twist was that those graphs were taken from the real data behind NASA's wrong decision to launch the space shuttle Challenger. Instead of a "head gasket", though, the Challenger's failure came from "O-rings: the rubber strips that sealed joints along the outer wall of the missile-like rocket boosters that propelled the shuttle. Cool temperatures caused O-ring rubber to harden, making them less effective seals" (cited from the book). NASA's engineers went through the same kind of discussion as the Harvard students and fell into the same misjudgement. The executives gave the green light to launch Challenger because they trusted the quantitative data too much, and they failed to look for any other relevant tools that could have helped them make the right decision.

There was a bold slogan at NASA: "In God We Trust, All Others Bring Data". It became a strong culture among the engineers: nobody had the courage to voice an opinion without supporting data (quantitative data, in this case). Unfortunately, in the Challenger project this led them the wrong way. A few engineers did possess qualitative data in the form of images of the O-rings during earlier failures, images suggesting that an O-ring failure could be catastrophic for the entire system (that 99.4 percent failure probability). But those few engineers did not have the guts to push this opinion, since it was not quantitative data, the only kind strongly endorsed by NASA's slogan and culture.

From this real event we learn that dropping familiar tools and using unfamiliar ones can help us detect disasters that may come from any direction. Here the familiar tool is quantitative data and the user is the engineer. The book gives other examples as well: firefighters (the users) who dropped their equipment (their familiar tools) survived dangerous situations, while those who kept holding on to it were caught by the flames. Go read the book yourself for an enjoyable and fruitful reading experience….

MW

