When data becomes deadly
13th June, 2022
Published in The Sunday Times on June 12th 2022
Picture this scene: Carter Racing is a car racing team, and John Carter, its co-owner, has only an hour to decide whether to enter the most important auto race of the season. It’s your job to help him decide, so read on.
Carter Racing case study
The race will be broadcast live on national television with the potential for major prize money. If his team wins, it will get a sponsorship deal and a chance to start turning a profit after years of losses.
There’s a problem. In seven of the past twenty-four races, the engine has blown out. An engine failure live on TV will jeopardise sponsorships — and even the driver’s life. But withdrawing has consequences too. The entry fee means finishing the season in debt, and the team won’t be pleased with missing the opportunity to win. Complicating his decision are conflicting data about the likely cause.
One of the engine mechanics, a high-school dropout who has been working on the racing circuit for several years, thinks that the engine’s head gasket might be stressed to the point of failure in cold weather. The chief mechanic is not convinced.
To help Carter decide what to do, a graph is devised that shows the conditions during each of the blowouts: the outdoor temperature at the time of the race plotted against the number of engine failures (see Figure 1). The dots are scattered across a range of temperatures from about fifty-five degrees to seventy-five degrees.
Figure 1: Blowout data
It is only 40 degrees Fahrenheit one hour before the race is due to start, well below anything the cars have experienced before. What’s your advice to Carter: race or withdraw?
Race or withdraw?
This case study is based on real data. It has been shown to students around the world for decades. Most groups presented with the case study look at the scattered dots on the graph and decide that the relationship between temperature and engine failure is inconclusive. Almost everyone chooses to race.
Very few look at that chart and ask to see the seventeen missing data points: the data from those races which did not involve engine failure.
The addition of those points makes a world of difference. The risk of a cold race jumps off the page. Every race in which the engine performed well was conducted when the temperature was higher than sixty-five degrees; every single attempt that occurred in temperatures at or below sixty-five degrees resulted in engine failure. The race would almost certainly end in catastrophe.
Figure 2: Complete data set
There’s a twist to this tale: the points on the graph are real but have nothing to do with auto racing. The data relates to space shuttle launches and was compiled the evening before the launch of the Challenger shuttle in 1986. Some engineers used the chart to argue that the shuttle’s O-rings (rubber rings used to seal against liquid or gas leaks) had malfunctioned in the cold before and posed a major risk. But most of the experts were unconvinced. Nobody seems to have asked for the additional data points, the ones they couldn’t see. The launch went ahead despite the weather. The complete chart would have shown the truth at a glance.
Investment links to the Carter Racing case study
This being an investment column, it may look as though I have veered from my brief. But I think there are several links to investing in this case study.
Charts rarely have such deadly consequences. But I have seen many displays of investment data used to support the case for products which have fallen well short of appropriate standards. Hiding the failures and letting people’s survivorship bias do the rest is a trick of all sorts of advertisements. So the first thing you should do when presented with data is ensure that it is complete and not hiding the failures.
Of particular relevance to investing is what the Carter Racing case study has to say about decision making under pressure.
In the case study, the reference point was to race (go, not no-go), so every deviation was judged from that anchor. The literature on behavioural science shows that once an anchor is established, decisions become mere adjustments around it, and are biased as a result. Think of an individual stock that you owned. Was the price you bought it at ever part of the consideration when making decisions about it? That’s anchoring, and good outcomes are much less likely when it’s part of your decision framework.
There is also a bias called framing at play in this story. Losses are harder to accept than equivalent gains and people take more risks when faced with a certain loss. Not racing was framed as a loss, and so decision making became riskier. The link with investing here should be clear.
And finally, there’s confirmation bias involved. Disconfirming evidence is undesirable, so we generally seek out information which accords with our desired conclusion. Making investment decisions is very important and can sometimes have lifetime implications. The decisions need to be the right ones, not necessarily the ones that are most comfortable. External guidance is key.
Warning: The information in this article does not purport to be financial advice and does not take into account the investment objectives, knowledge and experience or financial situation of any particular person. It is not a recommendation or investment research and is defined as a marketing communication in accordance with the European Union (Markets in Financial Instruments) Regulations 2017. You should seek advice in the context of your own personal circumstances prior to making any financial or investment decision from your own adviser.
Warning: Past performance is not a reliable guide to future performance. The value of your investment may go down as well as up. You may not get back all of your original investment. Returns on investments may increase or decrease as a result of currency fluctuations.
Warning: Forecasts are not a reliable indicator of future performance.