Do you really make assumptions all the time?

I read a fascinating article in Wired magazine by Jonah Lehrer, in which he discusses the phenomenon of our brains making assumptions about how things work, based on the data we have collected.  We collect data in our brains all the time, and when we analyse that data we start drawing all sorts of assumptions and conclusions from it.

And of course we can never have all the data; at some stage we simply have to decide that we have enough of it to base our decisions on.

And this happens all the time in the most dangerous industry in the world: pharmaceuticals.  The article holds lessons for us all about how we make assumptions constantly in our private, business and social lives.

I have extracted what I believe to be the important constituents from his article:

On November 30, 2006, executives at Pfizer - the largest pharmaceutical company in the world - held a meeting with investors at the firm's research centre in Groton, Connecticut.  Jeff Kindler, the then CEO, began the presentation with an upbeat assessment of the company's efforts to bring new drugs to market.  He cited "exciting approaches" to the treatment of Alzheimer's disease, fibromyalgia and arthritis.  But Kindler was most excited about a new drug called torcetrapib, which had recently entered Phase III clinical trials, the last step before filing for approval.  He confidently declared that it would be "one of the most important compounds of our generation".  Kindler told investors that, by the second half of 2008, Pfizer would begin applying for approval from the US Food and Drug Administration (FDA).  The success of the drug seemed a sure thing.  And then, just two days later, on December 2, 2006, Pfizer issued a stunning announcement: the torcetrapib Phase III clinical trial was being terminated.  Although the compound was supposed to prevent heart disease, it was actually triggering higher rates of chest pain and heart failure and a 60% increase in overall mortality.  The drug appeared to be killing people.  That week, Pfizer's value plummeted by $21 billion (£14 billion).

The story of torcetrapib is one of mistaken causation.  Pfizer was operating on the assumption that raising levels of HDL cholesterol and lowering LDL would lead to a predictable outcome: improved cardiovascular health.  Less arterial plaque.  Cleaner pipes.  But that didn't happen.  (According to a recent analysis, more than 40% of drugs fail Phase III clinical trials.)

The problem is that this kind of causal assumption is a strange kind of knowledge.  This was first pointed out by David Hume, the 18th-century Scottish philosopher.

He realised that, although people talk about causes as if they are real facts - tangible things that can be discovered - they're actually not at all factual.  Instead, Hume said, every cause is just a slippery story, a catchy conjecture, a "lively conception produced by habit".  When an apple falls from a tree, the cause is obvious: gravity.  Hume's sceptical insight was that we don't see gravity - we see only an object tugged towards earth.  We look at X and then at Y, and invent a story about what happened in between.  We can measure facts, but a cause is not a fact - it's a fiction that helps us make sense of facts.

The truth is, our stories about causation are shadowed by all sorts of mental short cuts.  Most of the time, these work well enough.  They allow us to discover the law of gravity and to design wondrous technologies.  However, when it comes to reasoning about highly complex systems - say, the human body - these short cuts go from being slickly efficient to outright misleading.

Consider a set of classic experiments designed by the Belgian psychologist Albert Michotte, first conducted in the 1940s.

His research featured a series of short films about a blue ball and a red ball.  In the first film, the red ball races across the screen, touches the blue ball and then stops.  The blue ball, meanwhile, begins moving in the same basic direction as the red ball.  When Michotte asked people to describe the film, they automatically lapsed into the language of causation: the red ball hit the blue ball, which caused it to move.  This is known as the launching effect, and it's a universal property of visual perception.  Although there was nothing causal in the two-second film - it was just a montage of animated images - people couldn't help but tell a story about what had happened.  They had translated their perceptions into causal beliefs.  Michotte would go on to conduct more than 100 of these studies, manipulating the films in various ways.

http://www.youtube.com/watch?v=e_jKNlC2YKo

There are two lessons to be learned from these experiments.  The first is that our theories about a particular cause and effect are inherently perceptual, infected by all the sensory cheats of vision.  Hume was right that causes are never seen, only inferred, but the truth is we can't tell the difference.  And so we look at moving balls and see causes: a melodrama of taps and collisions, of chasing and fleeing.  The second lesson is that causal explanations are oversimplifications.  This is what makes them useful - they help us grasp the world at a glance.

The article is far too long for me to include in full, and I have not been able to find it online either.  However, I think I have captured its main message.

And the question I pose to you is: what assumptions are you making today that are based on incorrect data, or not enough data, or simply on having perceived the information in a certain way?  Is the red ball really chasing the blue ball, or are the two just moving independently of each other?

And then there is the other old saying: "Perception is Reality"

Success!