Every so often one of these people leaves my clinic after a period of rehabilitation:
“Thanks so much. You’ve made such a difference. I wish they’d sent me six years ago.”
This leaves me in a predicament. There feels to have been a genuine change, but I’ve been trained to be suspicious of treatment results. Regression to the mean has become a buzzword in healthcare. It predicts that if an extreme measurement is taken, the next measurement will tend to be nearer the mean, regardless of whether the initial measurement was above or below it. Interesting.
So should I dismiss the above effect? Regression to the mean is a statistical phenomenon, defined over population data. It has its uses, but in isolation it is reductionist. This is one concern with a purely statistical interpretation of complexity: the world is reduced to numbers for our ‘benefit’, to take out the human-ness and be left with ‘reality’ or metaphysicality (a dubious claim). Statistics is bound to describe the world in numbers only. The world in black and white. 2D. Only in complexity can we see the world in colour.
Where does it work best? A good example: get 100 students to sit a blinded multiple-choice test, then repeat it the next day. We expect the higher and lower scores to become less extreme, and those around the mean to diverge. Repeat the test UNblinded and the effect is much less pronounced, because the results are less random, less down to chance. Another example is simple objective data such as inherited height, which formed part of the original idea developed by Sir Francis Galton.
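The student-test example can be sketched as a small simulation. This is a hypothetical illustration, not real exam data: each student has a fixed underlying ability, and each sitting adds random day-to-day noise (the chance element of the blinded test). The noise level and score scale are assumptions chosen for clarity.

```python
import random

random.seed(1)

# Hypothetical sketch: 100 students, each with a true underlying ability.
abilities = [random.gauss(60, 10) for _ in range(100)]

def sit_test(abilities, noise_sd):
    """One sitting of the test: ability plus random day-to-day noise."""
    return [a + random.gauss(0, noise_sd) for a in abilities]

# Blinded test: guessing adds a lot of chance variation.
day1 = sit_test(abilities, noise_sd=15)
day2 = sit_test(abilities, noise_sd=15)

# Look at the 10 highest and 10 lowest scorers on day 1.
ranked = sorted(range(100), key=lambda i: day1[i])
top10, bottom10 = ranked[-10:], ranked[:10]
mean = lambda xs: sum(xs) / len(xs)

# The day-1 extremes drift back towards the group mean on day 2:
print(mean([day1[i] for i in top10]), "->", mean([day2[i] for i in top10]))
print(mean([day1[i] for i in bottom10]), "->", mean([day2[i] for i in bottom10]))
```

Rerunning `sit_test` with a much smaller `noise_sd` (the unblinded case, where scores mostly reflect ability) shows the same comparison with far less regression — exactly the point that less random data means less regression to the mean.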
Let’s start with terminology. It is easy to mistake regression for resolution. We know regression to the mean is not resolution because regression occurs in both directions, towards the mean: some get better, some get worse. That does not describe resolution. Resolution continues a tendency toward improvement across multiple readings. Regression to the mean does not; it fluctuates around its own mean.
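The distinction can be made concrete with two invented pain-score series (the numbers are illustrative assumptions, not clinical data): one series fluctuates around its own mean, the other keeps improving. A crude trend check separates them.

```python
# Hypothetical weekly pain scores (0-10) for two patients.
regression = [7, 4, 6, 3, 7, 5, 4, 6, 5, 7]   # bounces around its own mean, ~5.4
resolution = [7, 7, 6, 5, 5, 4, 3, 3, 2, 1]   # a sustained improvement

def trend(scores):
    """Crude trend check: second-half mean minus first-half mean."""
    half = len(scores) // 2
    first = sum(scores[:half]) / half
    second = sum(scores[half:]) / (len(scores) - half)
    return second - first

print(trend(regression))  # → 0.0: no sustained direction, just fluctuation
print(trend(resolution))  # clearly negative: scores keep falling
```

Multiple readings are the key: a single before/after pair cannot tell these two patterns apart.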
So, terminology clear, where do we go next? Well, we give treatments a rough time when they have an unknown mechanism, and rightly so: we want to know why and how. Regression to the mean works best with random, unorganised data; the less random the data, the less regression to the mean is observed. In this view there is no causal attribution, just random stochastic variation. In cases of simple characteristics like height, we know at least part of the process through the genetics of inheritance.
But what about pain? Do we know the mechanism of regression to the mean, or of resolution? Is it always the same? Is each resolution equal? Does the frequency of regression change? I often see people who have had a previous episode of pain and who found some resolution through avoidance strategies, behaviour adaptation and reduced daily effort. This is surely different from someone who resolved with good activity levels and fewer maladaptive beliefs. These qualitative differences are washed out by simple statistics.
Have we backed ourselves into a corner, using regression to the mean as a convenient take-down of alternative medicine? It is easy to disengage. A sort of cognitive dissonance: if it works in the short term it’s placebo; if it works in the long term it’s regression to the mean. But where does that leave us? Over-reliant on a statistical view of the world. (This is not an apologetic for alternative medicine, rather a call for a deeper understanding.)
Mechanisms could include subjective symptoms (like pain) being a byproduct of consciousness. Our consciousness is finite and in demand. This could lead to variation around a mean, with long-term outcomes interfered with by competing demands on consciousness: stress, anxiety, other health issues, life! We might provide temporary scaffolding with our treatments, but do we see life changes? This is an excellent example of how regression can actually help us to engage at a more human level. But we should be careful to investigate its mechanism; otherwise regression becomes a magic homogenising process.
The data is gathered at a population level and therefore tells us little about an individual. It relies on a mean. An average. A normal. It assumes each person equal regardless of context, their regression (or resolution) determined by pathology rather than person. Unable to compare to an alternative, we can’t know how they might fluctuate other than by using previous experience to inform us (a priori). This is frowned upon by certain groups of statisticians for risk of bias. They much prefer amalgamating a load of data from a population and then assuming each person, from the 1st to the 100th percentile on the continuum, is bang average (the mean). How accurate can this be? How wise is it not to use a priori reasoning? Maybe a discussion for a different day.
Statistics views the complexity of the world as disorganised, when in fact the social sciences generally accept that social reality is best viewed as organised complexity. It is obsessed with linear modelling because this makes life easy to interpret. But at what cost in understanding? Complex causality is superseded by natural variation. This leads to determinism.
If we are not careful, regression and resolution become examples of where useful data can reduce people to numbers. People lose out to ‘science’. Mean statistics cloak human variety. Depersonalised treatments approved in Randomised Controlled Trials assume a situational approach, i.e. that putting each individual through the same situation or context will achieve the best results. This seems to be a foundation for much of healthcare. Deterministic reasoning runs: “if we can determine the pathology and determine (through Randomised Controlled Trials) the most effective treatment, all our healthcare problems are solved!” Unfortunately this doesn’t seem to be the case. Factors outside the external context seem to matter greatly (i.e. the internal context: the person).

I have seen regression to the mean used to explain treatment effects: the person sees you at their worst, so they can only improve. This may be true of some private clinics, but often people arrive in clinics 4–6 weeks from onset, if we are lucky. So it is a misguided assumption to say we see them at their worst. Another call to put away the blankets and actually think. You know, like humans do.
Using a bit of deep human thinking (not robot processing), we can see the above case did not fit regression-to-the-mean principles, nor natural resolution after a six-year history. Clinically, regression to the mean does happen, but it is quite obvious: these are the people who fluctuate both ways with no real improvement. Resolution is a bit trickier. Our healthcare professions should be quick to note that we may influence the context or environment for progress, but we do not provide the ‘magic’ of resolution. Do away with blanket thinking.
Be more human. Be less robot.
Thanks for reading this far.
Only got 5 mins to look at one article? This is a brilliant piece written by a quantitative researcher in the social sciences explaining how quantitative approaches fail by themselves. Must read! http://bit.ly/quantfail
Another interesting piece on protocol v reasoning https://sfbayareaspt.wordpress.com/2016/02/01/protocols-vs-expertise/