Peter Massey explains why everybody needs some maths and basic stats, in the sixth of nine articles exploring things he has found surprising in business.
We all know people who say they are rubbish at maths and yet can calculate backwards from 501 faster than you can throw the darts. Or tell you the price of the round of drinks is wrong without going near the till.
Maths can be useful in everyday situations, but it doesn’t need to be all in Greek letters, like it is in my long-neglected physics!
Literacy Is Expected
We expect reading and writing to a good level to be a basic skill.
But do we accept that we need to understand maths and statistics in the same way?
In work situations we expect literacy: people should be able to write to communicate, to clarify their ideas or findings, and to read and understand what someone else has written.
Why don’t we expect people to be equally literate in numbers?
Not just numerate, but able to take and probe understanding from someone else’s numbers, in a world of business cases, evidence-based decisions and arguments.
Sometimes what we’re shown is just mathematically wrong.
As Georgina Sturge, a statistician in the House of Commons Library, points out, so much of the data presented to us may be spurious, taken out of context, or drawn from unrepresentative samples. Or deliberately presented to back up a message, regardless of other data.
I’ve seen many situations where arguments have been spuriously based on statistically insignificant samples, on percentages of percentages, and on partial or selective data.
For example, most opinion polls sample around 1,000 people. Some conclusions may be valid, but how those 1,000 people are selected makes a big difference. And is a 1% or 2% movement a real change or a rounding error? There’s a simple calculator for sample sizes and error rates at calculator.net.
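As a rough check on that poll arithmetic, the standard margin-of-error formula for a sampled proportion can be sketched in a few lines (assuming a simple random sample and 95% confidence; the numbers are illustrative, not from any real poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion measured from n people."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical 1,000-person poll carries roughly a +/-3 point margin of error,
# so a 1% or 2% "movement" sits comfortably inside the noise.
moe = margin_of_error(1000)
print(f"+/-{moe:.1%}")  # → +/-3.1%
```

Note how slowly the error shrinks: quadrupling the sample to 4,000 people only halves the margin, which is why most polls stop around 1,000.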
Partial to Data
For example, drug company data on effectiveness suffers from two problems.
Companies only have to publish the data that supports their argument, not the data that doesn’t.
And the data doesn’t have to be compared against the do-nothing or placebo options, which may be just as good.
We do that all the time in business cases. We claim most customers want X or Y based on a sample size of diddly squat. We want to prove our hunch or our belief, so we are selective, ignoring contradictory data.
We don’t consider that we might have got the same improvement had we done nothing, or done something much simpler, because we confuse correlation with causation.
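The “we’d have improved anyway” trap is easy to simulate. This sketch uses purely hypothetical numbers: weekly complaint counts that are nothing but noise around a stable average. “Intervene” after the worst week, measure the next one, and things look better almost every time, with no cause at all:

```python
import random

random.seed(0)

def improved_after_worst_week(n_weeks=52):
    # Weekly complaint counts: pure noise around a stable mean of 100.
    weeks = [random.gauss(100, 15) for _ in range(n_weeks)]
    # Pick the worst week (excluding the last, so a following week exists)...
    worst = max(range(n_weeks - 1), key=lambda i: weeks[i])
    # ...and ask whether the week after it was better (fewer complaints).
    return weeks[worst + 1] < weeks[worst]

trials = 1000
improved = sum(improved_after_worst_week() for _ in range(trials))
print(f"{improved / trials:.0%} of the time things 'improved' with no intervention")
```

An extreme week is usually followed by a more ordinary one (regression to the mean), so whatever initiative was launched after the bad week gets the credit.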
There’s a quote popularised by Mark Twain: “There are three kinds of lies: lies, damned lies and statistics”. It behoves us all to educate ourselves to deal with all three.
As I write this, the paper carries a claim about burglaries solved and arrests made. Yet examining the arguments put forward, there is:
- No base number of burglaries or trend data across previous years.
- No definition of a burglary or burglaries reported or “related to burglaries”.
- No definition of solved vs. arrests vs. convictions.
- No data on number of burglaries per culprit or arrest.
- No assumptions are given or source data links provided.
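The missing base numbers are not a pedantic quibble. With purely hypothetical figures, a force could truthfully report a dramatic percentage rise while barely denting the problem:

```python
# Hypothetical figures only - the point is the arithmetic, not the data.
arrests_last_year = 40
arrests_this_year = 80
burglaries_reported = 10_000  # the base number the headline omits

rise = (arrests_this_year - arrests_last_year) / arrests_last_year
arrest_rate = arrests_this_year / burglaries_reported

print(f"arrests up {rise:.0%}")                         # → arrests up 100%
print(f"arrest rate: {arrest_rate:.1%} of burglaries")  # → arrest rate: 0.8% of burglaries
```

“Arrests up 100%” and “fewer than 1 in 100 burglaries led to an arrest” can describe exactly the same numbers; without the base, the percentage alone tells you almost nothing.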
This illustrates the laziness with which numbers are quoted in the press without challenge – but we are no better at work. Precise language and well-specified data are sadly missing from most board reports I read.
A lack of mathematical savvy leads to a common business problem: lazy readers who accept weak arguments, and lazy proposers who are never challenged on what they argue.
The OECD defines mathematical literacy as:
That’s quite a mouthful!
Some Skills to Practise
So what should you check in your own skills with numbers at work?
Some simple favourites are:
BBC Radio 4’s “More or Less” is great at debunking statements through scrutiny of the data and how it’s used, demonstrating the tactics and counter-arguments every week.
The presenter, the undercover economist Tim Harford, shares his top tips in his book “How to Make the World Add Up”.
Written by: Peter Massey at Budd