Axiom: We are drawn to the Novel and the Negative

A quick reminder before we start, because it’s been a while. An axiom is “a statement or proposition which is regarded as being established, accepted, or self-evidently true”.

Normally, in these axiom posts, I would go off and link to articles that give evidence for, or against, the hypothesis, but I’m struggling this time around.

To be absolutely honest, I’ve no idea where this statement came from, and I’ve really struggled to find anything similar to this phrase anywhere else. I do, however, have lots of evidence to support each part of the proposition; where I’m struggling, a bit, is in joining them together in a single statement.

Let me illustrate and you can decide whether I’ve stepped from proposition into fantasy by taking the parts of this axiom separately to start with.

The second part of this axiom highlights our attraction to the negative. I’m not sure I need to describe this given what we’ve been through over the last year. How many of us have found ourselves hanging on for the daily negative news as the pandemic continued? We know it’s not doing us any good, but for some reason we keep watching. This is where our old friend bias comes in – in this case Negativity Bias.

Negativity Bias describes our tendency to focus on negative things over positive ones. This is our brain trying to protect us, keeping us alert to the danger around us, looking out for those scary beasts lurking in hidden places. Not many of us are expecting a fearsome man-eating cat as part of our normal day, but our brain doesn’t know that.

Moving on to the first part. One of our super-powers as humans is our ability to pattern-match; we see patterns everywhere, even when they aren’t there. If we see a series of numbers, we instinctively continue the pattern – try reading 1, 2, 3, 4 without thinking 5. The flip side of this ability is that we are fascinated by the things that don’t match the pattern, the novel things. Why are we so fascinated by Pi? It’s partly because, as far as we know, it’s uniquely novel – it doesn’t fit any pattern. People who are perfect spellers (not me) struggle to read something with spelling mistakes. This is not because they can’t understand the meaning of what is written, but because the mismatch from the pattern is so distracting. Where I struggle with reading is in the use of double spacing 😉.

Put these two things together and you have a fertile recipe for all sorts of behaviours.

Why do people buy into conspiracy theories? Do they fit the pattern of novel and negative? So, so many times.

Why does certain gossip travel faster and wider than other pieces of information? Is it the news about someone’s unexpected downfall or failure? Absolutely novel and negative.

Working, as I do, in technology, you’ll find that people love to talk about all the technical difficulties on a project. They particularly like to discuss the high-profile and the unexpected issues – novel and negative. These difficulties often dominate the energy of the team way beyond their significance to the overall outcome. The attention that the novel and negative demands saps the team’s energy from other activities.

Why does clickbait work? So often it’s because the title is constructed to suggest something novel and negative: “ABC Returns from Holiday, Neighbour Does This.” Accompany this with a negative picture and you have a winning formula.

Why are there so many news sources, and why are so many of them so awful? It’s a constant stream of the novel and negative, and we are hooked.

Look at any list of best-sellers and you’ll see them covered in the negative-and-novel – crime, biography, thriller, horror.

You’ll see these negative-and-novel responses everywhere once you start looking for them, but what are we to do about it? I return to a quote I’ve used before on this site:

Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom.

Viktor E. Frankl

If we recognise the stimuli in our lives, and the likely background to them, we are, in my experience, more likely to recognise where the gap is and use our power to choose our response. If we are looking out for those tricky novel-and-negative stimuli there is an increased chance we will treat them for what they are.

I’ve not delved into the impact that these stimuli have on us, but I have some anecdotal evidence that it’s not good. Like many people, I went on an enforced news fast for a period to create a separation between myself and the stimuli, and I felt so much better. In my work context I’ve tried to look at issues from the perspective of their overall impact and not to get sucked into worrying about the latest panic. I’m finding that doing this helps me to focus on the important things.

Just because our brain thinks it’s important doesn’t mean that we have to pay attention to it, and in this case it’s probably better that we learn to filter out the novel and the negative.

Header Image: This is Bassenthwaite Lake just before a lovely evening swim.

Axiom: “You can’t teach an old dog new tricks” – Does age make you resistant to change?

For those of you not familiar with this saying it’s primarily referring to a reluctance to change that comes from old age. In other words, the older you get, the more resistant to change you are.

“You can’t teach an old dog new tricks” is generally referred to as an idiom, meaning:

a group of words established by usage as having a meaning not deducible from those of the individual words.

But in practice many people treat it as an axiom:

a statement or proposition which is regarded as being established, accepted, or self-evidently true.

A conversation with a colleague got me thinking: is there truth in the saying, or is it just a phrase that we’ve all assumed to be true and embedded in our attitudes towards older people?

So I thought I would go on a journey of discovery because if it is true there ought to be good evidence to support such a strong statement.

My first thought was to try and define the age at which you become an “old dog”. The “old dogs” idiom was likely first published in a book of proverbs by John Heywood in 1546, which got me wondering about how long people lived in 1546. The earliest life expectancy figures I could find for England and Wales were for 1851 when, on average, women lived to 42.2 and men only to 40.2. Assuming that in 1546 it was something similar, an “old dog” would be anyone older than 35 perhaps, but definitely 40? Or perhaps I’m messing with statistics a bit too much and the average isn’t such a great indicator, but it’s enough to get you thinking.
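The average is especially misleading here because life expectancy at birth was dragged down by very high infant mortality; someone who survived childhood could expect to live well beyond 40. A toy sketch in Python (the figures are entirely made up, just to show the effect):

```python
# Toy illustration: high infant mortality drags the mean down,
# even though surviving adults routinely live far beyond it.
ages_at_death = [2] * 30 + [65] * 70  # invented: 30% die in infancy, 70% live to 65

mean_age = sum(ages_at_death) / len(ages_at_death)
median_age = sorted(ages_at_death)[len(ages_at_death) // 2]

print(f"mean age at death:   {mean_age:.1f}")  # 46.1 - the headline "life expectancy"
print(f"median age at death: {median_age}")    # 65 - a better guide to an adult lifespan
```

So an “old dog” in 1546 was probably rather older than the raw average suggests.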

Even without a clear definition of what constitutes an “old dog”, I started my search for evidence. I was particularly hoping to find evidence about resistance to change in the workforce.

Can’t Teach Old Dogs: 0
Can Teach Old Dogs: 1

Starting with a search for “resistance to change” age, I was presented with a study from 2013 entitled “Age, resistance to change, and job performance” by Florian Kunze, Stephan Boehm and Heike Bruch. They investigated the correlation of resistance to change (RTC) with age, but also looked at the correlation with tenure in a role and job status (blue collar v white collar). This was their conclusion:

Contrary to common stereotypes, employee age is negatively related to RTC. Tenure and occupational status are further identified as boundary conditions for this relationship.

Age, resistance to change, and job performance

Just to be clear here, when it says that employee age is negatively related, it means that older people tend to have a lower resistance to change. Within the report the relation isn’t huge, but it’s there all the same, and it definitely doesn’t support the axiom. Someone’s job status and their tenure also have an impact on their RTC, but these correlate in the stereotypical way: a lower job status creates a higher RTC, as does an extended period in a role.

Can’t Teach Old Dogs: 1
Can Teach Old Dogs: 1

But that’s just one study, so I continued my search and ended up at the Sloan Center on Aging & Work. They published a survey in 2008 which concluded:

“late-career employees were perceived to be the most resistant to change (41%), reluctant to try new technologies (34%), and difficult to train (18%), according to the States as Employers-of-Choice survey (Fall 2008).”

Older Workers Seen as More Loyal But More Resistant to Change

So there’s certainly a perception that late-career employees are resistant to change. Treating surveys as evidence is always tricky: you have to look at the actual questions and make judgements about whether the perceptions being highlighted are genuine. You also need to look at the cohort of people who were surveyed in order to understand whether there may be bias in the data. I’ve not had the chance to do this, so I’ll leave the information here as potential evidence for the axiom. Another challenge with this survey is the term late-career employees; there are bound to be more late-career employees who have had a long tenure than early-career employees, because you have to have been around for a while to have enjoyed a long tenure, so some of this perception may simply be the challenge of long tenure.

So that’s one report that says that age isn’t the issue and one that says that people perceive that late-career employees are resistant to change. Let’s continue our searching.

Can’t Teach Old Dogs: 1
Can Teach Old Dogs: 2

Another survey? This time looking at people in government organisations and snappily entitled “An Investigation of the Difference in the Impact of Demographic Variables on Employees’ Resistance to Organizational Change in Government Organizations of Khorasan Razavi” (Khorasan Razavi is a region in Iran).

The aim of this study was to investigate the difference in the impact of demographic variables, including age, gender and level of education on employees’ resistance to organizational change. According to the results of Student’s t-test, the mean of variables in the groups of men and women is equal and there is no difference, thus gender has no significant impact on employees’ resistance to change. Investigating the results of the correlation test indicated that since the significance level is greater than the confidence level (0.05), there is no correlation between the variables of age and resistance to change. In the following activity, the individuals were categorized into four groups, including under 30, 30 to 40, 40 to 50 and over 50 years old. The results of the analysis of variance test indicate that the significance level is greater than the confidence level, thus these four groups are the same. According to the results of Duncan test, there is no difference between these four groups, thus employees’ age has no significant impact on their resistance to change.

An Investigation of the Difference in the Impact of Demographic Variables on Employees’ Resistance to Organizational Change in Government Organizations of Khorasan Razavi, 2016

This is a relatively small study and used a questionnaire technique, so it can’t be considered conclusive, but it is another piece of evidence for the “age has no impact” side.
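For the curious, here’s a minimal sketch of the kind of tests the study describes – a correlation test between age and a resistance-to-change (RTC) score, and a one-way analysis of variance across the four age bands. The data below is synthetic and of my own invention; only the shape of the analysis follows the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
age = rng.integers(20, 65, size=n)  # synthetic ages
rtc = rng.normal(3.0, 0.8, size=n)  # synthetic RTC scores, generated independently of age

# Correlation test: is age related to resistance to change?
r, p_corr = stats.pearsonr(age, rtc)
print(f"correlation r={r:.3f}, p={p_corr:.3f}")  # p above 0.05 would mean no significant link

# One-way ANOVA across the study's four age bands
bands = [rtc[age < 30], rtc[(age >= 30) & (age < 40)],
         rtc[(age >= 40) & (age < 50)], rtc[age >= 50]]
f, p_anova = stats.f_oneway(*bands)
print(f"ANOVA F={f:.2f}, p={p_anova:.3f}")  # p above 0.05 would mean the groups look alike
```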

Can’t Teach Old Dogs: 2
Can Teach Old Dogs: 2

Where to next? Find another study? There’s this one: “Impact of Age on Employee Resistance to Change. A Case Study Cotton Company (COTTCO) in Zimbabwe”. They surveyed 60 employees and concluded that age was a factor. They also highlighted that other factors impacted this conclusion, such as the openness of the organisation.

Conclusion?

That makes it 2 for and 2 against, and perhaps that also makes it time to stop. Is this simple scoring mechanism sufficient? The first study is by far the largest, with 15,243 participants, and should carry more weight than the COTTCO one with 60 participants, but they weren’t asking the same questions so it’s not as simple as that. What can we conclude in this confusing landscape? There’s enough evidence to question the validity of the axiom, and to question the use of age as a reason for resistance to change.

What about the other factors?

Whilst doing this research I was most struck by the idea of the other factors, particularly length of tenure, openness and role status. Whether or not someone becomes resistant to change as they get older would be a difficult thing to do anything about; if it’s a biological condition (and I’m not saying it is) then it would be very difficult to change. If, however, the perception that late-career employees are resistant to change is primarily driven by factors other than age, then organisations should take that very seriously indeed; these are things that organisations can, and probably should, change.

Organisations need to ask themselves whether they are creating, for themselves, individuals who are resistant to change and, if they are, what the cost of that conditioning is. Whilst I doubt that organisations are consciously creating people resistant to change, they are creating organisational environments where that is the result.

Header Image: These are The Kelpies in Falkirk Scotland, taken on a recent visit. I left a few people in the picture so you could get an idea of the scale.

Axiom: 4-to-1 – Compliment-to-Criticism Ratio

Is there a correct compliment to criticism ratio?

I’ve carried around the ratio of 4-to-1 for a long while now, but never really investigated its origins, or whether it has any basis in fact.

It’s an axiom and hence feels about right, but is it too simplistic? Why 4-to-1? So off I went to do a bit of research.

It turns out that the axiom has an interesting history. I’m going to keep it short, Wikipedia has a longer chronology.

Our brief history begins in 2005, when Marcial Losada and Barbara Fredrickson published a paper in American Psychologist called “Positive affect and the complex dynamics of human flourishing” in which they claimed that the critical ratio of positive to negative affect was exactly 2.9013.

So not 4-to-1, ah well.

Barbara Fredrickson went on to write a book in 2009 titled: Positivity: Top-Notch Research Reveals the 3 to 1 Ratio That Will Change Your Life. In the book she wrote:

“Just as zero degrees Celsius is a special number in thermodynamics, the 3-to-1 positivity ratio may well be a magic number in human psychology.”

The idea of a positivity ratio became popular and entered mainstream thinking, taking on names like the Losada ratio, the Losada line and the Critical Positivity Ratio. I’m not sure when I picked up the idea of a positivity ratio, but I suspect it would be around the 2009, 2010 time-frame.

Then in 2013 Nick Brown, a graduate student, became suspicious of the maths in the study. Working with Alan Sokal and Harris Friedman, Brown reanalysed the data in the original study and found “numerous fundamental conceptual and mathematical errors”. This rendered the claimed ratio completely invalid, leading to a formal retraction of the mathematical elements of the study, including the critical positivity ratio of 2.9013-to-1.

So not only did I get the wrong ratio, it turns out that the ratio is mathematically invalid anyway.

This is where axioms get interesting: scientifically, the idea of a 3-to-1 ratio of positivity is rubbish, but there’s something about it that keeps the idea living on. Instinctively we feel that it takes a bucket-load more positivity to counteract a small amount of negativity. We know that we hear a criticism much louder than a compliment.

We only have to think about it a little while, though, to realise that a single ratio is a massive oversimplification of far more sophisticated interactions. As we interact with people, one criticism can be nothing like another: imagine the difference between a criticism from a friend and one from a stranger. The same is also true for compliments. Thinking on a different dimension, we know that a whole mountain of compliments about trivialities is not going to outweigh one character-impacting criticism.

Perhaps, worst of all, though, is no feedback at all?

Cognitive Bias: Planning Fallacy

In the list of cognitive biases that I highlighted last week, one that intrigued me was the Planning Fallacy.

I suspect that anyone who has been involved in any form of project has seen this at work. You look at the project, build a plan, and come to a view of how long it’s going to take. You’ve done this type of activity before and should know how long it takes. Within days, though, it’s clear that the plan is not going to work and that time is not on your side; any contingency in the plan looks like a necessity, and help from a time-lord would be welcome. You’ve just been caught in the Planning Fallacy.

The same also applies to cost estimates and to our ability to estimate the benefits of a project. The project management triangle tells us that we can choose two of cost, scope and schedule; the reality is that we often get all three wrong.

Individuals and organisations get caught out in the most spectacular fashion, but it would be too easy to attribute every project overrun to this one bias – remember there are over 160 biases to choose from.

I’ve been caught in this one so many times that I now have a rule: whatever I plan the duration to be I double it; even then I still get caught out.
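That rule of thumb is also easy to make a little more personal: derive the multiplier from your own record of planned-versus-actual durations rather than assuming two is the right number. A toy sketch (my own construction with invented figures, not a method from the literature):

```python
def personal_multiplier(history: list[tuple[float, float]]) -> float:
    """Average overrun ratio from past (planned, actual) duration pairs."""
    return sum(actual / planned for planned, actual in history) / len(history)

# Invented history: (planned, actual) durations in days.
past_projects = [(10, 18), (5, 12), (20, 35)]

m = personal_multiplier(past_projects)
print(f"personal multiplier: {m:.2f}")             # ~1.98 - close to 'double it'
print(f"buffered 10-day plan: {10 * m:.1f} days")  # ~19.8
```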

Do you have an approach for overcoming this bias?

Here’s Daniel Kahneman, one of the people who came up with the idea: The Real Reason Projects Always Take Longer Than You Think via @Inc

Axiom: People join companies, but leave managers

I’ve had reason to use this phrase a few times recently, but it occurred to me that I didn’t really know where it had come from.


Like many axioms it feels correct, but does it really work out in practice? More specifically; does it work out in practice today?

In 2013, in the age of the Free Agent Nation, what does it mean to leave a manager, or, perhaps more interesting, what does it mean to join a company?

Doing a bit of research in this area it looks like the phrase became popular from 1998/99 on the basis of an article published by Gallup and the popular management book First, Break All the Rules by Marcus Buckingham and Curt W. Coffman.

The Gallup article is titled: How Managers Trump Companies – People join companies, but leave managers

It concludes like this:

An employee may join Disney or GE or Time Warner because she is lured by their generous benefits package and their reputation for valuing employees. But it is her relationship with her immediate manager that will determine how long she stays and how productive she is while she is there. Michael Eisner, Jack Welch, Gerald Levin, and all the goodwill in the world can only do so much. In the end, these questions tell us that, from the employee’s perspective, managers trump companies.

The book – First, Break All the Rules – is still a very popular management book and has been used as a source of training in all sorts of organisations.

The basis of the book is a lot of research undertaken by Gallup through their world-renowned ability to find information through surveys. It’s this book that is the basis for the Gallup Q12 approach, which utilises 12 questions to ascertain the level of employee engagement and, from that, the organisation’s performance.

In the book it states that “the manager – not pay, benefits, perks or a charismatic corporate leader – was the critical player in building a strong workplace”, which doesn’t quite roll off the tongue like people join companies, but leave managers, but it makes the same point.

The fundamental point is that managers can make or break your organisation.

Has the world moved on since 1998/9 when the book and article were written?

On one side of the equation it looks like things haven’t changed much at all. According to Gallup, the findings still hold true for the organisations that they work with. The last set of research, published in 2012, states that the correlation between engaged employees and productive workplaces continues. If that correlation is true then people will choose to stay at organisations where they are engaged in meaningful work by good managers.

There’s another side to the equation, though: are people still joining companies? Do people still want to be employees?

In the UK, at least, there’s been quite a shift in employment. The following chart comes from a Department for Business Innovation and Skills report Business Population Estimates for the UK and Regions 2012:

[Chart: Business Growth by Size]

This chart, and the report, show that the number of sole-traders and self-employed businesses (shown as businesses without employees) has grown massively over the last 10 years, while larger businesses (businesses with 250 or more employees) are down significantly. These businesses with no employees now account for nearly 75% of all businesses and provide employment for nearly four million people. While 9.8 million people work in companies of more than 250 employees, over 14 million work in no-employee, small and medium-sized businesses (there are also millions more people employed in the public sector).

So while it can be said, with a reasonable level of confidence, that people leave companies because of poor management, it’s no longer clear that people choose to join companies in anything like the volume that they used to.

So I think I’ll keep using this axiom, but it looks like it’s going to get less relevant as the make-up of the workforce changes.

Axiom: The 10X Employee

One of the characteristics of an axiom is that it’s obviously true and as such you rarely question it.

I’ve subscribed to the view that some people are 10 times more productive than others for a long time – it has been obviously true.

As I look around the place where I work I can see that some people produce wildly more than others.

I’ve also worked on many projects where I’ve seen people who can clear the workload at an astonishing pace; they are obviously, noticeably more productive.

I was reminded of this axiom recently while reading a couple of articles by Venkatesh Rao on Developeronomics:

At the centre of the debate being had here is the idea of the 10x engineer:

The thing is, software talent is extraordinarily nonlinear. It even has a name: the 10x engineer (the colloquial idea, originally due to Frederick Brooks, that a good programmer isn’t just marginally more productive than an average one, but an order of magnitude more productive). In software, leverage increases exponentially with expertise due to the very nature of the technology.

While other domains exhibit 10x dynamics, nowhere is it as dominant as in software. What’s more, while other industries have come up with systems to (say) systematically use mediocre chemists or accountants in highly leveraged ways, the software industry hasn’t. It’s still a kind of black magic.

One of the reactions comes from Larry O’Brien at knowing.net, describing the 10X engineer like this:

This is folklore, not science, and it is not the view of people who actually study the industry.

Professional talent does vary, but there is not a shred of evidence that the best professional developers are an order of magnitude more productive than median developers at any timescale, much less on a meaningful timescale such as that of a product release cycle. There is abundant evidence that this is not the case: the most obvious being that there are no companies, at any scale, that demonstrate order-of-magnitude better-than-median productivity in delivering software products. There are companies that deliver updates at a higher cadence and of a higher quality than their competitors, but not 10x median. The competitive benefits of such productivity would be overwhelming in any industry where software was important (i.e., any industry); there is virtually no chance that such an astonishing achievement would go unremarked and unexamined.

In another article from 2008 Larry O’Brien gets into the specifics of programmer productivity:

That incompetents manage to stay in the profession is a lot less fun than a secret society of magical programmers, but the (sparse) data seem consistent in saying that while individuals vary significantly, the “average above-average” programmer will be only a small multiple (perhaps around three times) faster than the “average below-average” developer (see, for instance, Lutz Prechelt’s work at citeseer.ist.psu.edu/265148.html).

So there would appear to be some disagreement on this axiom, which is precisely why I started this series – how many of my axioms are really just nice ideas?

One of the problems with axioms is working out where I first came across them, and this one is proving difficult to remember. I suspect that it comes from my old friends Tom DeMarco and Timothy Lister, writing in Peopleware:

Three rules of thumb seem to apply whenever you measure variations in performance over a sample of individuals:

  • Count on the best people outperforming the worst by about 10:1.
  • Count on the best performer being about 2.5 times better than the median performer.
  • Count on the half that are better-than-median performers out-doing the other half by more than 2:1.

Peopleware: Individual Differences

But where did this come from? DeMarco and Lister explain: “[this diagram], for example, is a composite of the findings from three different sources on the extent of variations among individuals”. So it comes from research undertaken around 1984 on software programmers.
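Out of interest, the three rules of thumb are easy to sanity-check against a skewed productivity distribution. Here’s a minimal sketch that assumes productivity is lognormally distributed – the lognormal shape is my assumption, not DeMarco and Lister’s – just to show that all three ratios can coexist:

```python
import numpy as np

rng = np.random.default_rng(0)
# Assumed lognormal spread of productivity (the shape and sigma are my choices).
productivity = rng.lognormal(mean=0.0, sigma=0.6, size=100_000)

best, worst = np.percentile(productivity, [97.5, 2.5])  # stand-ins for "best" and "worst"
median = np.median(productivity)
upper = productivity[productivity > median].mean()
lower = productivity[productivity <= median].mean()

print(f"best vs worst:        {best / worst:.1f}:1")   # lands in the neighbourhood of 10:1
print(f"best vs median:       {best / median:.1f}:1")  # a small multiple of the median
print(f"better vs worse half: {upper / lower:.1f}:1")  # comfortably more than 2:1
```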

You may have noticed that I was vague at the beginning of the post about who the 10X people were being compared with – the median, the worst? It was deliberate, because I didn’t know; the axiom had become degraded over time and I couldn’t be specific. I was confused, and after doing some digging, I don’t think I’m the only one.

DeMarco and Lister point to and reference some real research for 10X being between worst and best, which seems like a safe place to be. Everyone also seems to agree that individuals vary significantly, even if the size of the multiple is disputed, so that seems like a safe place to be too.

I feel like I’m having to constrain my curiosity a bit because there would appear to be so much more to learn but my time is limited. So I’m sticking to the safe areas.

Whatever the true axiom, we all need to understand that there is a significant difference in people’s productivity (however you might be measuring productivity), which makes it vitally important that we get the right people doing the right things. But it’s also important that we understand what our own 10X place is, seek to optimise our time there, and try to remove the constraints that are keeping us from getting there (he writes after a day of endless interruptions and chats resulting in very little personal productivity 🙂).

Axiom: Interruptions cost 20 minutes

You’re sitting at your desk working away focussing in on a problem that’s been on your list to resolve for weeks.

You start to uncover the various layers of the problem, ruling some things out, adding new things in.

This isn’t a simple problem, it’s a bit complicated and you feel a bit like you are Poirot unravelling a mystery. You’re starting to build a real sense of achievement.

You’re not sure how long you’ve been working on this problem but just at the point you are starting to see some light at the end of the tunnel your boss walks in and asks why, yet again, you haven’t provided your weekly status report. You explain that you’ve been very busy doing real work and didn’t think anyone read the status reports anyway.

After a two-minute conversation you return to your problem, but you’ve lost the thread – “where was I again?”. You curse your boss. You curse yourself for coming into the office today.

You start all over again trying to resolve this knotty little problem. It takes you an age to regain the concentration that you had.

This is such a common problem that we accept it as normal. People have even adapted their working habits to try and carve out some time to get some work done.

The interruptions abound – email, phones, instant messaging, social media, people, meetings. But what is the cost of those interruptions?

My axiom has always been that the cost of an interruption is 20 minutes.

I thought that I’d got the 20 minute part from a book called Peopleware by Tom DeMarco and Timothy Lister but I’ve recently been rereading it and actually it says this:

During single-minded work time, people are ideally in a state that psychologists call flow. Flow is a condition of deep, nearly meditative involvement…

Not all work roles require that you attain a state of flow in order to be productive, but to anyone involved in engineering, design, development, writing, or like tasks, flow is a must. These are high-momentum tasks. It’s only when you’re in flow that the work goes well.

Unfortunately, you can’t turn on flow like a switch. It takes a slow descent into the subject, requires fifteen minutes or more of concentration before the state is locked in. During this immersion period, you are particularly sensitive to noise and interruption. A disruptive environment can make it difficult or impossible to attain flow.

So where did I get 20 minutes from? Perhaps it’s just one of those things that changes in your mind over time. Not that it’s really that important: the significant factor here is that an interruption costs you significantly more than the length of the disturbance.
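The arithmetic is sobering even with conservative numbers. A toy cost model (the figures are illustrative; only the fifteen-minute re-immersion time comes from Peopleware):

```python
# Each interruption costs its own duration plus the re-immersion time
# needed to sink back into flow.
REIMMERSION_MINUTES = 15  # Peopleware's "fifteen minutes or more"

def daily_flow_lost(interruptions: int, avg_minutes: float = 2.0) -> float:
    """Minutes of productive flow lost per day to interruptions."""
    return interruptions * (avg_minutes + REIMMERSION_MINUTES)

print(daily_flow_lost(6))  # 102.0 - six two-minute chats cost over an hour and a half
```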

What Peopleware outlines is a theory called flow and the real question, therefore, is whether this theory is really the way our minds work.

The theory of flow appears to have been popularised by Mihaly Csikszentmihalyi (no, I don’t know how to say it either) in the 1990s, based on research from the 1960s and 1970s. The ideas of being in flow, in the zone or in the groove have been around for much longer than that.

There appears to be a great deal of research which, for the most part, validates the theory outlined by Csikszentmihalyi. For once, the article on Wikipedia appears to be reasonably authoritative and well referenced.

So I’m reasonably happy that the axiom is true, even if it’s not specifically 20 minutes, but we all work in the real world. How do we work in a way that minimises the impact?

The first part of resolving most problems is recognising that the problem exists; many people don’t.

The second part of overcoming a problem is to recognise the part that we are in control of. I don’t think I’m unique in being able to generate my own set of interruptions. There are also things that I can do to manage many of the disruptions.

There are all sorts of schemes that people use and I don’t think that there is one that suits everyone. The following mind map (not my own) reflects some of the things that I do:

Axiom: A Picture is Worth a Thousand Words

I really like pictures.

The most visited page on this site is one about Rich Pictures.

I regularly pick out interesting Infographics.

One of my favourite books at home is called Information is Beautiful which is named after the popular website.

Why? Because “a picture is worth a thousand words”, or at least that’s the axiom I tell myself.

I wonder, though, whether this is really true.

If it were really true we’d spend much more time drawing, and far less time writing words. Yet writing words is what we do and do a lot (much like I’m doing now).

Many think that the saying is ancient and oriental, but the evidence for that is somewhat sketchy, at least for the literal translation. What can be said is that it was used in the 1920s, became popular in the 1940s and continues to be a preferred phrase. The variation on this, “A picture speaks a thousand words”, didn’t come until the 1970s.


Just because something is popular, and just because it appears to be true doesn’t mean that it is true.

In order to assess the validity of the axiom I set off down the scientific route. What research was there for the value of diagrams?

If it were to be true then there would be some clear evidence for a picture being a much better way of communicating than a set of either spoken or written words.

I was always taught that there were three types of learners: visual learners, auditory (listening) learners and kinaesthetic (doing) learners. So I wondered whether there might be some mileage in the research done into that particular subject. If visual learners are stronger than auditory learners then it would add weight to the premise. But it turns out that learning styles might be one of my anti-axioms. So I gave that up as a dead-end.

My next port of call was to think of one particular diagram type and see whether there was any science behind the value of a particular technique.

Most of the pictures I draw are really diagrams with the purpose of communicating something.

As a fan of mind maps as a diagramming technique I wondered whether there was any clear evidence of their value. Back in 2006 Philip Beadle wrote an article in The Guardian on this subject and the use of mind maps in education:

The popular science bit goes like this. Your brain has two hemispheres, left and right. The left is the organised swot who likes bright light, keeps his bedroom tidy and can tolerate sums. Your right hemisphere is your brain on drugs: the long-haired, creative type you don’t bring home to mother.

According to Buzan, orthodox forms of note-taking don’t stick in the head because they employ only the left brain, the swotty side, leaving our right brain, like many creative types, kicking its heels on the sofa, watching trash TV and waiting for a job offer that never comes. Ordinary note-taking, apparently, puts us into a “semi-hypnotic trance state”. Because it doesn’t fully reflect our patterns of thinking, it doesn’t aid recall efficiently. Buzan argues that using images taps into the brain’s key tool for storing memory, and that the process of creating a mind map uses both hemispheres.

The trouble is that lateralisation of brain function is scientific fallacy, and a lot of Buzan’s thoughts seem to rely on the old “we only use 10% of the neurons in our brain at one time” nonsense. He is selling to the bit of us that imagines we are potentially super-powered, probably psychic, hyper-intellectuals. There is a reason we only use 10% of our neurons at one time. If we used them all simultaneously we would not, in fact, be any cleverer. We would be dead, following a massive seizure.

He goes further:

As visual tools, mind maps have brilliant applications for display work. They appear to be more cognitive than colouring in a poster. And I think it is beyond doubt that using images helps recall. If this is the technique used by the memory men who can remember 20,000 different digits in sequence while drunk to the gills, then it’s got to be of use to the year 8 bottom set.

The problem is that visual ignoramuses, such as this writer, can’t think of that many pictures and end up drawing question marks where a frog should be.

Oh dear, another cul-de-sac. In researching the mind map, though, I did get to a small titbit of evidence, unfortunately from Wikipedia (not always the most reliable source):

Farrand, Hussain, and Hennessy (2002) found that spider diagrams (similar to concept maps) had a limited but significant impact on memory recall in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a 6% increase over baseline).

That’ll do for me for now, it’s not “a thousand words” but it’s good enough for my purposes.

Why am I comfortable with just a small amount of evidence? Because this is one of those axioms where it’s not only about scientific proof.

Thinking about pictures in their broadest sense, there are certainly pictures that would take more than a thousand words to describe.

There are pictures that communicate emotions in a way that words would struggle to portray.

There are diagrams which portray a simple truth in a way that words would muddle and dilute.

In these situations the picture is clearly worth a lot of words, but our words would all be different. The way I would describe an emotional picture would be different to the words you would use. So it’s not about the number of words, but the number of different words.

This little bit of research has got me thinking though.

How often do we draw a diagram thinking that everyone understands it, when we’re really excluding the “visual ignoramuses” (as Philip Beadle describes himself) or the “visually illiterate” (as others describe it)?

In order to communicate we need to embrace both visual literacy and linguistic literacy in a way that is accessible to the audience. I used to have a rule in documentation: “every diagram needs a description”. The PowerPoint age has taken us away from that a bit, and perhaps it’s time to re-establish it so that we can embrace both the visual and the literal.

I’m happy to keep this as an axiom, but I need to be a bit more careful about where I apply it.

Axioms: An Occasional Series

I’ve been thinking and reading quite a bit recently about axioms:

ax·i·om

[ak-see-uhm] noun

  1. a self-evident truth that requires no proof.
  2. a universally accepted principle or rule.
  3. Logic, Mathematics. a proposition that is assumed without proof for the sake of studying the consequences that follow from it.

As I think about the way that I approach things I realise that there are a set of axioms that I tend to work from, things that I think are self-evident. They’re normally sayings that I have in my head that shape the way I think about a situation. Some of them have been gleaned from my experience, some from my education but to be honest I don’t think I know where most of them have come from or why I think they are good principles.

I wonder how many of my personal axioms are really any good; just because I think they are universally accepted doesn’t mean that they are. So I’ve decided to put a few of them under the microscope by doing a bit of research into their validity. I plan to write honestly about what I’ve found, and hopefully I’ll uncover some things that are definitely true (as far as we understand it), but I’m also looking forward to finding some anti-axioms that are not true at all.

Now where to start?
