Axiom: 4-to-1 – Compliment-to-Criticism Ratio

Is there a correct compliment to criticism ratio?

I’ve carried around the ratio of 4-to-1 for a long while now, but never really investigated its origins, or whether it has any basis in fact.

It’s an axiom and hence feels about right, but is it too simplistic? Why 4-to-1? So off I went to do a bit of research.

It turns out that the axiom has an interesting history. I’m going to keep it short; Wikipedia has a longer chronology.

Our brief history begins in 2005, when Marcial Losada and Barbara Fredrickson published a paper in American Psychologist called “Positive affect and the complex dynamics of human flourishing”, in which they claimed that the critical ratio of positive to negative affect was exactly 2.9013.

So not 4-to-1, ah well.

Barbara Fredrickson went on to write a book in 2009 titled: Positivity: Top-Notch Research Reveals the 3 to 1 Ratio That Will Change Your Life. In the book she wrote:

“Just as zero degrees Celsius is a special number in thermodynamics, the 3-to-1 positivity ratio may well be a magic number in human psychology.”

The idea of a positivity ratio became popular and entered mainstream thinking, taking on names like the Losada ratio, the Losada line and the Critical Positivity Ratio. I’m not sure when I picked up the idea of a positivity ratio, but I suspect it would have been around 2009 or 2010.

Then in 2013 Nick Brown, a graduate student, became suspicious of the maths in the study. Working with Alan Sokal and Harris Friedman, Brown reanalysed the original study and found “numerous fundamental conceptual and mathematical errors”. This rendered the claimed ratio completely invalid, leading to a formal retraction of the mathematical elements of the study, including the critical positivity ratio of 2.9013-to-1.

So not only did I get the wrong ratio, it turns out that the ratio is mathematically invalid anyway.

This is where axioms get interesting: scientifically, the idea of a 3-to-1 positivity ratio is rubbish, but there’s something about it that keeps the idea living on. Instinctively we feel that it takes a bucket load more positivity to counteract a small amount of negativity. We know that we hear a criticism much louder than a compliment.

We only have to think about it a little while, though, to realise that a ratio is a massive oversimplification of far more sophisticated interactions. As we interact with people, one criticism can be nothing like another: imagine the difference between a criticism from a friend and one from a stranger. The same is also true for compliments. Thinking on a different dimension, we know that a whole mountain of compliments about trivialities is not going to outweigh a character-impacting criticism.

Perhaps worst of all, though, is no feedback at all?

Cognitive Bias: Planning Fallacy

In the list of cognitive biases that I highlighted last week, one that intrigued me was the Planning Fallacy.

I suspect that anyone who has been involved in any form of project has seen this at work. You look at the project, build a plan and come to a view of how long it’s going to take. You’ve done this type of activity before and should know how long it takes. Within days, though, it’s clear that the plan is not going to work and that time is not on your side; any contingency in the plan looks like a necessity, and help from a Time Lord would be welcome. You’ve just been caught in the Planning Fallacy.

The same also applies to cost estimates and to our ability to estimate the benefits of a project. The project management triangle tells us that we can choose only two of cost, scope and schedule; the reality is that we often get all three wrong.

Individuals and organisations get caught out in the most spectacular fashion, but it would be too easy to attribute every project overrun to this one bias – remember, there are over 160 biases to choose from.

I’ve been caught in this one so many times that I now have a rule: whatever I plan the duration to be I double it; even then I still get caught out.
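For what it’s worth, here’s a minimal sketch of that kind of correction in code. The past-project figures are entirely made up for illustration, and deriving the multiplier from previous overruns, rather than always doubling, is just one way of applying the rule:

```python
# A minimal sketch of correcting an estimate for the planning fallacy.
# All of the figures below are hypothetical, purely for illustration.

past_projects = [
    # (planned weeks, actual weeks) for previous, similar projects
    (4, 7),
    (6, 13),
    (10, 18),
    (3, 8),
]

# How badly did past plans underestimate reality, on average?
overrun_factors = [actual / planned for planned, actual in past_projects]
average_overrun = sum(overrun_factors) / len(overrun_factors)

new_plan_weeks = 8
doubled = new_plan_weeks * 2                          # the blunt "double it" rule
history_adjusted = new_plan_weeks * average_overrun   # adjusted by past overruns

print(f"Average historical overrun: {average_overrun:.2f}x")
print(f"Plan: {new_plan_weeks} weeks, doubled: {doubled} weeks, "
      f"history-adjusted: {history_adjusted:.1f} weeks")
```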

Do you have an approach for overcoming this bias?

Here’s Daniel Kahneman, one of the people who came up with the idea:

[Embedded video: //cdnapi.kaltura.com/p/1034971/sp/103497100/embedIframeJs/uiconf_id/25459841/partner_id/1034971?iframeembed=true&playerId=responsive_kaltura_player&entry_id=1_ovu3s4jn&flashvars%5BstreamerType%5D=auto]

Axiom: People join companies, but leave managers

I’ve had reason to use this phrase a few times recently, but it occurred to me that I didn’t really know where it had come from.


Like many axioms it feels correct, but does it really work out in practice? More specifically, does it work out in practice today?

In 2013, in the age of the Free Agent Nation, what does it mean to leave a manager, or, perhaps more interestingly, what does it mean to join a company?

A bit of research suggests that the phrase became popular from 1998/99, on the basis of an article published by Gallup and the popular management book First, Break All the Rules by Marcus Buckingham and Curt W. Coffman.

The Gallup article is titled: How Managers Trump Companies – People join companies, but leave managers

It concludes like this:

An employee may join Disney or GE or Time Warner because she is lured by their generous benefits package and their reputation for valuing employees. But it is her relationship with her immediate manager that will determine how long she stays and how productive she is while she is there. Michael Eisner, Jack Welch, Gerald Levin, and all the goodwill in the world can only do so much. In the end, these questions tell us that, from the employee’s perspective, managers trump companies.

The book – First, Break All the Rules – is still a very popular management book and has been used as a source of training in all sorts of organisations.

The book is based on a large body of research undertaken by Gallup through their world-renowned survey work. It’s this book that forms the basis for the Gallup Q12 approach, which uses 12 questions to gauge the level of employee engagement and, from that, the organisation’s performance.

The book states that “the manager – not pay, benefits, perks or a charismatic corporate leader – was the critical player in building a strong workplace.” That doesn’t quite roll off the tongue like “people join companies, but leave managers”, but it makes the same point.

The fundamental point is that managers can make or break your organisation.

Has the world moved on since 1998/99, when the book and article were written?

On one side of the equation it looks like things haven’t changed much at all. According to Gallup, the findings still hold true for the organisations that they work with. The latest research, published in 2012, states that the correlation between engaged employees and productive workplaces continues. If that correlation is true, then people will choose to stay at organisations where they are engaged in meaningful work by good managers.

There’s another side to the equation, though: are people still joining companies? Do people still want to be employees?

In the UK, at least, there’s been quite a shift in employment. The following chart comes from a Department for Business, Innovation and Skills report, Business Population Estimates for the UK and Regions 2012:

[Chart: Business Growth by Size]

This chart, and the report, show that the number of sole traders and self-employed businesses (shown as businesses without employees) has grown massively over the last 10 years, while larger businesses (those with 250 or more employees) are down significantly. Businesses with no employees now account for nearly 75% of all businesses and provide employment for nearly four million people. While 9.8 million people work in companies with more than 250 employees, over 14 million work in no-employee, small and medium-sized businesses (there are also millions more people employed in the public sector).

So while it can be said, with a reasonable level of confidence, that people leave companies because of poor management, it’s no longer clear that people choose to join companies in anything like the volume that they used to.

So I think I’ll keep using this axiom, but it looks like it’s going to get less relevant as the make-up of the workforce changes.

Axiom: The 10X Employee

One of the characteristics of an axiom is that it’s obviously true and as such you rarely question it.

I’ve subscribed to the view that some people are 10 times more productive than others for a long time – it has been obviously true.

As I look around the place where I work I can see that some people produce wildly more than others.

I’ve also worked on many projects where I’ve seen people who can clear the workload at an astonishing pace; they are obviously, noticeably more productive.

I was reminded of this axiom recently while reading a couple of articles by Venkatesh Rao on Developeronomics:

At the centre of the debate is the idea of the 10x engineer:

The thing is, software talent is extraordinarily nonlinear. It even has a name: the 10x engineer (the colloquial idea, originally due to Frederick Brooks, that a good programmer isn’t just marginally more productive than an average one, but an order of magnitude more productive). In software, leverage increases exponentially with expertise due to the very nature of the technology.

While other domains exhibit 10x dynamics, nowhere is it as dominant as in software. What’s more, while other industries have come up with systems to (say) systematically use mediocre chemists or accountants in highly leveraged ways, the software industry hasn’t. It’s still a kind of black magic.

One of the reactions comes from Larry O’Brien at knowing.net, who describes the 10X engineer like this:

This is folklore, not science, and it is not the view of people who actually study the industry.

Professional talent does vary, but there is not a shred of evidence that the best professional developers are an order of magnitude more productive than median developers at any timescale, much less on a meaningful timescale such as that of a product release cycle. There is abundant evidence that this is not the case: the most obvious being that there are no companies, at any scale, that demonstrate order-of-magnitude better-than-median productivity in delivering software products. There are companies that deliver updates at a higher cadence and of a higher quality than their competitors, but not 10x median. The competitive benefits of such productivity would be overwhelming in any industry where software was important (i.e., any industry); there is virtually no chance that such an astonishing achievement would go unremarked and unexamined.

In another article from 2008 Larry O’Brien gets into the specifics of programmer productivity:

That incompetents manage to stay in the profession is a lot less fun than a secret society of magical programmers, but the (sparse) data seem consistent in saying that while individuals vary significantly, the “average above-average” programmer will be only a small multiple (perhaps around three times) faster than the “average below-average” developer (see, for instance, Lutz Prechelt’s work at citeseer.ist.psu.edu/265148.html).

So there seems to be some disagreement on this axiom, which is precisely why I started this series – how many of my axioms are really just nice ideas?

One of the problems with axioms is working out where I first came across them; this one is proving difficult to remember. I suspect that it comes from my old friends Tom DeMarco and Timothy Lister, writing in Peopleware:

Three rules of thumb seem to apply whenever you measure variations in performance over a sample of individuals:

  • Count on the best people outperforming the worst by about 10:1.
  • Count on the best performer being about 2.5 times better than the median performer.
  • Count on the half that are better-than-median performers out-doing the other half by more than 2:1.

Peopleware: Individual Differences

But where did this come from? Peopleware tells us: “[this diagram], for example, is a composite of the findings from three different sources on the extent of variations among individuals.” So it comes from research on software programmers undertaken around 1984.
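To make those three rules of thumb concrete, here’s a minimal sketch using an entirely hypothetical productivity sample; the numbers are invented for illustration and are not drawn from DeMarco and Lister’s data:

```python
# A toy illustration of the Peopleware rules of thumb on productivity spread.
# The "units of work per week" figures below are invented for illustration.

import statistics

productivity = [2, 4, 5, 6, 7, 8, 10, 12, 15, 20]  # ten hypothetical individuals

best = max(productivity)
worst = min(productivity)
median = statistics.median(productivity)

sorted_sample = sorted(productivity)
half = len(sorted_sample) // 2
lower_half_total = sum(sorted_sample[:half])
upper_half_total = sum(sorted_sample[half:])

print(f"Best vs worst:            {best / worst:.1f}:1")   # rule 1: about 10:1
print(f"Best vs median:           {best / median:.1f}x")   # rule 2: about 2.5x
print(f"Upper half vs lower half: "
      f"{upper_half_total / lower_half_total:.1f}:1")      # rule 3: more than 2:1
```

With this made-up sample the best-to-worst ratio comes out at 10:1, best-to-median at about 2.7x and the better-than-median half out-doing the other half by about 2.7:1 – roughly in line with the rules of thumb.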

You may have noticed that I was vague at the beginning of the post about who the 10X people were being compared with – the median, the worst? It was deliberate, because I didn’t know; the axiom had become degraded in my memory over time and I couldn’t be specific. I was confused, and after doing some digging, I don’t think I’m the only one.

DeMarco and Lister point to and reference some real research for the 10X being between worst and best, which seems like a safe place to be. Even the critics agree that individual productivity varies significantly, so a sizeable gap between best and worst seems like a safe place to be too.

I feel like I’m having to constrain my curiosity a bit because there would appear to be so much more to learn but my time is limited. So I’m sticking to the safe areas.

Whatever the true axiom, we all need to understand that there is a significant difference in people’s productivity (however you might be measuring productivity), which makes it vitally important that we get the right people doing the right things. But it’s also important that we understand what our own 10X place is, seek to optimise our time there and try to remove the constraints that are keeping us from getting there (he writes after a day of endless interruptions and chats resulting in very little personal productivity).

Axiom: Interruptions cost 20 minutes

You’re sitting at your desk, working away, focussing on a problem that’s been on your list to resolve for weeks.

You start to uncover the various layers of the problem, ruling some things out, adding new things in.

This isn’t a simple problem; it’s a bit complicated, and you feel a little like Poirot unravelling a mystery. You’re starting to build a real sense of achievement.

You’re not sure how long you’ve been working on this problem, but just at the point you’re starting to see some light at the end of the tunnel your boss walks in and asks why, yet again, you haven’t provided your weekly status report. You explain that you’ve been very busy doing real work and didn’t think anyone read the status reports anyway.

After a two-minute conversation you return to your problem, but you’ve lost the thread – “where was I again?” You curse your boss. You curse yourself for coming into the office today.

You start all over again trying to resolve this knotty little problem. It takes you an age to regain the concentration that you had.

This is such a common problem that we accept it as normal. People have even adapted their working habits to try and carve out some time to get some work done.

The interruptions abound – email, phones, instant messaging, social media, people, meetings. But what is the cost of those interruptions?

My axiom has always been that the cost of an interruption is 20 minutes.

I thought that I’d got the 20-minute figure from a book called Peopleware by Tom DeMarco and Timothy Lister, but I’ve recently been rereading it and it actually says this:

During single-minded work time, people are ideally in a state that psychologists call flow. Flow is a condition of deep, nearly meditative involvement…

Not all work roles require that you attain a state of flow in order to be productive, but to anyone involved in engineering, design, development, writing, or like tasks, flow is a must. These are high-momentum tasks. It’s only when you’re in flow that the work goes well.

Unfortunately, you can’t turn on flow like a switch. It takes a slow descent into the subject, requires fifteen minutes or more of concentration before the state is locked in. During this immersion period, you are particularly sensitive to noise and interruption. A disruptive environment can make it difficult or impossible to attain flow.

So where did I get 20 minutes from? Perhaps it’s just one of those things that changes in your mind over time. Not that it’s really that important: the significant factor here is that an interruption costs you significantly more than the length of the disturbance.
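If you take the Peopleware figure at face value, the arithmetic is sobering. Here’s a minimal sketch; the 15-minute re-immersion time comes from the quote above, while the interruption length and frequency are assumptions purely for illustration:

```python
# The real cost of an interruption: the disturbance itself plus the time
# needed to get back into flow. The 15-minute re-immersion figure is from
# Peopleware; the other numbers are assumptions for illustration.

REIMMERSION_MINUTES = 15       # "fifteen minutes or more of concentration"
interruption_minutes = 2       # a "quick question" from the boss
interruptions_per_day = 6      # assumed for illustration

cost_per_interruption = interruption_minutes + REIMMERSION_MINUTES
daily_cost = cost_per_interruption * interruptions_per_day

print(f"Each {interruption_minutes}-minute interruption costs about "
      f"{cost_per_interruption} minutes of focused time.")
print(f"{interruptions_per_day} of them in a day: roughly {daily_cost} minutes "
      f"({daily_cost / 60:.1f} hours) of flow lost.")
```

Six two-minute interruptions works out at well over an hour and a half of lost flow, which is why the cost is so much bigger than the disturbance itself.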

What Peopleware outlines is a theory called flow, and the real question, therefore, is whether this theory is really the way our minds work.

The theory of flow appears to have been popularised by Mihaly Csikszentmihalyi (no, I don’t know how to say it either) in the 1990s, based on research from the 1960s and 1970s. The ideas of being in flow, in the zone or in the groove have been around for much longer than that.

A great deal of research has been undertaken which, for the most part, appears to validate the theory outlined by Csikszentmihalyi. For once the article on Wikipedia appears to be reasonably authoritative and well referenced.

So I’m reasonably happy that the axiom is true, even if it’s not specifically 20 minutes, but we all work in the real world. How do we work in a way that minimises the impact?

The first part of resolving most problems is recognising that the problem exists; many people don’t.

The second part of overcoming a problem is to recognise the part that we are in control of. I don’t think I’m unique in being able to generate my own set of interruptions. There are also things that I can do to manage many of the disruptions.

There are all sorts of schemes that people use, and I don’t think that there is one that suits everyone. The following mind map (not my own) reflects some of the things that I do:

Axiom: A Picture is Worth a Thousand Words

I really like pictures.

The most visited page on this site is one about Rich Pictures.

I regularly pick out interesting Infographics.

One of my favourite books at home is called Information is Beautiful which is named after the popular website.

Why? Because “a picture is worth a thousand words”, or at least that’s the axiom I tell myself.

I wonder, though, whether this is really true.

If it were really true we’d spend much more time drawing, and far less time writing words. Yet writing words is what we do and do a lot (much like I’m doing now).

Many think that the saying is ancient and oriental, but the evidence for that is somewhat sketchy, at least for a literal translation. What can be said is that it was used in the 1920s, became popular in the 1940s and continues to be a preferred phrase. The variation “A picture speaks a thousand words” didn’t come along until the 1970s:

[Chart: usage of the two phrases over time]

Just because something is popular, and just because it appears to be true doesn’t mean that it is true.

In order to assess the validity of the axiom I set off down the scientific route. What research was there on the value of diagrams?

If it were to be true then there would be some clear evidence for a picture being a much better way of communicating than a set of either spoken or written words.

I was always taught that there were three types of learners: visual learners, auditory (listening) learners and kinaesthetic (doing) learners. So I wondered whether there might be some mileage in the research done into that particular subject. If visual learners are stronger than auditory learners then it would add weight to the premise. But it turns out that learning styles might be one of my anti-axioms. So I gave that up as a dead-end.

My next port of call was to think of one particular diagram type and see whether there was any science behind the value of a particular technique.

Most of the pictures I draw are really diagrams with the purpose of communicating something.

As a fan of mind maps as a diagramming technique I wondered whether there was any clear evidence of their value. Back in 2006 Philip Beadle wrote an article in The Guardian on this subject and the use of mind maps in education:

The popular science bit goes like this. Your brain has two hemispheres, left and right. The left is the organised swot who likes bright light, keeps his bedroom tidy and can tolerate sums. Your right hemisphere is your brain on drugs: the long-haired, creative type you don’t bring home to mother.

According to Buzan, orthodox forms of note-taking don’t stick in the head because they employ only the left brain, the swotty side, leaving our right brain, like many creative types, kicking its heels on the sofa, watching trash TV and waiting for a job offer that never comes. Ordinary note-taking, apparently, puts us into a "semi-hypnotic trance state". Because it doesn’t fully reflect our patterns of thinking, it doesn’t aid recall efficiently. Buzan argues that using images taps into the brain’s key tool for storing memory, and that the process of creating a mind map uses both hemispheres.

The trouble is that lateralisation of brain function is scientific fallacy, and a lot of Buzan’s thoughts seem to rely on the old "we only use 10% of the neurons in our brain at one time" nonsense. He is selling to the bit of us that imagines we are potentially super-powered, probably psychic, hyper-intellectuals. There is a reason we only use 10% of our neurons at one time. If we used them all simultaneously we would not, in fact, be any cleverer. We would be dead, following a massive seizure.

He goes further:

As visual tools, mind maps have brilliant applications for display work. They appear to be more cognitive than colouring in a poster. And I think it is beyond doubt that using images helps recall. If this is the technique used by the memory men who can remember 20,000 different digits in sequence while drunk to the gills, then it’s got to be of use to the year 8 bottom set.

The problem is that visual ignoramuses, such as this writer, can’t think of that many pictures and end up drawing question marks where a frog should be.

Oh dear, another cul-de-sac. In researching mind maps, though, I did get to a small titbit of evidence, unfortunately from Wikipedia (not always the most reliable source):

Farrand, Hussain, and Hennessy (2002) found that spider diagrams (similar to concept maps) had a limited but significant impact on memory recall in undergraduate students (a 10% increase over baseline for a 600-word text only) as compared to preferred study methods (a 6% increase over baseline).

That’ll do for me for now, it’s not "a thousand words" but it’s good enough for my purposes.

Why am I comfortable with just a small amount of evidence? Because this is one of those axioms where it’s not only about scientific proof.

Thinking about pictures in their broadest sense, there are certainly pictures that would take more than a thousand words to describe.

There are pictures that communicate emotions in a way that words would struggle to portray.

There are diagrams which portray a simple truth in a way that words would muddle and dilute.

In these situations the picture is clearly worth a lot of words, but our words would all be different. The way I would describe an emotional picture would be different to the words you would use. So it’s not about the number of words, but the number of different words.

This little bit of research has got me thinking though.

How often do we draw a diagram thinking that everyone understands it, when we’re really excluding the “visual ignoramuses” (as Philip Beadle describes himself) or the “visually illiterate” (as others describe it)?

In order to communicate we need to embrace both visual literacy and linguistic literacy in a way that is accessible to the audience. I used to have a rule in documentation, "every diagram needs a description". The PowerPoint age has taken us away from that a bit and perhaps it’s time to re-establish it so that we can embrace the visual and the literal.

I’m happy to keep this as an axiom, but I need to be a bit more careful about where I apply it.

To conclude:

[Concluding image]

Axioms: An Occasional Series

I’ve been thinking and reading quite a bit recently about axioms:

ax·i·om

[ak-see-uhm] noun

  1. a self-evident truth that requires no proof.
  2. a universally accepted principle or rule.
  3. Logic, Mathematics. a proposition that is assumed without proof for the sake of studying the consequences that follow from it.

As I think about the way that I approach things, I realise that there is a set of axioms that I tend to work from, things that I think are self-evident. They’re normally sayings that I have in my head that shape the way I think about a situation. Some of them have been gleaned from my experience, some from my education, but to be honest I don’t think I know where most of them have come from or why I think they are good principles.

I wonder how many of my personal axioms are really any good; just because I think they are universally accepted doesn’t mean that they are. So I’ve decided to put a few of them under the microscope by doing a bit of research into their validity. I plan to write honestly about what I’ve found. Hopefully I’ll uncover some things that are definitely true (as far as we understand it), but I’m also looking forward to finding some anti-axioms that are not true at all.

Now where to start?