Concept of the Day: Campbell’s Law

Campbell’s law is defined by the following quote from Donald T. Campbell:

“The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

In other words: the higher the stakes associated with a measure, the more likely it is that the measure becomes corrupted and, in so doing, that the system being measured becomes corrupted too.

Put high stakes against a school exam and it becomes more likely that people teach to get a high pass mark, and in so doing teaching becomes corrupted.

Put high stakes against a business measure and it becomes more likely that people manage to the measure, or even falsify it, and in so doing corrupt the business.

There are numerous places where you can see this being worked out historically; the more important question, though, is where is this happening today?

What effect does it have if you stop people’s benefits if they don’t fill out a defined number of job applications?

What effect does it have if you pay a traffic warden on the basis of the number of fines they manage to issue?

What effect does it have if you fine rail operators for late trains?

What effect does it have if you pay doctors on the basis of the number of appointments they complete?

I’m sure there are many, many more.

This little video does a really nice job of explaining Campbell’s Law:

Office Speak: Sunsetting

The other day I received an email along the lines of:

On the first of the month after next we will be sunsetting the whatamI4 system.

I knew what it meant, but it struck me as a strange phrase to use.

I suppose I ought to explain what it means for those of you who haven’t come across the term before. I’ll replace the word sunsetting with something else to see if that helps:

On the first of the month after next we will be turning off the whatamI4 system

That’s right: sunsetting = turning off.

Sunsetting with 10 characters = turning off with 10 characters.

Sunsetting with 3 syllables = turning off with 3 syllables.

I suppose that’s my question: why not just say that it’s being turned off?

Returning to the original sentence, why not say:

On the first of the month after next whatamI4 will be turned off.

There you go, that’s shorter and simpler than either of the previous ones.

Or even:

whatamI4 will be turned off on the first of the month after next

I prefer this because it gives a much better call to action.

I’m not objecting to sunsetting; it just feels like redundant complexity.

Perhaps I’m not being entirely fair though. There is a picture being drawn here and there is a difference between turning off and sunsetting. The term sunsetting is trying to communicate that the light is drawing in on the application and that it’s time to move over to something else. Turning something off happens quite quickly, even instantaneously; sunsetting may happen over an extended period.

It’s not a word I hear people use in normal life though – it’s office speak.

Because it’s Friday: “WoodSwimmer” by Brett Foxwell

Stop motion appears quite often on a Friday; that’s partly because I like it, but also because I’m fascinated by the amount of time people will spend creating a piece.

Today’s stop motion must have taken an extraordinary amount of effort and patience with each frame being built by shaving off another layer of wood.

Brett Foxwell, who created it, describes it like this:

It was a challenging technique to perfect, but once I did, I was able to shoot short sequences that move the camera through samples of hardwood, burls and branches. The result is beautiful imagery both abstract and very real. In the twisting growth rings and the swirling rays, a new universe is revealed.

Via Colossal

Humans and Robots: Are the Robots Biased?

Humans are biased in all sorts of ways; there are over 160 classified cognitive biases:

  • Bandwagon effect
  • Confirmation bias
  • Framing effect
  • Mere exposure effect
  • etc.

As our world is increasingly dominated by algorithms, how do we know what their biases are? That’s the question that Matthias Spielkamp over at MIT Technology Review is asking:

Courts, banks, and other institutions are using automated data analysis systems to make decisions about your life. Let’s not leave it up to the algorithm makers to decide whether they’re doing it appropriately.

Inspecting Algorithms for Bias via MIT Technology Review

As we hand over capability to algorithms, we need to decide what we are doing about the responsibility for the results of those algorithms. One of the reasons we use algorithms is to remove human bias, but how do we know that the algorithms aren’t deeply biased themselves? We should be very concerned when the results of an algorithm produce racial biases like the ones highlighted in the article (COMPAS is an algorithm used to assess whether people will re-offend):

When ProPublica compared COMPAS’s risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend, it discovered that the algorithm “correctly predicted recidivism for black and white defendants at roughly the same rate.” But when the algorithm was wrong, it was wrong in different ways for blacks and whites. Specifically, “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend.” And COMPAS tended to make the opposite mistake with whites: “They are much more likely than blacks to be labeled lower risk but go on to commit other crimes.”
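To see how both of those statements can be true at once, here is a minimal sketch in Python. The numbers are entirely invented for illustration (they are not the ProPublica or COMPAS figures); the point is simply that two groups can show the same overall accuracy while the mistakes fall on them very differently:

```python
# Hypothetical illustration only: the counts below are invented,
# not the ProPublica/COMPAS data.

def error_rates(outcomes):
    """outcomes: list of (predicted_high_risk, reoffended) boolean pairs."""
    accuracy = sum(pred == actual for pred, actual in outcomes) / len(outcomes)
    # Predictions for people who did NOT reoffend / who DID reoffend.
    negatives = [pred for pred, actual in outcomes if not actual]
    positives = [pred for pred, actual in outcomes if actual]
    false_positive_rate = sum(negatives) / len(negatives)            # flagged high risk, didn't reoffend
    false_negative_rate = sum(not p for p in positives) / len(positives)  # passed as low risk, reoffended
    return accuracy, false_positive_rate, false_negative_rate

# Two invented groups: same overall accuracy, opposite kinds of mistakes.
group_a = [(True, True)] * 40 + [(True, False)] * 20 + [(False, False)] * 30 + [(False, True)] * 10
group_b = [(True, True)] * 30 + [(True, False)] * 10 + [(False, False)] * 40 + [(False, True)] * 20

for name, group in (("Group A", group_a), ("Group B", group_b)):
    acc, fpr, fnr = error_rates(group)
    print(f"{name}: accuracy={acc:.0%}, false positives={fpr:.0%}, false negatives={fnr:.0%}")
```

Both invented groups come out at 70% accuracy, yet one is twice as likely to be wrongly flagged as high risk and the other twice as likely to be wrongly passed as low risk, which is exactly the kind of asymmetry ProPublica reported.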

The algorithm in question is operated by a commercial organisation; it’s not published openly, and the people for whom it is making life-impacting recommendations have no right of appeal against its results. That’s an ethical issue that doesn’t just apply to this one algorithm; it applies to many, many algorithms:

  • What is your right of appeal against the algorithm that decides your credit rating?
  • Can you examine the algorithm that decides your car insurance premium?
  • What are the biases built into the algorithm that sends you advertising every day?

As algorithms embed themselves deeper into our lives how are we going to respond to their biases?

  • How do we respond to an autonomous car which has biases in a crash situation?
  • How do we know that economic algorithms aren’t going to favour some parts of society over others?

We still have a long way to go before we have an equitable and workable 21st Century ethics for the machines.

Axiom: 4-to-1 – Compliment-to-Criticism Ratio

Is there a correct compliment to criticism ratio?

I’ve carried around the ratio of 4-to-1 for a long while now, but never really investigated its origins, or whether it has any basis in fact.

It’s an axiom and hence feels about right, but is it too simplistic? Why 4-to-1? So off I went to do a bit of research.

It turns out that the axiom has an interesting history. I’m going to keep it short, Wikipedia has a longer chronology.

Our brief history begins in 2005 when Marcial Losada and Barbara Fredrickson published a paper in American Psychologist called “Positive affect and the complex dynamics of human flourishing” in which they claimed that the critical ratio of positive to negative affect was exactly 2.9013.

So not 4-to-1, ah well.

Barbara Fredrickson went on to write a book in 2009 titled: Positivity: Top-Notch Research Reveals the 3 to 1 Ratio That Will Change Your Life. In the book she wrote:

“Just as zero degrees Celsius is a special number in thermodynamics, the 3-to-1 positivity ratio may well be a magic number in human psychology.”

The idea of a positivity ratio became popular and entered mainstream thinking, taking on names like the Losada ratio, the Losada line and the Critical Positivity Ratio. I’m not sure when I picked up the idea of a positivity ratio, but I suspect it would be around the 2009, 2010 time-frame.

Then in 2013 Nick Brown, a graduate student, became suspicious of the maths in the study. Working with Alan Sokal and Harris Friedman, Nick Brown reanalysed the data in the original study and found “numerous fundamental conceptual and mathematical errors”. This rendered the claimed ratio completely invalid, leading to a formal retraction of the mathematical elements of the study, including the critical positivity ratio of 2.9013-to-1.

So not only did I get the wrong ratio, it turns out that the ratio is mathematically invalid anyway.

This is where axioms get interesting: scientifically the idea of a 3-to-1 ratio of positivity is rubbish, but there’s something about it that keeps the idea living on. Instinctively we feel that it takes a bucket load more positivity to counteract a small amount of negativity. We know that we hear a criticism much louder than a compliment.

We only have to think about it a little while, though, to realise that a ratio is a massive oversimplification of far more sophisticated interactions. As we interact with people, one criticism can be nothing like another. Imagine the difference between a criticism from a friend and one from a stranger; they are very different. The same is also true for compliments. Thinking on a different dimension, we know that a whole mountain of compliments about trivialities is not going to outweigh a character-impacting criticism.

Perhaps worst of all, though, is no feedback at all?

Humans and Robots: Is your job doomed? Your robot chauffeur is waiting.

How close are the robots? They’re coming, but how far away are they? In 2013 Carl Benedikt Frey and Michael A. Osborne did the analysis for the US. The aptly named WILL ROBOTS TAKE MY JOB? site has made this analysis available alongside other data. The data set is very broad, from shoe and leather workers (robots are watching), to personal care aides (robots are watching), to model makers, metal and plastic (doomed), to software developers, systems software (no worries).

Each of the roles and the analysis undertaken is debatable and a 2017 perspective almost certainly changes many of them, but it’s still a fun exercise to think through the impact of the robots on the many roles that people undertake.

The answer for Taxi Drivers and Chauffeurs is that your job has an 89% probability of automation and that the robots are watching.

I picked on Taxi Drivers and Chauffeurs because it’s been an interesting week for car technology.

Yandex, which is Russia’s equivalent of Google and Uber, has joined the race to create an autonomous vehicle with a project named Yandex.Taxi, aiming for Level 5 autonomy (the highest level of autonomy defined by the SAE), also known as Full Automation.

They already have a demo available:

Cadillac are testing technology that further integrates vehicles into the infrastructure in which they operate using vehicle-to-infrastructure (V2I) communication. Related technology is also being developed to allow cars to talk to each other. In its latest announcement Cadillac are demonstrating how a vehicle would talk to traffic lights to enable a driver to know when the lights are going to change. At one level I think that this is a great idea; at another I can see all sorts of challenges. For me the greatest challenge is nearly always the humans: we have a wonderful ability to outsource our responsibilities to the technology, and the smarter the technology becomes, the lower our feeling of responsibility will become. “Sorry officer, I ran the red light because my car failed to stop.”
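The idea behind the traffic-light demo is simple enough. Here is a rough sketch in Python of the kind of check a car could make once it knows how long is left before the light turns red; the fields, numbers and threshold are my own invention for illustration, not the actual DSRC signal phase and timing messages Cadillac uses:

```python
# Hypothetical V2I "time to red" check. The message fields and values are
# invented for illustration; real DSRC/SPaT messages carry far more detail.

def can_clear_before_red(distance_to_stop_line_m: float,
                         speed_mps: float,
                         seconds_until_red: float,
                         intersection_width_m: float = 20.0) -> bool:
    """True if, at the current speed, the car would pass the far side
    of the intersection before the light turns red."""
    if speed_mps <= 0:
        return False
    time_to_clear = (distance_to_stop_line_m + intersection_width_m) / speed_mps
    return time_to_clear < seconds_until_red

# Example: 60 m from the stop line at 50 km/h (~13.9 m/s), 4 seconds of green left.
if not can_clear_before_red(distance_to_stop_line_m=60.0,
                            speed_mps=13.9,
                            seconds_until_red=4.0):
    print("Warn the driver: prepare to stop")
```

Even in a toy version like this the responsibility question shows up straight away: is that warning advice to the driver, or a decision the car is quietly making on their behalf?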

The robots are definitely watching.

But how far can the robots go? When will their intelligence overtake human intelligence? When will we reach the singularity and what will its impact be? That’s the question posed by a colleague, Annu Singh:

now’s the time to think about and prepare for this tomorrow, before the limits of human intelligence startle us like a soft whisper.


Yandex.Taxi Unveils Self-Driving Car Project via Yandex

The driverless car incorporates Yandex’s own technologies some of which, such as mapping, real-time navigation, computer vision and object recognition, have been functioning in a range of the company’s services for years. The self-driving vehicle’s ability to ‘make decisions’ in complex environments, such as busy city traffic, is ensured by Yandex’s proprietary computing algorithms, artificial intelligence and machine learning.


Cadillac tech ‘talks’ to traffic lights so you don’t run them via Mashable

Cadillac tested out the V2I system by rigging two traffic signals near the GM Warren Technical Center campus to send data to its demo CTS vehicles. The automaker said the stop lights were able to use Dedicated Short-Range Communications (DSRC) protocol — which is the same system used for inter-car V2V communication — to send data to the cars about when the light would turn red.


Singularity in AI: Are we there yet? via DXC Blogs

While we may not be at the point of singularity yet, the growing capability of AI to make decisions, learn and correct its own decision-making process does seem to raise moral, ethical, social and security concerns. Consider the dilemmas being confronted now around self-driving cars. Insurance companies are questioning who owns the liability and risks, who carries the insurance policy. Developers are faced with unimaginable decisions about whose life gets saved in a deadly collision.