Humans and Robots: Are the Robots Biased?

Humans are biased in all sorts of ways; there are over 160 classified cognitive biases:

  • Bandwagon effect
  • Confirmation bias
  • Framing effect
  • Mere exposure effect
  • etc.

As our world is increasingly dominated by algorithms, how do we know what their biases are? That’s the question that Matthias Spielkamp over at MIT Technology Review is asking:

Courts, banks, and other institutions are using automated data analysis systems to make decisions about your life. Let’s not leave it up to the algorithm makers to decide whether they’re doing it appropriately.

Inspecting Algorithms for Bias, MIT Technology Review

As we hand over capability to algorithms, we need to decide who takes responsibility for their results. One of the reasons we use algorithms is to remove human bias, but how do we know that the algorithms aren’t deeply biased themselves? We should be very concerned when an algorithm produces racial biases like the ones highlighted in the article (COMPAS is an algorithm used to assess whether people will re-offend):

When ProPublica compared COMPAS’s risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend, it discovered that the algorithm “correctly predicted recidivism for black and white defendants at roughly the same rate.” But when the algorithm was wrong, it was wrong in different ways for blacks and whites. Specifically, “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend.” And COMPAS tended to make the opposite mistake with whites: “They are much more likely than blacks to be labeled lower risk but go on to commit other crimes.”
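The finding in that quote can seem paradoxical: how can an algorithm be equally accurate for two groups yet biased against one of them? A toy example makes it concrete. The numbers below are invented for illustration, not ProPublica’s actual data, but they show how two groups can share the same overall accuracy while the false positive rate (labelled high risk, didn’t re-offend) and false negative rate (labelled low risk, did re-offend) diverge:

```python
# Hypothetical confusion-matrix counts for two groups (illustrative only):
# tp = correctly labelled high risk, fp = labelled high risk but didn't re-offend,
# tn = correctly labelled low risk,  fn = labelled low risk but did re-offend.
groups = {
    "group A": {"tp": 300, "fp": 200, "tn": 400, "fn": 100},
    "group B": {"tp": 300, "fp": 100, "tn": 400, "fn": 200},
}

for name, m in groups.items():
    accuracy = (m["tp"] + m["tn"]) / sum(m.values())
    fpr = m["fp"] / (m["fp"] + m["tn"])  # wrongly flagged as high risk
    fnr = m["fn"] / (m["fn"] + m["tp"])  # wrongly cleared as low risk
    print(f"{name}: accuracy={accuracy:.2f}, "
          f"false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

Both groups come out at 70% accuracy, but group A is wrongly flagged as high risk far more often, while group B is more often wrongly labelled low risk. “Equally accurate” and “equally fair” are not the same thing.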

The algorithm in question is operated by a commercial organisation and is not published openly; the people about whom it makes life-impacting recommendations have no right of appeal against its results. That’s an ethical issue that doesn’t just apply to this one algorithm, it applies to many, many algorithms:

  • What is your right of appeal against the algorithm that decides your credit rating?
  • Can you examine the algorithm that decides your car insurance premium?
  • What are the biases built into the algorithm that sends you advertising every day?

As algorithms embed themselves deeper into our lives how are we going to respond to their biases?

  • How do we respond to an autonomous car that has biases in a crash situation?
  • How do we know that economic algorithms aren’t going to favour some parts of society over others?

We still have a long way to go before we have an equitable and workable 21st-century ethics for the machines.

Is spelling overrated?

If I were to list out my strengths, spelling wouldn’t come anywhere near the top of the list. The construction of letters into words has always been an unfathomable mystery to me. So I was really interested to read an article in Wired Magazine by Anne Trubek suggesting that we all loosen up a bit:

English spelling is a terrible mess anyway, full of arbitrary contrivances and exceptions that outnumber rules. Why receipt but deceit? Water but daughter? Daughter but laughter? What is the logic behind the ough in through, dough, and cough? Instead of trying to get the letters right with imperfect tools, it would be far better to loosen our idea of correct spelling.

Anne then goes on to say:

So who shud tell us how to spel? Ourselves. Language is not static—or constantly degenerating, as many claim. It is ever evolving, and spelling evolves, too, as we create new words, styles, and guidelines (rules governing use of the semicolon date to the 18th century, meaning they’re a more recent innovation than the steam engine). The most widely used American word in the world, OK, was invented during the age of the telegraph because it was concise. No one considers it, or abbreviations like ASAP and IOU, a sign of corruption. More recent textisms signal a similarly creative, bottom-up play with language: “won” becomes “1,” “later” becomes “l8r.” After all, new technology creates new inertia for change: The apostrophe requires an additional step on an iPhone, so we send text messages using “your” (or “UR”) instead of “you’re.” And it doesn’t matter—the messagee will still understand our message.

I have a lot of sympathy for this point of view; even my surname is an example of how the language has shifted. "Chastney" isn’t how it was originally spelled, and it’s not even how it was originally said. Neither are "Chesney", "Chasney", "Chasnet", "Cheney", or the many other derivatives. Are these all wrong? Are any of them wrong?

(The comment stream at the bottom of the article is exactly what I would expect. This subject brings out very strong opinions in people for reasons that are beyond my understanding.)