- Bandwagon effect
- Confirmation bias
- Framing effect
- Mere exposure effect
As our world is increasingly dominated by algorithms, how do we know what their biases are? That’s the question Matthias Spielkamp is asking over at MIT Technology Review:
Courts, banks, and other institutions are using automated data analysis systems to make decisions about your life. Let’s not leave it up to the algorithm makers to decide whether they’re doing it appropriately.
As we hand over capabilities to algorithms, we need to decide who bears responsibility for their results. One of the reasons we use algorithms is to remove human bias, but how do we know that the algorithms aren’t deeply biased themselves? We should be very concerned when an algorithm produces racially biased results like the ones highlighted in the article (COMPAS is an algorithm used to assess whether people will re-offend):
When ProPublica compared COMPAS’s risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend, it discovered that the algorithm “correctly predicted recidivism for black and white defendants at roughly the same rate.” But when the algorithm was wrong, it was wrong in different ways for blacks and whites. Specifically, “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend.” And COMPAS tended to make the opposite mistake with whites: “They are much more likely than blacks to be labeled lower risk but go on to commit other crimes.”
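The subtle point in that finding is that two groups can see the same overall accuracy while the *direction* of the errors differs sharply between them. A minimal sketch of that kind of error-rate analysis, using made-up illustrative numbers rather than the actual COMPAS data:

```python
# Sketch of a per-group error-rate comparison, in the spirit of the
# ProPublica analysis. All figures below are hypothetical, chosen only
# to show how equal accuracy can hide opposite error skews.

def error_rates(records):
    """records: list of (predicted_high_risk, reoffended) booleans.
    Returns (false_positive_rate, false_negative_rate)."""
    fp = sum(1 for pred, actual in records if pred and not actual)
    fn = sum(1 for pred, actual in records if not pred and actual)
    negatives = sum(1 for _, actual in records if not actual)
    positives = sum(1 for _, actual in records if actual)
    return fp / negatives, fn / positives

# Two hypothetical groups, each with 70% overall accuracy:
group_a = [(True, True)] * 40 + [(True, False)] * 20 + \
          [(False, True)] * 10 + [(False, False)] * 30
group_b = [(True, True)] * 40 + [(True, False)] * 10 + \
          [(False, True)] * 20 + [(False, False)] * 30

fpr_a, fnr_a = error_rates(group_a)
fpr_b, fnr_b = error_rates(group_b)
print(f"Group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")  # labelled high risk but didn't re-offend
print(f"Group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")  # labelled low risk but did re-offend
```

Here group A suffers twice as many false "high risk" labels, while group B gets the opposite mistake, even though a headline accuracy figure would rate the algorithm identically for both.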
The algorithm in question is operated by a commercial organisation, it’s not published openly, and the people for whom it is making life-impacting recommendations have no right of appeal against its results. That’s an ethical issue that doesn’t just apply to this one algorithm; it applies to many, many algorithms:
- What is your right of appeal against the algorithm that decides your credit rating?
- Can you examine the algorithm that decides your car insurance premium?
- What are the biases built into the algorithm that sends you advertising every day?
As algorithms embed themselves deeper into our lives, how are we going to respond to their biases?
- How do we respond to an autonomous car that has biases in a crash situation?
- How do we know that economic algorithms aren’t going to favour some parts of society over others?
We still have a long way to go before we have an equitable and workable 21st-century ethics for the machines.