Humans and Robots: China, Microsoft-Baidu, Google and Einstein

The Economist has recently been reporting on China’s advances in AI, using the volume of patents and the number of AI companies as indicators.

If you still have China pegged as just a cheap offshore manufacturing country, then you have been wrong for some time now. The future of Silicon Valley as the world’s preeminent driver of innovation is not assured, and China is likely to have a significant impact. If you’d like to think a bit more about China as an innovator, this is a good place to start:

The problem with China isn’t China – it’s usually us

Another recent example of China as an innovator is the partnership between Baidu and Microsoft, with Microsoft providing Azure capabilities for Baidu’s autonomous-vehicle technologies.

“We are excited to have Microsoft as part of the Apollo alliance. Our goal with Apollo is to provide an open and powerful platform to the automotive industry to further the goal of autonomous vehicles,” said Ya-Qin Zhang, president of Baidu. “By using Azure, our partners outside of China will have access to a trustworthy and secure public cloud, enabling them to focus on innovating instead of building their own cloud-based infrastructure.”

That’s not the only news from Microsoft; they have also been talking about the work they are doing on machine reading through ReasoNet:

Teaching a computer to read and answer general questions pertaining to a document is a challenging yet unsolved problem. In this paper, we describe a novel neural network architecture called the Reasoning Network (ReasoNet) for machine comprehension tasks. ReasoNets make use of multiple turns to effectively exploit and then reason over the relation among queries, documents, and answers.
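To make the "multiple turns" idea concrete, here is a toy Python sketch of the control flow: an internal state repeatedly attends over the encoded document, updates itself, and a termination gate decides when enough has been read to answer. This is my own illustration, not the paper’s model; the random weights are stand-ins for parameters that would be learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and random stand-in weights (learned in the real model).
d = 8                                   # hidden size
memory = rng.standard_normal((20, d))   # encoded document, 20 positions
state = rng.standard_normal(d)          # initial state derived from the query
W_update = rng.standard_normal((d, 2 * d)) * 0.1
w_term = rng.standard_normal(d) * 0.1

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for turn in range(10):                  # hard cap on reasoning turns
    # Attention: compare the current state with every document position.
    scores = memory @ state
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ memory          # attended summary of the document

    # Update the internal state from the old state plus the context
    # (a simplified stand-in for the GRU-style update the paper uses).
    state = np.tanh(W_update @ np.concatenate([state, context]))

    # Termination gate: stop reading once confident enough to answer.
    if sigmoid(w_term @ state) > 0.5:
        print(f"answering after {turn + 1} turns")
        break
```

In the paper itself the stop decision is stochastic and trained with reinforcement learning, because that discrete choice can’t be trained by ordinary backpropagation; the deterministic threshold above is just for readability.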

If you want a broader view of what Microsoft are doing in this area, here’s a great overview:

Microsoft is doubling down on machine reading as part of its AI focus

Whilst Microsoft are talking about teaching AI to read and understand, Google have been talking about how they are using AI to edit photos:

Landscape photography is hard, no matter how beautiful an environment you’re shooting in. You need to be well-versed in composition, deal with weather conditions, know how to adjust your camera settings for the best possible shot, and then edit it to come up with a pleasing picture.

Google might be close to solving the last part of that puzzle: a couple of its Machine Perception researchers have trained a deep-learning system to identify objectively fine landscape panorama photos from Google Street View, and then artistically crop and edit them like a human photographer would.

The results are really very good.
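To give a flavour of how such a system might be put together, here is a hypothetical sketch of the "propose crops, score them, keep the best" loop in Python. The aesthetic_score callable is an assumption on my part, standing in for a learned model that rates how pleasing an image is; this is not Google’s published code.

```python
# Hypothetical sketch: slide a window over a panorama and keep the crop
# that a learned aesthetic model rates highest. aesthetic_score is an
# assumed stand-in for such a model, not a real Google API.
from itertools import product
from PIL import Image

def best_crop(panorama: Image.Image, aesthetic_score,
              window=(800, 600), stride=200):
    w, h = window
    best, best_score = None, float("-inf")
    for left, top in product(range(0, panorama.width - w + 1, stride),
                             range(0, panorama.height - h + 1, stride)):
        crop = panorama.crop((left, top, left + w, top + h))
        score = aesthetic_score(crop)   # e.g. a CNN trained on good photos
        if score > best_score:
            best, best_score = crop, score
    return best
```

Since the researchers describe editing as well as cropping, a fuller system would presumably also apply candidate adjustments (lighting, saturation and so on) before scoring, but the propose-and-score loop is the heart of it.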

Whilst this is all very interesting, it’s not as much fun as having a robot Einstein on your desk (even if the effect is a bit creepy).

Humans and Robots: Are the Robots Biased?

Humans are biased in all sorts of ways; there are over 160 classified cognitive biases:

  • Bandwagon effect
  • Confirmation bias
  • Framing effect
  • Mere exposure effect
  • etc.

As our world is increasingly dominated by algorithms, how do we know what their biases are? That’s the question Matthias Spielkamp is asking over at MIT Technology Review:

Courts, banks, and other institutions are using automated data analysis systems to make decisions about your life. Let’s not leave it up to the algorithm makers to decide whether they’re doing it appropriately.

Inspecting Algorithms for Bias – MIT Technology Review

As we hand capability over to algorithms, we need to decide who is responsible for the results those algorithms produce. One of the reasons we use algorithms is to remove human bias, but how do we know that the algorithms aren’t deeply biased themselves? We should be very concerned when an algorithm produces racial biases like the ones highlighted in the article (COMPAS is an algorithm used to assess whether people will re-offend):

When ProPublica compared COMPAS’s risk assessments for more than 10,000 people arrested in one Florida county with how often those people actually went on to reoffend, it discovered that the algorithm “correctly predicted recidivism for black and white defendants at roughly the same rate.” But when the algorithm was wrong, it was wrong in different ways for blacks and whites. Specifically, “blacks are almost twice as likely as whites to be labeled a higher risk but not actually re-offend.” And COMPAS tended to make the opposite mistake with whites: “They are much more likely than blacks to be labeled lower risk but go on to commit other crimes.”
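It’s worth pausing on how an algorithm can be "correct at roughly the same rate" for two groups and still treat them very differently. A toy calculation makes the distinction clear; the numbers below are invented for illustration and are not ProPublica’s data:

```python
# Invented confusion counts for two groups, chosen so that overall
# accuracy is identical but the *kinds* of error differ sharply.
groups = {
    # tp: labelled high risk and reoffended, fp: high risk but didn't,
    # tn: low risk and didn't reoffend,      fn: low risk but reoffended
    "A": {"tp": 300, "fp": 200, "tn": 400, "fn": 100},
    "B": {"tp": 300, "fp": 100, "tn": 400, "fn": 200},
}

for name, c in groups.items():
    accuracy = (c["tp"] + c["tn"]) / sum(c.values())
    fpr = c["fp"] / (c["fp"] + c["tn"])  # wrongly labelled high risk
    fnr = c["fn"] / (c["fn"] + c["tp"])  # wrongly labelled low risk
    print(f"group {name}: accuracy {accuracy:.0%}, "
          f"false positives {fpr:.0%}, false negatives {fnr:.0%}")
```

Both groups come out at 70% accuracy, yet group A is far more likely to be wrongly labelled high risk while group B’s errors run the other way. That is the asymmetry ProPublica found, and a single headline accuracy figure hides it completely.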

The algorithm in question is operated by a commercial organisation; it’s not published openly, and the people about whom it makes life-impacting recommendations have no right of appeal against its results. That’s an ethical issue that doesn’t just apply to this one algorithm; it applies to many, many algorithms:

  • What is your right of appeal against the algorithm that decides your credit rating?
  • Can you examine the algorithm that decides your car insurance premium?
  • What are the biases built into the algorithm that sends you advertising every day?

As algorithms embed themselves deeper into our lives how are we going to respond to their biases?

  • How do we respond to an autonomous car that has biases in a crash situation?
  • How do we know that economic algorithms aren’t going to favour some parts of society over others?

We still have a long way to go before we have an equitable and workable 21st Century ethics for the machines.

Is spelling overrated?

If I were to list out my strengths, spelling wouldn’t come anywhere near the top of the list. The construction of letters to make words has always been an unfathomable mystery to me. So I was really interested to read an article in Wired Magazine by Anne Trubek suggesting that we all loosen up a bit:

English spelling is a terrible mess anyway, full of arbitrary contrivances and exceptions that outnumber rules. Why receipt but deceit? Water but daughter? Daughter but laughter? What is the logic behind the ough in through, dough, and cough? Instead of trying to get the letters right with imperfect tools, it would be far better to loosen our idea of correct spelling.

Anne then goes on to say:

So who shud tell us how to spel? Ourselves. Language is not static—or constantly degenerating, as many claim. It is ever evolving, and spelling evolves, too, as we create new words, styles, and guidelines (rules governing use of the semicolon date to the 18th century, meaning they’re a more recent innovation than the steam engine). The most widely used American word in the world, OK, was invented during the age of the telegraph because it was concise. No one considers it, or abbreviations like ASAP and IOU, a sign of corruption. More recent textisms signal a similarly creative, bottom-up play with language: “won” becomes “1,” “later” becomes “l8r.” After all, new technology creates new inertia for change: The apostrophe requires an additional step on an iPhone, so we send text messages using “your” (or “UR”) instead of “you’re.” And it doesn’t matter—the messagee will still understand our message.

I have a lot of sympathy for this point of view; even my surname is an example of how the language has shifted. "Chastney" isn’t how it was originally spelled; it’s not even how it was originally said. Neither are "Chesney", "Chasney", "Chasnet", "Cheney" or the many other derivatives. Are these all wrong? Are any of them wrong?

(The comment stream at the bottom of the article is exactly what I would expect. This subject brings out very strong opinions in people for reasons that are beyond my understanding.)