Humans and Robots: Is your job doomed? Your robot chauffeur is waiting.

How close are the robots? In 2013 Carl Benedikt Frey and Michael A. Osborne published an analysis of automation risk for occupations in the US. The aptly named WILL ROBOTS TAKE MY JOB? site has made this analysis available alongside other data. The data set is very broad, ranging from shoe and leather workers (robots are watching) to personal care aides (robots are watching) to model makers, metal and plastic (doomed) to software developers, systems software (no worries).

The analysis behind each of the roles is debatable, and a 2017 perspective almost certainly changes many of the assessments, but it’s still a fun exercise to think through the impact of the robots on the many roles that people undertake.

The answer for Taxi Drivers and Chauffeurs is that your job has an 89% probability of automation and that the robots are watching.

I picked on Taxi Drivers and Chauffeurs because it’s been an interesting week for car technology.

Yandex, which is Russia’s equivalent of Google and Uber, has joined the race to create an autonomous vehicle. Its Yandex.Taxi project is aiming for Level 5 autonomy, the highest level defined by the SAE, also known as Full Automation.

They already have a demo available:

Cadillac are testing technology that further integrates vehicles into the infrastructure in which they operate, using vehicle-to-infrastructure (V2I) communication. The same technology is also being developed to allow cars to talk to each other. In its latest announcement Cadillac are demonstrating how a vehicle can talk to traffic lights so that a driver knows when the lights are going to change. At one level I think this is a great idea; at another I can see all sorts of challenges. For me the greatest challenge is nearly always the humans: we have a wonderful ability to outsource our responsibilities to the technology, and the smarter the technology becomes, the lower our feeling of responsibility will become. “Sorry officer, I ran the red light because my car failed to stop.”
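
A toy Python sketch may help make the V2I idea concrete: a signal-timing broadcast and the on-board decision it enables. All the names and fields below are invented for illustration; real DSRC messages (SAE J2735’s Signal Phase and Timing, or SPaT) are binary-encoded and far richer.

```python
from dataclasses import dataclass

# Toy stand-in for a SPaT-style (Signal Phase and Timing) broadcast.
# Field names are invented; real DSRC/SAE J2735 messages are
# binary-encoded and carry much more detail.
@dataclass
class SignalPhaseMessage:
    intersection_id: str
    current_phase: str       # "green", "amber" or "red"
    seconds_to_change: float

def advise_driver(msg: SignalPhaseMessage, seconds_to_reach_light: float) -> str:
    """Turn a broadcast into advice for the driver (or the car)."""
    if msg.current_phase == "green" and msg.seconds_to_change < seconds_to_reach_light:
        return "Light will change before you arrive: prepare to stop."
    if msg.current_phase == "red" and msg.seconds_to_change < seconds_to_reach_light:
        return "Light should be green on arrival: maintain speed."
    return f"Light is {msg.current_phase}: proceed normally."

print(advise_driver(SignalPhaseMessage("warren-01", "green", 4.0), 9.0))
```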

The robots are definitely watching.

But how far can the robots go? When will their intelligence overtake human intelligence? When will we reach the singularity, and what will its impact be? That’s the question posed by a colleague, Annu Singh:

now’s the time to think about and prepare for this tomorrow, before the limits of human intelligence startle us like a soft whisper.


Yandex.Taxi Unveils Self-Driving Car Project via Yandex

The driverless car incorporates Yandex’s own technologies, some of which, such as mapping, real-time navigation, computer vision and object recognition, have been functioning in a range of the company’s services for years. The self-driving vehicle’s ability to ‘make decisions’ in complex environments, such as busy city traffic, is ensured by Yandex’s proprietary computing algorithms, artificial intelligence and machine learning.


Cadillac tech ‘talks’ to traffic lights so you don’t run them via Mashable

Cadillac tested out the V2I system by rigging two traffic signals near the GM Warren Technical Center campus to send data to its demo CTS vehicles. The automaker said the stop lights were able to use Dedicated Short-Range Communications (DSRC) protocol — which is the same system used for inter-car V2V communication — to send data to the cars about when the light would turn red.


Singularity in AI: Are we there yet? via DXC Blogs

While we may not be at the point of singularity yet, the growing capability of AI to make decisions, learn and correct its own decision-making process does seem to raise moral, ethical, social and security concerns. Consider the dilemmas being confronted now around self-driving cars. Insurance companies are questioning who owns the liability and risks, who carries the insurance policy. Developers are faced with unimaginable decisions about whose life gets saved in a deadly collision.

Humans and Robots: AI in Retail, Automotive, Weather and the Newsroom

If I could think of a way to present it, I would create a chart showing the various predictions for job creation and job losses that the robots are going to cause. One thing we can be sure about with nearly all of these predictions: they are likely to be wrong in detail but correct in concept.

The latest prediction is one for retail, which uses a World Economic Forum figure of 30%-50% of retail jobs being at risk from known automation capabilities. The challenge with this figure for most developed economies is that more people are employed in retail than in manufacturing, and many of us know the repercussions of the manufacturing shift; the predicted change is even greater than the one manufacturing experienced.

You might think that retail is at risk because it is easy to automate, but what about journalism? Earlier this month Google brought together a number of journalists to talk about the impact of AI in the newsroom. The meeting discussed a report by the Associated Press, “Report: How artificial intelligence will impact journalism”. Google were highlighting their Google News Lab, which they developed “to support the creation and distribution of the information that keeps us all informed about what’s happening in our world today—quality journalism.” Fake news has, of course, been a huge subject recently. I’m not so much concerned about outright fake news, which is pretty easy to check; I’m more concerned by the potential for AI to create narrow news, where only the statistically high-ranking items become news.
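
To see why narrow news worries me, consider a toy simulation (all numbers invented) of a ranker that scores stories purely on accumulated clicks. Readers click what is already ranked highly, which raises its rank further, and attention collapses onto a handful of stories:

```python
import random

random.seed(42)

# Five stories start with equal click counts.
clicks = {f"story-{i}": 1 for i in range(5)}

# Each round a reader clicks a story with probability proportional to
# its past clicks, i.e. the ranker feeds on its own output.
for _ in range(1000):
    story = random.choices(list(clicks), weights=list(clicks.values()))[0]
    clicks[story] += 1

print(sorted(clicks.items(), key=lambda kv: -kv[1]))
# Typically one or two stories absorb most of the clicks: only the
# statistically high-ranking items become "the news".
```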

Google were also highlighting their prowess at automatically classifying video content, which they will soon be making available via the Google Cloud Video Intelligence API. Classification of content is a massive issue for news organisations, and having a machine do it for you has to be a winner.
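
For the curious, here is roughly what label detection looks like for a developer. A couple of caveats: the API was in private beta at the time of writing, the bucket and file below are placeholders, and the request shape follows the current google-cloud-videointelligence Python client rather than anything in the announcement.

```python
# pip install google-cloud-videointelligence
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Ask for labels on a video sitting in Cloud Storage (placeholder URI).
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-news-archive/clip.mp4",
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)

# Print every entity the model detected somewhere in the video.
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```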

In a more specific case for AI, the UK Met Office has been talking about its use of AI to help predict the weather, something of an obsession for this island nation. This is underlined by the Met Office being one of the UK’s largest users of supercomputing.

The impact of technology on the automotive business was recently underlined as Ford replaced its CEO with the person who had been heading up its self-driving car business. Most of the content in this article is in the video.

And finally, anyone want an autonomous robot security guard with a built-in drone?


Retail Automation: Stranded Workers? Opportunities and risks for labor and automation by IRRC Institute (pdf)

The retail landscape is experiencing unprecedented change in the face of disruptive forces, one of the most recent and powerful being the rapid rise of automation in the sector. The World Economic Forum predicts that 30-50% of retail jobs are at risk once known automation technologies are fully incorporated. This would result in the loss of about 6 million retail jobs and represents a greater percentage reduction than the manufacturing industry experienced. Using the Osborne and Frey study with the Bureau of Labor Statistics, the analysis suggests that more than 7.5 million jobs are at high risk of computerization. A large proportion of the human capital represented by the retail workforce is therefore at risk of becoming “stranded workers.”


Report: How artificial intelligence will impact journalism via AP Insights

Streamlining workflows, automating mundane tasks, crunching more data, digging out insights and generating additional outputs are just a few of the mega-wins that can result from putting smart machines to work in the service of journalism.

Innovators throughout the news industry are collaborating with technology companies and academic researchers to push the envelope in a number of related areas, affecting all points on the news value chain from news gathering to production and distribution.


AI in the newsroom: What’s happening and what’s next? via Google

“There is a lot of work to do, but it’s about the mindset,” D’Aniello said. “Technology was seen as a disruptor of the newsroom, and it was difficult to introduce things. I don’t think this is the case anymore. The urgency and the need is perceived at the editorial level.”



Announcing Google Cloud Video Intelligence API, and more Cloud Machine Learning updates via Google

Cloud Video Intelligence API (now in Private Beta) uses powerful deep-learning models, built using frameworks like TensorFlow and applied on large-scale media platforms like YouTube. The API is the first of its kind, enabling developers to easily search and discover video content by providing information about entities (nouns such as “dog,” “flower” or “human” or verbs such as “run,” “swim” or “fly”) inside video content. It can even provide contextual understanding of when those entities appear; for example, searching for “Tiger” would find all precise shots containing tigers across a video collection in Google Cloud Storage.


Humans and Robots: Google I/O and Self-Driving Bin Lorries

It’s Google’s big developer conference this week – I/O. So far centre stage has been given over to Artificial Intelligence and Machine Learning.

A number of articles have been published, some of which I’ve highlighted below, but I can summarise all of them with this one quote:

“We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world.”

Sundar Pichai, CEO, Google

For many, the shift to mobile has made little impact on their day-to-day work; it’s had far more impact on their personal life. The switch to AI-first will have a massive impact across both our work and personal lives.

The keynote for I/O was just under two hours long, but thankfully The Verge have put together a 10-minute video of the highlights:

Also, Volvo have announced that they are working on a system for self-driving refuse collection lorries. This is yet another self-driving initiative, but one with a specific purpose in mind. Instead of trying to solve the generic problem of self-driving vehicles in all contexts, this project seeks to enable self-driving in the urban refuse collection context. Historically, targeted innovations like this one have been adopted before more generic innovations like self-driving cars:


Making AI work for everyone via Google

We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.


Partnering on machine learning in healthcare via Google

Our researchers at Google have shown over the past year how our machine learning can help clinicians detect breast cancer metastases in lymph nodes and screen for diabetic retinopathy. We’re working with Alphabet’s Verily arm and other biomedical partners to translate these research results into practical medical devices, such as a tool to help prevent blindness in patients with diabetes.

Now we’re ready to do more: machine learning is mature enough to start accurately predicting medical events—such as whether patients will be hospitalized, how long they will stay, and whether their health is deteriorating despite treatment for conditions such as urinary tract infections, pneumonia, or heart failure.
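
As a back-of-the-envelope illustration of what “predicting medical events” means technically, here is a toy classifier on synthetic data. It sketches the general technique only; the features, data and model are invented and have nothing to do with Google’s actual work.

```python
# Toy "will this patient be hospitalised?" classifier on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Invented features: age, prior admissions, days since last visit.
X = np.column_stack([
    rng.integers(18, 95, n),
    rng.poisson(1.5, n),
    rng.integers(0, 365, n),
])
# Synthetic label: older patients with more prior admissions are
# more likely to be hospitalised again.
risk = 0.02 * X[:, 0] + 0.8 * X[:, 1] - 0.005 * X[:, 2]
y = (risk + rng.normal(0, 1, n) > 3).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```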


Why Google’s CEO Is Excited About Automating Artificial Intelligence via MIT Technology Review

Machine-learning experts are in short supply as companies in many industries rush to take advantage of recent strides in the power of artificial intelligence. Google’s CEO says one solution to the skills shortage is to have machine-learning software take over some of the work of creating machine-learning software.

Humans and Robots: What are you worried about? Machines in the home?

When it comes to risk, the things that we rank highest in the machine learning world are physical things, in the UK at least. In a survey commissioned by the Royal Society it’s self-driving cars and machines in the home that we assign the highest risk to; we don’t think that health diagnosis technologies pose the same level of risk.

I find that intriguing, but not surprising: we are unnerved by the things that are physically close, but not by the things that are hidden.

Whilst we are worrying about being physically harmed by self-driving cars, we don’t worry about predictive policing, which could have a much greater impact on our society. This is a common problem: people aren’t very good at recognising the impact of things that are hidden because they are too blinded by the thing they can see. Like the conjurer’s misdirection, we are too busy looking one way to see the thing that has just happened directly in front of us.

Another interesting statement from the survey:

Results from the UK’s first in-depth assessment of public views on machine learning – carried out by the Royal Society and Ipsos MORI – demonstrate that while most people have not heard the term ‘machine learning’ (only 9% have), the vast majority have heard about or used at least one of its applications.

In other words, machine learning is having a significant impact on people’s lives even if they don’t recognise it.

This survey on social risk was published within a few days of an announcement by Durham Police (UK) that they are going to use artificial intelligence to help decide whether or not a suspect should be kept in custody. How would you rate the social risk of such a system? I suspect that it depends on your background and how you regard the police.

It’s not really got anything to do with today’s theme, but I was quite intrigued to see that Google is setting its AI sights on musical instrumentation with a Neural Synthesizer, or NSynth. I’ve always been fascinated by the intersections of art and technology; pioneering artists have always embraced new technology to enable them to express their art. Music has been at the forefront of that pioneering, so it will be interesting to see how musicians use these new technologies.
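
Conceptually, NSynth encodes instrument sounds into embedding vectors with a WaveNet autoencoder and decodes blends of those vectors back into audio. The fragment below sketches just the blending step, with random vectors standing in for real NSynth embeddings:

```python
import numpy as np

rng = np.random.default_rng(7)

# Random stand-ins for NSynth-style embeddings of two instrument notes.
flute_embedding = rng.normal(size=16)
bass_embedding = rng.normal(size=16)

def blend(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Linear interpolation between two embeddings, t in [0, 1]."""
    return (1 - t) * a + t * b

# Halfway between "flute" and "bass"; in the real system this vector
# would be decoded back into audio by the WaveNet decoder.
hybrid = blend(flute_embedding, bass_embedding, 0.5)
print(hybrid[:4])
```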


People are scared of artificial intelligence for all the wrong reasons via Quartz

People in Britain are more scared of the artificial intelligence embedded in household devices and self-driving cars than in systems used for predictive policing or diagnosing diseases. That’s according to a survey commissioned by the Royal Society, which is billed as the first in-depth look at how the public perceives the risks and benefits associated with machine learning, a key AI technique.


Durham Police AI to help with custody decisions via BBC

The system classifies suspects at a low, medium or high risk of offending and has been tested by the force.

It has been trained on five years of offending histories data.

One expert said the tool could be useful, but the risk that it could skew decisions should be carefully assessed.
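
For a sense of the general approach the BBC describes (a classifier over offending histories producing low/medium/high bands), here is a toy sketch. Everything in it is invented, it is not the Durham tool, and the expert’s caution in the quote applies with full force to sketches like this: skewed training data produces skewed decisions.

```python
# Toy low/medium/high risk-band classifier on invented features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 500
# Invented features: prior offences, age, months since last offence.
X = np.column_stack([
    rng.poisson(2, n),
    rng.integers(18, 70, n),
    rng.integers(0, 120, n),
])
# Synthetic bands (0=low, 1=medium, 2=high), loosely tied to priors.
y = np.clip(X[:, 0] // 2, 0, 2)

model = RandomForestClassifier(random_state=1).fit(X, y)
print(model.predict([[5, 23, 3]]))  # most likely the "high" band
```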


Google’s creating sounds you’ve never heard before via Mashable

To create music, NSynth uses a dataset containing sounds from individual instruments and then blends them to create hybrid sounds. According to the company’s announcement, NSynth gives artists “intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”

Humans and Robots: Augmented Productivity

For a few million years we’ve been augmenting our productivity with tools. Those tools helped us to catch more meat and to fight our enemies; in other words, they made us productive. We continue to augment our productivity with new tools that help us meet modern-day productivity needs. Whilst productivity itself is a simple measure of input, added value and output, it’s not always easy to define what the added value is. How people add value is going to be a key question as the available tools change dramatically and we transform the meaning of productivity in the coming years.

There have been a number of items highlighting these new tools over the last few days:

  • The MIT Technology Review is reporting on the impact of augmented reality on healthcare, and the Operating Room in particular. The key thing here is that the information is presented within the context of the operating environment it augments.
  • Improbable has secured a $500m investment to help it continue to develop its simulation technologies. As Virtual Reality and Augmented Reality devices become more mainstream, there’s the potential for a huge market in creating the simulations that bring those devices to life.
  • Cisco, Google and Microsoft have all made announcements aimed at augmenting today’s office productivity environment with various uses of AI.
  • And someone decided to make a robot that looks and moves like a spider (but with only six legs, so no need to worry) 🙂

AR Is Making Its Way into the OR via MIT Technology Review

Doctors may soon be able to augment their view of your body, but it will be some time before it’s commonplace.

“Scalpel. Forceps. Suction. Oh, and nurse, pass me the HoloLens.”


If we’re living in a simulation, this UK startup probably built it via Wired

Improbable’s platform, SpatialOS, is designed to let anyone build massive agent-based simulations, running in the cloud: imagine Minecraft with thousands of players in the same space, or researchers creating simulated cities to model the behaviour of millions. Its ultimate goal: to create totally immersive, persistent virtual worlds, and in doing so, change how we make decisions.

Or more simply, as Narula often jokes, “Basically, we want to build the Matrix.”
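
If you have never met an agent-based simulation, here is the idea at its absolute simplest (it bears no relation to SpatialOS’s actual API): cars on a circular one-lane road advance only when the cell ahead is free, and traffic jams emerge from that single local rule.

```python
import random

random.seed(3)

# Minimal agent-based traffic model: many simple agents, one local
# rule, emergent jams. ROAD cells on a loop, CARS of them occupied.
ROAD, STEPS, CARS = 60, 30, 25
positions = set(random.sample(range(ROAD), CARS))

for _ in range(STEPS):
    # Synchronous update: every car reacts to the same snapshot.
    positions = {
        (p + 1) % ROAD if (p + 1) % ROAD not in positions else p
        for p in positions
    }

print("".join("#" if i in positions else "." for i in range(ROAD)))
```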


How machine learning in G Suite makes people more productive via Google Enterprise Blog

According to a Google study in 2015, the average worker spends only about 5 percent of his or her time actually coming up with the next big idea. The rest of our time is caught in the quicksand of formatting, tracking, analysis or other mundane tasks. That’s where machine learning can help.


Transforming Collaboration Through Artificial Intelligence with Cisco’s Acquisition of MindMeld

Artificial Intelligence represents a tremendous opportunity to expand the reach and enhance the capabilities of enterprise technology. At Cisco, we have already been introducing AI into our solutions across security, orchestration, application performance and collaboration. Today, I’m excited to share Cisco’s intent to acquire MindMeld Inc., a San Francisco-based company that has developed a conversational platform based on natural language understanding (NLU). This acquisition, Cisco’s third in two weeks, represents how the buy pillar of our innovation strategy continues to impact our strategic shift to become more of a software company.


Microsoft’s Presentation Translator translates presentations in real time via TechCrunch

The Presentation Translator can automatically provide real-time translated subtitles or translate the text of their actual PowerPoint presentation while still preserving the original formatting.

In its current iteration, the service supports Arabic, Chinese, English, French, German, Italian, Japanese, Portuguese, Russian and Spanish. While the focus here is on translation, you also could use the same service to caption a presentation for audience members who are deaf or hard of hearing.


Man’s homemade robot spider looks real and we are sufficiently freaked out via Mashable

Humans and Robots: AI, AI, AI, AI

There have been several Artificial Intelligence (AI) articles over the last couple of days.

A number of these have been commentaries on research put out by Gartner. The simplified story within the Gartner research is that things professionals do today will be done by AI at a significantly lower cost at some point in the future. Once that happens, those things can be regarded as utilities, like electricity. I don’t think there is any news in this; it’s been the general trajectory for some time. The unknown is the speed and nature of that shift. Gartner is going for 2022, by which they really mean something like “within around 5 years”.

(One of the things you need to understand about Gartner is that people listen to them, so when they report something it’s worth taking note, even if it’s just to understand where Gartner readers like CIOs and CTOs may be coming from in the future.)

Interestingly, that electricity utility thought is also one of the key points raised by Stowe Boyd in A Q&A with Erica Morphy, where he quotes Andrew Ng as saying “AI is the new electricity”.

To further underline that thought, both ServiceNow and Grammarly made AI-related announcements. ServiceNow are focusing their AI attention on the automation of work. Grammarly is raising money to help augment our language skills.

Oh, and also, Amazon released another personal assistant based on Alexa, the Echo Show. This time the Echo has been given a screen.


Gartner Says Artificial Intelligence Could Turn Some Skilled Practices Into Utilities

“The economics of AI and machine learning will lead to many tasks performed by professionals today becoming low-cost utilities,” said Stephen Prentice, vice president and Gartner Fellow. “AI’s effects on different industries will force the enterprise to adjust its business strategy. Many competitive, high-margin industries will become more like utilities as AI turns complex work into a metered service that the enterprise pays for, like electricity.”


A Q&A with Erica Morphy

“we have to learn to dance with the robots, not to run away from them. But that means we have to develop AI that is dance-withable, and not unrunnable-away-from.”


ServiceNow launches machine learning, AI automation engine

The four main use cases for ServiceNow’s automation efforts include:

  1. Anomaly detection to prevent outages in IT departments. ServiceNow will apply algorithms to find patterns and outliers that can lead to an outage. Anomalies can also be correlated with past events and workflows.
  2. Routing and categorizing of work. Learning algorithms will automatically route work based on past patterns. Tasks such as assessing risk, assigning owners, and categorizing work will be automated.
  3. Performance predictions. The Intelligent Automation Engine can be used to set a performance goal and data profile and get predictive analytics on hitting goals.
  4. Benchmarks vs. peers. ServiceNow is using the automation engine to compare companies to their industries and peers to gauge efficiency and make recommendations.
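
As a toy illustration of the first use case, anomaly detection on a metric stream can be as simple as a rolling z-score. The data below is invented, and ServiceNow’s engine is of course far more sophisticated than this:

```python
# Flag outliers in a response-time stream using a rolling z-score.
from statistics import mean, stdev

response_times_ms = [102, 98, 105, 99, 101, 97, 103, 100, 240, 104]
WINDOW, THRESHOLD = 5, 3.0

for i in range(WINDOW, len(response_times_ms)):
    window = response_times_ms[i - WINDOW:i]
    mu, sigma = mean(window), stdev(window)
    z = (response_times_ms[i] - mu) / sigma if sigma else 0.0
    if abs(z) > THRESHOLD:
        print(f"sample {i} ({response_times_ms[i]} ms) looks anomalous (z={z:.1f})")
```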

Grammarly raises $110 million in its first ever funding round

The company’s pitch centers on its machine learning capabilities. It claims this technology can dig into the substance of users’ writing in a way that’s not possible with Microsoft Word or other autocorrect programs.

Grammarly says it can advise not only on proper grammatical structure but on tone and word selection as well.


Amazon’s ‘Echo Show’ Gives Alexa the Touchscreen It Needed

Humans and Robots: Large Scale Changes from Many Smaller Scale Changes

The Institute for Public Policy Research Scotland has been looking at the impact of automation on employment in Scotland. Their estimate is that 46% of jobs are at high risk of automation. They also identify the primary challenge with this shift as being people’s ability to gain new skills whilst in employment, mid-career.

It’s not at all clear whether human skills can change at a rate that will allow us to outpace AI skills. There are differing views (see also Humans and Robots: Skills, Manufacturing and Construction) on this but I’m not sure we’ve got any choice but to try.

Many societies have been through these changes before, but not at this scale or this pace.

Whilst large changes are being predicted, the big shift will be made up of millions of smaller changes. One example of this is the set of Artificial Intelligence integrations that Microsoft are making in Microsoft Office. From design advice in PowerPoint to the Focused Inbox in Outlook, these automations will soon become second nature to how we work. You’re already dependent upon the AI in the spelling and grammar capabilities. Driving all of these enhancements is AI that Microsoft are training with data from over 100 million Office 365 users.

Also, there’s a little word from Dilbert at the end.


Automation poses a high risk to 1.2m Scottish jobs, report says

It put forward the recommendations in its Scotland’s Skills 2030 report, which said: “The world of work in 2030 will be very different to that in 2017. People are more likely to be working longer, and will often have multiple jobs, with multiple employers and in multiple careers.

“Over 2.5 million adults in Scotland (nearly 80%) will still be of working age by 2030. At the same time, over 46% of jobs (1.2 million) in Scotland are at high risk of automation.

“We will therefore need a skills system ready to work with people throughout their careers.


Microsoft and Artificial Intelligence’s long relationship is about to deepen