One of the core skills we have as humans is the ability to recognise and categorise the things that we see. Robots’ ability to do this has advanced significantly in recent years, as the TED Talk by Joseph Redmon demonstrates:
As robots continue to gain skills, a number of people are advocating that the United Nations should ban robots that kill:
Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.
We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.
We have a long history of weaponising technological advances, perhaps as long as human history itself. Once you remove humans from the field of war, the moral calculus changes significantly. What’s to stop an ever-escalating conflict when there is little moral pressure to stop?
If warring robots are a scary thought, how about dancing ones? Guinness World Records recently published this video of dancing Dobi robots, 1,069 in all:
Some of the people closest to the ongoing robotic revolution have looked at what’s coming and decided that it’s time to seek an alternative.
Until a couple of years ago, Antonio Garcia Martinez was living the dream life: a tech start-up guy in Silicon Valley, surrounded by hip young millionaires and open-plan offices.
He’d sold his online ad company to Twitter for a small fortune, and was working as a senior exec at Facebook (an experience he wrote up in his best-selling book, Chaos Monkeys). But at some point in 2015, he looked into the not-too-distant future and saw a very bleak world, one that was nothing like the polished utopia of connectivity and total information promised by his colleagues.
“I’ve seen what’s coming,” he told me when I visited him recently for BBC Two’s Secrets of Silicon Valley. “And it’s a big self-driving truck that’s about to run over this economy.”
Most reported opinions on the future present it as a fork in the road, with one way leading to Utopia and the other to Dystopia. I’m sure there are plenty of opinions somewhere in the middle, but they tend not to get much airtime, probably because they don’t make very good copy.
A middle road is the most likely outcome, with parts of Utopia mixed with parts of Dystopia. I’m currently listening to a long book on the history of England, and one of the things I’m learning from it is that good times and bad times generally live side by side.
One area that has already seen significant automation is air travel. The pilot may ultimately be in charge, but the systems available to them make them mostly redundant for most of the journey. Yet there is something reassuring about knowing that there is a human at the front making sure everything is going well. How would you feel about travelling in a plane without a pilot?
UBS analysts expect the effort to familiarize the public with commercial self-piloting crafts will begin at that 2025 target date with autonomous cargo planes, which could demonstrate how the systems can safely fly from point A to B without a hitch. A next step could be to remove pilots gradually, shifting from a two-person cockpit to one person monitoring the system before phasing out humans entirely.
Last week I was on a remote island in the Outer Hebrides. We had WiFi at our cottage, but it was slow, and we didn’t have any mobile signal. There was a mobile signal in the nearby town, 3 miles away, but again it was slow and covered only a small portion of the island. It was a great reminder of how much we take connectivity for granted, and that for much of the world that assumption is invalid.
Whilst I was away, though, a number of AI, Machine Learning and Robotics related things happened:
As seems to be the case for all technologies, a point is reached where people need to talk about the negative aspects, including the hype levels of the current hot tech. That has been the case this last week, with MIT Technology Review and others running stories. The MIT Technology Review piece looks specifically at IBM Watson and the rate of progress it is perceived to be making in healthcare. Progress in any technology is rarely a smooth ride, and some high-visibility failures are normal.
The MIT Technology Review has also been looking at the progress GE is making by using AI and Machine Learning. GE is going through a huge transformation that will embed advanced technologies and robotics into many of its products. This transformation is also radically changing the way people work, with people taking on what would previously have been two separate roles. That role change is already happening, and I’ve got a post brewing on it.
“in the last 60 years automation has only eliminated one occupation: elevator operators.”
I’m not sure that’s really true, but I get the point. Aside from the statistics, the core question people want answered is: “what can I do to prepare?” It’s not an easy question to answer; the only sure thing is that change is going to happen, and humans have adapted to change for hundreds of thousands of years. That ability to adapt is what’s going to be key in the future, and one way of being adaptable is to diversify and be able to take on multiple roles.
Jacques Mattheij had a dilemma: how to sort 2 metric tonnes of Lego. He solved it with some Lego (what else?), some hardware, Python and a neural network. Although there won’t be many people with 2 metric tonnes of Lego, I’m sure many of us would love to be able to sort the boxes that we do have.
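Mattheij’s write-up describes a neural network classifying parts from camera images. As a rough illustration of what that classification step could look like (not his actual code; the model file and part classes below are hypothetical stand-ins), here’s a minimal Keras sketch:

```python
# Minimal sketch of the image-classification step in a Lego sorter.
# Not Mattheij's actual code; the model file and class list are hypothetical.
import numpy as np
from tensorflow import keras

CLASSES = ["brick_2x4", "plate_1x2", "technic_pin"]  # hypothetical part classes

model = keras.models.load_model("lego_classifier.h5")  # pre-trained CNN (assumed)

def classify_part(image_path):
    """Return the most likely part class for a photo of a single piece."""
    img = keras.utils.load_img(image_path, target_size=(224, 224))
    x = keras.utils.img_to_array(img) / 255.0           # scale pixels to [0, 1]
    probs = model.predict(np.expand_dims(x, axis=0))[0]
    return CLASSES[int(np.argmax(probs))]

print(classify_part("part_0001.jpg"))
```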
None of those companies has garnered anywhere near the attention that Watson has, thanks to its victory on the television quiz show Jeopardy! in 2011 and zealous marketing by IBM ever since. But lately, much of the press for Watson has been bad. A heavily promoted collaboration with the M.D. Anderson Cancer Center in Houston fell apart this year. As IBM’s revenue has swooned and its stock price has seesawed, analysts have been questioning when Watson will actually deliver much value. “Watson is a joke,” Chamath Palihapitiya, an influential tech investor who founded the VC firm Social Capital, said on CNBC in May.
When Jason Nichols joined GE Global Research in 2011, soon after completing postdoctoral work in organic chemistry at the University of California, Berkeley, he anticipated a long career in chemical research. But after four years creating materials and systems to treat industrial wastewater, Nichols moved to the company’s machine-learning lab. This year he began working with augmented reality. Part chemist, part data scientist, Nichols is now exactly the type of hybrid employee crucial to the future of a company working to inject artificial intelligence into its machines and industrial processes.
Today’s technological revolution is an entirely different beast from the industrial revolution. The pace of change is exponentially faster and far wider in scope. As Stanford University academic Jerry Kaplan writes in Humans Need Not Apply: today, automation is “blind to the color of your collar.” It doesn’t matter whether you’re a factory worker, a financial advisor or a professional flute-player: automation is coming for you.
In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.
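To make that definition concrete, here’s a minimal, illustrative scikit-learn example: the model identifies patterns in labelled examples, then makes predictions for data it hasn’t seen before.

```python
# Minimal example of statistical learning: learn patterns from labelled
# data, then predict labels for unseen data (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)  # identify patterns
print("accuracy:", model.score(X_test, y_test))         # make predictions
```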
Computer skills to the rescue! A first proof of concept was built of – what else – lego. This was hacked together with some python code and a bunch of hardware to handle the parts. After playing around with that for a while it appeared there were several basic problems that needed to be solved, some obvious, some not so obvious. A small collection:
Each of the roles and the analysis undertaken is debatable, and a 2017 perspective almost certainly changes many of them, but it’s still a fun exercise to think through the impact of the robots on the many roles that people undertake.
The answer for Taxi Drivers and Chauffeurs is that your job has an 89% probability of automation and that the robots are watching.
I picked on Taxi Drivers and Chauffeurs because it’s been an interesting week for car technology.
Yandex, which is Russia’s equivalent of Google and Uber, has joined the race to create an autonomous vehicle with a project named Yandex.Taxi, aiming for Level 5 autonomy (the highest level defined by the SAE), also known as Full Automation.
They already have a demo available:
Cadillac are testing technology that further integrates vehicles into the infrastructure in which they operate, using vehicle-to-infrastructure (V2I) communication. The same technology is also being developed to allow cars to talk to each other. In its latest announcement, Cadillac demonstrates how a vehicle can talk to traffic lights so that the driver knows when the lights are going to change. At one level I think this is a great idea; at another I can see all sorts of challenges. For me the greatest challenge is nearly always the humans: we have a wonderful ability to outsource our responsibilities to the technology, and the smarter the technology becomes, the lower our feeling of responsibility will become. “Sorry officer, I ran the red light because my car failed to stop.”
The driverless car incorporates Yandex’s own technologies some of which, such as mapping, real-time navigation, computer vision and object recognition, have been functioning in a range of the company’s services for years. The self-driving vehicle’s ability to ‘make decisions’ in complex environments, such as busy city traffic, is ensured by Yandex’s proprietary computing algorithms, artificial intelligence and machine learning.
Cadillac tested out the V2I system by rigging two traffic signals near the GM Warren Technical Center campus to send data to its demo CTS vehicles. The automaker said the stop lights were able to use Dedicated Short-Range Communications (DSRC) protocol — which is the same system used for inter-car V2V communication — to send data to the cars about when the light would turn red.
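The real messages follow the SAE J2735 standard (its Signal Phase and Timing, or SPaT, message). The sketch below is a deliberately simplified, hypothetical rendering of the idea, not the actual wire format:

```python
# Hypothetical, simplified version of a V2I signal-timing broadcast.
# Real DSRC / SAE J2735 SPaT messages are binary-encoded and far richer.
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    intersection_id: str      # which junction is broadcasting
    current_phase: str        # "green", "amber" or "red"
    seconds_to_change: float  # countdown until the phase changes

def should_warn_driver(msg: SignalPhaseMessage, eta_seconds: float) -> bool:
    """Warn if the light will have changed by the time the car arrives."""
    return msg.current_phase != "red" and eta_seconds > msg.seconds_to_change

msg = SignalPhaseMessage("warren-campus-01", "green", 4.0)
print(should_warn_driver(msg, eta_seconds=6.5))  # True: light changes first
```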
While we may not be at the point of singularity yet, the growing capability of AI to make decisions, learn and correct its own decision-making process does seem to raise moral, ethical, social and security concerns. Consider the dilemmas being confronted now around self-driving cars. Insurance companies are questioning who owns the liability and risks, who carries the insurance policy. Developers are faced with unimaginable decisions about whose life gets saved in a deadly collision.
If I could think of a way to present it, I would create a chart showing the various predictions for the job creation and job losses that the robots are going to cause. One thing we can be sure of for nearly all of these predictions is that they are likely to be wrong in detail but correct in concept.
The latest prediction is one for retail, using a World Economic Forum figure of 30%-50% of retail jobs being at risk from known automation capabilities. The challenge with this figure for most developed economies is that more people are employed in retail than in manufacturing, and many of us know the repercussions of the manufacturing shift; the predicted change is even greater than the one manufacturing experienced.
You might think that retail is at risk because it is easy to automate. But what about journalism? Earlier this month Google brought together a number of journalists to talk about the impact of AI in the newsroom. The meeting discussed a report by the Associated Press, “How artificial intelligence will impact journalism”. Google were highlighting their Google News Lab, which they developed “to support the creation and distribution of the information that keeps us all informed about what’s happening in our world today—quality journalism.” Fake news has, of course, been a huge subject recently. I’m not so much concerned about outright fake news, which is pretty easy to check; I’m more concerned by the potential for AI to create narrow news, where only the statistically high-ranking items become news.
In a more specific case for AI, the UK Met Office has been talking about its use of AI to help predict the weather, something of an obsession for this island nation. That obsession is underlined by the Met Office being one of the UK’s largest users of supercomputing.
The retail landscape is experiencing unprecedented change in the face of disruptive forces, one of the most recent and powerful being the rapid rise of automation in the sector. The World Economic Forum predicts that 30-50% of retail jobs are at risk once known automation technologies are fully incorporated. This would result in the loss of about 6 million retail jobs and represents a greater percentage reduction than the manufacturing industry experienced. Using the Osborne and Frey study with Bureau of Labor Statistics data, the analysis suggests that more than 7.5 million jobs are at high risk of computerization. A large proportion of the human capital represented by the retail workforce is therefore at risk of becoming “stranded workers.”
Streamlining workflows, automating mundane tasks, crunching more data, digging out insights and generating additional outputs are just a few of the mega-wins that can result from putting smart machines to work in the service of journalism.
Innovators throughout the news industry are collaborating with technology companies and academic researchers to push the envelope in a number of related areas, affecting all points on the news value chain from news gathering to production and distribution.
“There is a lot of work to do, but it’s about the mindset,” D’Aniello said. “Technology was seen as a disruptor of the newsroom, and it was difficult to introduce things. I don’t think this is the case anymore. The urgency and the need is perceived at the editorial level.”
Cloud Video Intelligence API (now in Private Beta) uses powerful deep-learning models, built using frameworks like TensorFlow and applied on large-scale media platforms like YouTube. The API is the first of its kind, enabling developers to easily search and discover video content by providing information about entities (nouns such as “dog,” “flower” or “human” or verbs such as “run,” “swim” or “fly”) inside video content. It can even provide contextual understanding of when those entities appear; for example, searching for “Tiger” would find all precise shots containing tigers across a video collection in Google Cloud Storage.
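For the curious, here’s roughly what calling the API looks like from Python. The client library has evolved since the private beta, so treat this as a sketch based on the current library’s shape; the bucket path is a placeholder.

```python
# Sketch of label detection with the Cloud Video Intelligence API.
# The bucket path is a placeholder; the API was in beta when this was written.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()
operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder
    }
)
result = operation.result(timeout=300)  # blocks until annotation completes

# Print each entity (e.g. "dog", "run") detected across the whole video.
for label in result.annotation_results[0].segment_label_annotations:
    print(label.entity.description)
```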
It’s Google’s big developer conference this week – I/O. So far centre stage has been given over to Artificial Intelligence and Machine Learning.
A set of articles has been published, some of which I’ve highlighted below, but I can summarise all of them with this one quote:
“We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world.”
Sundar Pichai, CEO, Google
For many, the shift to mobile has made little impact on their day-to-day work; it’s had far more impact on their personal lives. The switch to AI-first will have a massive impact across both our work and personal lives.
The keynote for I/O was just under 2 hours long, but thankfully The Verge have put together a 10 minute video of the highlights:
Also, Volvo have announced that they are working on a system for self-driving refuse collection lorries. This is yet another self-driving initiative, but one with a specific purpose in mind. Instead of trying to solve the generic problem of self-driving vehicles in all contexts, this project seeks to enable self-driving in the urban refuse collection context. Historically, targeted innovations like this one have been adopted before more generic innovations like self-driving cars:
We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.
Our researchers at Google have shown over the past year how our machine learning can help clinicians detect breast cancer metastases in lymph nodes and screen for diabetic retinopathy. We’re working with Alphabet’s Verily arm and other biomedical partners to translate these research results into practical medical devices, such as a tool to help prevent blindness in patients with diabetes.
Now we’re ready to do more: machine learning is mature enough to start accurately predicting medical events—such as whether patients will be hospitalized, how long they will stay, and whether their health is deteriorating despite treatment for conditions such as urinary tract infections, pneumonia, or heart failure.
Machine-learning experts are in short supply as companies in many industries rush to take advantage of recent strides in the power of artificial intelligence. Google’s CEO says one solution to the skills shortage is to have machine-learning software take over some of the work of creating machine-learning software.
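Google’s AutoML research goes much further than this, but a primitive form of software configuring machine-learning software has been around for a while: automated hyperparameter search. A minimal, illustrative sketch with scikit-learn:

```python
# A primitive ancestor of the "ML creating ML" idea: automated search
# over model hyperparameters rather than a human tuning them by hand.
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.0001]},
    cv=3,  # 3-fold cross-validation scores each candidate
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```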
When it comes to risk, the things we rank highest in the machine learning world are physical things, in the UK at least. In a survey commissioned by the Royal Society, it’s self-driving cars and machines in the home that we assign the highest risk to; we don’t think health diagnosis technologies pose the same level of risk.
I find that intriguing, but not surprising: we are unnerved by the things that are physically close, but not by the things that are hidden.
Whilst we worry about being physically harmed by self-driving cars, we don’t worry about predictive policing, which could have a much greater impact on our society. This is a common problem: people aren’t very good at recognising the impact of things that are hidden because they are too blinded by the things they can see. Like the conjurer’s misdirection, we are too busy looking one way to see the thing that has just happened directly in front of us.
Another interesting statement from the survey:
Results from the UK’s first in-depth assessment of public views on machine learning – carried out by the Royal Society and Ipsos MORI – demonstrate that while most people have not heard the term ‘machine learning’ (only 9% have), the vast majority have heard about or used at least one of its applications.
In other words, machine learning is having a significant impact on people’s lives even if they don’t recognise it.
This survey on social risk was published within a few days of an announcement by Durham Police (UK) that they are going to use artificial intelligence to help decide whether or not a suspect should be kept in custody. How would you assess the social risk of such a system? I suspect it depends on your background and how you regard the police.
It hasn’t really got anything to do with today’s theme, but I was quite intrigued to see that Google is setting its AI sights on musical instrumentation with a Neural Synthesizer, or NSynth. I’ve always been fascinated by the intersections of art and technology; pioneering artists have always embraced new technology to enable them to express their art. Music has been at the forefront of that pioneering, so it will be interesting to see how musicians use these new technologies.
People in Britain are more scared of the artificial intelligence embedded in household devices and self-driving cars than in systems used for predictive policing or diagnosing diseases. That’s according to a survey commissioned by the Royal Society, which is billed as the first in-depth look at how the public perceives the risks and benefits associated with machine learning, a key AI technique.
To create music, NSynth uses a dataset containing sounds from individual instruments and then blends them to create hybrid sounds. According to the company’s announcement, NSynth gives artists “intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
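Conceptually, NSynth encodes each instrument’s sound into a latent vector and interpolates between vectors before decoding back to audio. The sketch below illustrates only the interpolation idea; the encode function is a toy stand-in, not Magenta’s actual NSynth API.

```python
# Conceptual sketch of NSynth-style blending: interpolate between two
# sounds' latent codes. encode() is a toy stand-in for a neural encoder.
import numpy as np

def encode(audio: np.ndarray) -> np.ndarray:
    """Stand-in for a neural encoder mapping audio to a latent vector."""
    return np.fft.rfft(audio).real[:16]  # toy 16-dimensional "embedding"

def blend(audio_a, audio_b, alpha=0.5):
    """Mix two sounds in latent space; alpha controls the balance."""
    z_a, z_b = encode(audio_a), encode(audio_b)
    return (1 - alpha) * z_a + alpha * z_b  # hybrid latent code

flute = np.random.randn(1024)  # placeholder audio buffers
sitar = np.random.randn(1024)
print(blend(flute, sitar, alpha=0.3)[:4])  # a mostly-flute hybrid code
```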
For a few million years we’ve been augmenting our productivity with tools. Those tools helped us to catch more meat and to fight our enemies; in other words, they made us productive. We continue to augment our productivity with new tools that help us meet modern-day productivity needs. Whilst productivity itself is a simple measure of input, added value and output, it’s not always easy to define what the added value is. How people add value is going to be a key question as we transform the meaning of productivity in the coming years, as the tools available change dramatically.
There have been a number of items highlighting these new tools over the last few days:
The MIT Technology Review is reporting on the impact of augmented reality on healthcare and the Operating Room in particular. The key thing here is that the information is augmenting the operating environment within the context of the operating environment.
Improbable has secured a $500m investment to help it continue to develop its simulation technologies. As Virtual Reality and Augmented Reality devices become more mainstream, there’s the potential for a huge market in creating the simulations that bring those devices to life.
Cisco, Google and Microsoft have all made announcements aimed at augmenting today’s office productivity environment with various uses of AI.
And someone decided to make a robot that looks and moves like a spider (but with only six legs, so no need to worry) 🙂
Improbable’s platform, SpatialOS, is designed to let anyone build massive agent-based simulations, running in the cloud: imagine Minecraft with thousands of players in the same space, or researchers creating simulated cities to model the behaviour of millions. Its ultimate goal: to create totally immersive, persistent virtual worlds, and in doing so, change how we make decisions.
Or more simply, as Narula often jokes, “Basically, we want to build the Matrix.”
According to a Google study in 2015, the average worker spends only about 5 percent of his or her time actually coming up with the next big idea. The rest of our time is caught in the quicksand of formatting, tracking, analysis or other mundane tasks. That’s where machine learning can help.
Artificial Intelligence represents a tremendous opportunity to expand the reach and enhance the capabilities of enterprise technology. At Cisco, we have already been introducing AI into our solutions across security, orchestration, application performance and collaboration. Today, I’m excited to share Cisco’s intent to acquire MindMeld Inc., a San Francisco-based company that has developed a conversational platform based on natural language understanding (NLU). This acquisition, Cisco’s third in two weeks, represents how the buy pillar of our innovation strategy continues to impact our strategic shift to become more of a software company.
The Presentation Translator can automatically provide real-time translated subtitles or translate the text of their actual PowerPoint presentation while still preserving the original formatting.
In its current iteration, the service supports Arabic, Chinese, English, French, German, Italian, Japanese, Portuguese, Russian and Spanish. While the focus here is on translation, you also could use the same service to caption a presentation for audience members who are deaf or hard of hearing.
There have been several Artificial Intelligence (AI) articles over the last couple of days.
A number of these have been commentaries on some research put out by Gartner. The simplified story within the Gartner research is that things professionals do today will, at some point in the future, be done by AI at a significantly lower cost. Once that happens, those things can be regarded as utilities, like electricity. I don’t think there is any news in this; that’s been the general trajectory for some time. The unknown is the speed and nature of the shift. Gartner is going for 2022, by which they really mean something like “within around 5 years”.
(One of the things you need to understand about Gartner is that people listen to them, so when they report something it’s worth taking note, even if it’s just to understand where Gartner readers like CIOs and CTOs may be coming from in the future.)
Interestingly that electricity utility thought is also one of the key points raised by Stowe Boyd in A Q&A with Erica Morphy where he quotes Andrew Ng as saying “AI is the new electricity”.
To further underline that thought, both ServiceNow and Grammarly have made AI-related announcements. ServiceNow are focusing their AI attentions on the automation of work. Grammarly is raising money to help augment our language skills.
Oh, and also, Amazon released another personal assistant based on Alexa, the Echo Show. This time Echo has been given a screen.
“The economics of AI and machine learning will lead to many tasks performed by professionals today becoming low-cost utilities,” said Stephen Prentice, vice president and Gartner Fellow. “AI’s effects on different industries will force the enterprise to adjust its business strategy. Many competitive, high-margin industries will become more like utilities as AI turns complex work into a metered service that the enterprise pays for, like electricity.”
“we have to learn to dance with the robots, not to run away from them. But that means we have to develop AI that is dance-withable, and not unrunnable-away-from.”
The four main use cases for ServiceNow’s automation efforts include:
Anomaly detection to prevent outages in IT departments. ServiceNow will apply algorithms to find patterns and outliers that can lead to an outage. Anomalies can also be correlated with past events and workflows (a sketch of this technique follows the list).
Routing and categorizing of work. Learning algorithms will automatically route work based on past patterns. Tasks such as assessing risk, assigning owners, and categorizing work will be automated.
Performance predictions. The Intelligent Automation Engine can be used to set a performance goal and data profile and get predictive analytics on hitting goals.
Benchmarks vs. peers. ServiceNow is using the automation engine to compare companies to their industries and peers to gauge efficiency and make recommendations.
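To make the anomaly detection use case concrete, here’s a minimal sketch of one common approach: flagging readings that sit several standard deviations away from a rolling baseline. This illustrates the general technique, not ServiceNow’s implementation.

```python
# Illustration of a simple anomaly-detection approach (not ServiceNow's
# implementation): flag metric readings far from a rolling mean.
import statistics

def find_anomalies(readings, window=10, threshold=3.0):
    """Return indices where a reading deviates more than `threshold`
    standard deviations from the mean of the preceding `window` readings."""
    anomalies = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history) or 1e-9  # avoid divide-by-zero
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

cpu_load = [0.31, 0.30, 0.33, 0.29, 0.32, 0.31, 0.30, 0.33, 0.31, 0.30, 0.95]
print(find_anomalies(cpu_load))  # [10] -- the 0.95 spike
```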
The company’s pitch centers on its machine learning capabilities. It claims this technology can dig into the substance of users’ writing in a way that’s not possible with Microsoft Word or other autocorrect programs.
Grammarly says it can advise not only on proper grammatical structure but on tone and word selection as well.
The Institute for Public Policy Research Scotland has been looking at the impact of automation on employment in Scotland. Their estimate is that 46% of jobs are at high risk of automation. They also identify the primary challenge with this shift as being people’s ability to gain new skills whilst in employment, mid-career.
It’s not at all clear whether human skills can change at a rate that will allow us to outpace AI skills. There are differing views (see also Humans and Robots: Skills, Manufacturing and Construction) on this but I’m not sure we’ve got any choice but to try.
Many societies have been through these changes before, but not at this scale or this pace.
Whilst large changes are being predicted, the big shift will be made up of millions of smaller changes. One example of this is the Artificial Intelligence integration that Microsoft is making in Microsoft Office. From design advice in PowerPoint to the Focused Inbox in Outlook, these automations will soon become second nature to how we work. You’re already dependent upon the AI in the spelling and grammar capabilities. Driving all of these enhancements is AI that Microsoft is training with the data from over 100 million Office 365 users.
Also, there’s a little word from Dilbert at the end.
It put forward the recommendations in its Scotland’s Skills 2030 report, which said: “The world of work in 2030 will be very different to that in 2017. People are more likely to be working longer, and will often have multiple jobs, with multiple employers and in multiple careers.
“Over 2.5 million adults in Scotland (nearly 80%) will still be of working age by 2030. At the same time, over 46% of jobs (1.2 million) in Scotland are at high risk of automation.
“We will therefore need a skills system ready to work with people throughout their careers.
As humans we are pretty good at falling into the trap of believing that what is has always been. We simplify the complexity around us by treating as many things as possible as permanent. Many of the macro systems which define our lives every day are not as permanent or as historic as we treat them.
The idea of going out to a job is only a couple of hundred years old.
While capitalism has been around since the 14th century, industrial capitalism has only been around since the 18th century.
People’s skills, and the way they gain those skills, changed dramatically through that time. The skills we are going to need for the Robot future are likely to be very different from the skills we need today, which almost certainly means that the way we gain skills will change dramatically too. But the big question is: will the Humans be able to keep up? In today’s Humans and Robots we look at some research by the Pew Research Center debating the skills future.
We’ll also look at some of the areas already being impacted by the rise of the Robot:
As robots, automation and artificial intelligence perform more tasks and there is massive disruption of jobs, experts say a wider array of education and skills-building programs will be created to meet new demands. There are two uncertainties: Will well-prepared workers be able to keep up in the race with AI tools? And will market capitalism survive?
This report picks up on five major themes for skills and training in the emerging technology age:
Theme 1: The training ecosystem will evolve, with a mix of innovation in all education formats
Theme 2: Learners must cultivate 21st‑century skills, capabilities and attributes
Theme 3: New credentialing systems will arise as self-directed learning expands
Theme 4: Training and learning systems will not meet 21st‑century needs by 2026
Theme 5: Jobs? What jobs? Technological forces will fundamentally change work and the economic landscape
There’s a phrase that I’ve used a number of times on this blog: “Learning is work, work is learning” (Harold Jarche). This is going to remain true for the present, and ever more so into the future, but it’s not clear that we will keep up. I’ll leave you with this thought:
About a third of respondents expressed no confidence in training and education evolving quickly enough to match demands by 2026.
As we’ve explained in the past, advanced manufacturing—with all of its automation and super-efficiencies—can certainly bring productivity gains. But it won’t bring back manufacturing jobs. Just last month we finally got some hard numbers on the impact of automation on the labor force in our factories and warehouses: more robots bring with them decreased employment and lower wages. So if Apple’s focus is indeed going to be on using robotics, it’s not going to be good for the workforce.
We are, apparently, entering a new era of automation and robotics.
For some this new era is one of opportunity where we explore new horizons with the help of robots.
For others the new horizon is one where we are beholden to our robot overlords.
I’m not sure anyone really knows where it will all end up; predicting the future is notoriously tricky. So I’ve decided to start curating some of the content I see coming through and providing my own perspective on it, as a learning activity for me, and hopefully for you too.
I’m using the term robots to encompass anything that automates something that a human currently does, hence the title Humans and Robots.
Proterra, an electric bus manufacturer, just announced its three-phase plan to create the self-driving public transit system of the future, filled with autonomous, emission-free electric buses. The company says the move to autonomy should make mass transit safer and more efficient than ever before.
The automation of transportation is picking up pace, with lots of very large organisations already committing significant investment budgets. We have become used to autonomous trains in various situations; moving to autonomous buses is significantly more complicated if those buses are to use the same road infrastructure as human drivers.
The days of driverless buses aren’t that far away; Proterra estimates 2019. That would have a significant impact on UK employment figures. Transport for London, as an example, operates over 8,000 buses; I suspect that means they employ over 16,000 drivers, though I couldn’t find any definitive numbers. That’s a significant workforce to redeploy in a transitional period that could be as short as 10 years. There will be some residual work for these drivers in cities like London, where tourists will pay for a heritage experience, but that’s a very small number compared to the needs of mass transport.
Whilst the impact on human employability is significant, so is the impact of safer, emission-free mass transit. Many cities are struggling with air quality, and what’s not to like about improved safety?