YouTube is now your Mum/Dad/Practical Friend

One of the things that fascinates me is the social change that is driven by the internet and internet services.

Once upon a time we would answer practical problems in one of two ways:

1. Ask someone we trusted

The question would normally be to our mum or dad, or to that practical friend who knows how to do anything. Their proximity would allow them to show us how to do something in person, or talk us through it over the phone. Sometimes their answer would be to talk to someone else they know who is practical in a particular way: “Talk to your grandma, she’s really good at buttonholes.”; “Ask Eddie, he knows how to protect a Koi pond from herons.”; “Ask Mary, she’s good for advice on home automation systems.”

As a result our wisdom was limited by their knowledge, or the knowledge of the people they knew. What’s more, we only knew if their knowledge was any good when we tried what they suggested. We had to decide whether to try what they suggested by judging their level of confidence in their own knowledge. I suspect we’ve all had friends who’ve confidently told us to do something that later turned out to be the last thing we should have done.

This was the normal way of finding out how to do something.

2. Go to the library or take a course

If we needed to know something outside the knowledge of the individuals we trusted we might go as far as doing some formal research. This research would have meant a trip to the local library and wading through reference manuals and the like. In extreme cases we might even take a course on how to do something, but this was only for the truly dedicated.

This was not the normal way of finding out how to do something, it was only used in exceptional circumstances.

Along comes YouTube (other video sources are available)

For many, YouTube has now replaced your mum, dad and practical friend. It’s even replaced the library and training courses for some.

I’ve had two situations recently where this was the case:

Windscreen Washer Failure

It’s been an interesting winter here in the UK with different weather each day, switching from warm and wet to bitterly cold. Windscreen washers have, therefore, become a vital part of road travel, so when the washer in the car that my wife drives failed it was important that it was fixed.

My first instinct was that it was just a fuse problem so I opened up the in-car manual to see which one, only to discover that the windscreen washer wasn’t listed. Fortunately YouTube had most of the answer – someone called Andy Robertson had experienced exactly the same problem and posted a video. I say most of the answer because the fuse box that Andy shows isn’t quite the same as the one that’s in our Polo, but it did tell me that it was a 7.5 amp fuse, and a short process of elimination found the one that had blown.

iPhone Charging Problem

I’ve been struggling to charge my iPhone recently – I’d plug a Lightning cable into it and leave it, and when I came back later the cable would be slightly out of the socket and no charging would have taken place. Having tried a number of different cables I realised that the problem was with the socket in the iPhone itself, not the cables. Going to the Apple Store to get it fixed sounded like an expensive proposition so I took to YouTube for help. It wasn’t long before I found a set of videos from people all telling me that it was likely to be dust and/or lint in the mechanism and to simply get a pin and dig it out. Putting a metal thing into a charging port didn’t sound like a good idea, but the basic idea worked a treat and now my phone stays plugged in.

I’m not sure which of my practical friends would have known to do that; my parents certainly wouldn’t.

The New Normal

These are a couple of personal examples of what I think is the new normal way of working out how to do something, but it’s not just me. The car fuse video has been watched over 27,000 times, the iPhone one nearly 700,000 times. A friend recently used another YouTube video to work out how to get a broken headphone jack out of an iPad. Another friend gives overviews of his allotment that people use to get advice on the technicalities of an allotment and allotment life.

I wonder how many of the 1 billion hours of YouTube video that are watched every day are spent helping people with their “how do I…?” questions.

Predictions: “in about 15 years” | “within the next 10 years” | “25 years from now”

Imagine that the year is 2032.

What do you foresee?

What dramatic change has occurred?

How has your daily life changed?

You are almost certainly wrong. We like to think that we can see the next 10, 15, even 20 years ahead, but the reality is that we are very poor at it.

In 1955 we predicted: “Nuclear-powered vacuum cleaners will probably be a reality in 10 years.” Thankfully, that didn’t happen.

As a child I would watch Tomorrow’s World and marvel at the impending future that it outlined. Here’s one from 1969 imagining the Office of the Future (there are two articles in this clip, the Office of the Future is in the first couple of minutes):

Even then we imagined robots doing our bidding, even if it was one that looked more like a Teasmade than R2-D2.

It’s interesting to see how many of these functional predictions happened, but in completely different ways – look out for the huge camera that fulfils the purpose many people use a mobile phone camera for today.

This wasn’t really “tomorrow’s” world being shown; many of the functions that have since been revolutionised took another 20 to 30 years to become mainstream. Many of the functions still aren’t mainstream and I’m not sure we would want them if they were.

How about this one outlining “Cassette Navigation” from 1971:

The use of GPS-based navigation systems is second nature to most of us, but that was only possible when the GPS network was completed in 1994, and even then it didn’t become mainstream until the mid-2000s when the likes of Garmin, TomTom and Magellan created the market. Whilst GPS-based SatNav systems do a functionally similar thing to the Cassette Navigation system, their implementation is completely different and I doubt that anyone seeing the Cassette Navigation system imagined a future SatNav system. Again, this wasn’t “tomorrow’s” world; this was a problem that wouldn’t be solved for another 25 years.

In 2010 Jerry Zucker said: “It’s Moore’s Law, everything will be obsolete in 10 years – I’ll be obsolete in 10 years!” in reference to the iPad. It’s nearly the end of 2017 and I don’t see the iPad, or Jerry Zucker, being obsolete in the next couple of years.

Whilst we are terrible at predicting the longer-term future, fortunately for us most things progress along predictable pathways most of the time.

Within IT we are currently telling ourselves that we are living in a world of unparalleled and rapidly expanding automation, but we’ve been in that world since the invention of the Spinning Jenny in 1764, and arguably for millennia before that. What we are seeing now is the next step on a pathway that has been running for over 250 years.

I’m not saying that we shouldn’t try to imagine a future, or even try to predict it, we just need to be careful how much trust we place in our ability to predict.

I suspect that science fiction writers and film makers have done a better job than many of us deeply embedded in today’s technology. Minority Report, which was 15 years old in 2017, was apparently quite a good predictor of a number of technologies. I’m still waiting for my flying car though.

“I never think of the future, it comes soon enough.” Albert Einstein

Our password system is broken, and has been for over 50 years!

There has been a lot of commentary over the weekend about the pronouncement from Nadine Dorries that she shares her login with her staff:

I’m not planning to add to that overall commentary because others have done that already.

The issue that I want to address is that it’s symptomatic of a broken system.

Passwords as a method of verifying identity were adopted by computing in its very earliest days. They probably originated as a way of identifying who was doing what in the earliest time-sharing system, MIT’s Compatible Time-Sharing System (CTSS), in the mid-1960s.

This early password system suffered from many of the same problems we experience with passwords today – in other words the password system has been broken for over 50 years and yet we persist.

CTSS has been documented as the first case of password theft, caused by an insider circumventing the system. Allan Scherr, a researcher, wanted more computer time, which was very limited at the time. Scherr came up with the idea that he could increase his own usage by using the time that others weren’t using. He did this by exploiting a privilege that had been granted to him: requesting a physical printout of any of the files on the system. Scherr asked for a printout of the password file, which was a plain text file:

There was a way to request files to be printed offline by submitting a punched card with the account number and file name. Late one Friday night, I submitted a request to print the password files and very early Saturday morning went to the file cabinet where printouts were placed and took the listing out of the M1416 folder. I could then continue my larceny of machine time.

Things got a bit more interesting when Scherr handed the password list out to other students and one of them decided to use it to log in to the computer lab director’s account and leave “taunting messages”.
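Printing the file was all it took because CTSS stored passwords in plain text. A modern system stores only a salted, slow hash of each password, so a stolen file doesn’t directly reveal anyone’s password. A minimal sketch in Python (the passphrase and parameters are purely illustrative):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted, deliberately slow hash so the stored file
    never contains the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, stored)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```

Scherr’s printout trick would have yielded only salts and digests against a scheme like this, not working credentials.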

Since those days in the mid-60s we have been trying to convince ourselves that passwords are still the right way to go.

We’ve spent many hours training people how best to use passwords – long, complex, changing, non-repeating, etc.

We’ve invested many hours into code to strengthen passwords stores and probably just as many hours deploying, fixing and then redeploying that code.

Many lines of journalistic content have been devoted to passwords and password-related problems.

Passwords have resulted in an immeasurable number of hours of lost productivity as people struggle to work out what the right password is. How many times have you lost hours of your working day to a password problem?

Then there’s all of the damage caused to individuals and organisations by hacked, poorly protected or poorly handled passwords.

We have, at least, created an opportunity for people to create applications to manage our passwords and to build businesses on the back of that opportunity.

Yet the fundamental issues that existed 50 years ago still exist, and those issues primarily surround the weak link in the password chain: the human. Humans will always circumvent the system from the inside. This is normally because people are very poor at estimating the risk of poor password practices and will circumvent them for almost any advantage. I suspect that Nadine Dorries gives her staff her password because there’s an advantage to her in doing so, even if it is very unwise.

We’ve fixed similar problems in the physical world by using physical security, which limits access to the person holding the physical entity. We started using physical keys as a way of securing physical property over 1,000 years ago! Imagine how strange it would seem to go up to your car and type in a password; we’d soon have people patrolling car parks to stop miscreants trying a brute-force attack on the car keyboard. How about walking up to a highly secure office environment, tapping on the small window in the door and saying “The weather in Moscow is mild for the time of year”? Would you expect to be let in?
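The car-keyboard thought experiment can be put into numbers. A password that survives guessing at a keypad, where each attempt takes real time, falls almost instantly once an attacker has the password file and can guess offline. A quick calculation, with illustrative guessing rates rather than measured ones:

```python
# Back-of-envelope: why physically limited access changes the attack maths.
# The guessing rates below are illustrative assumptions, not measurements.
combos = 26 ** 6          # a six-letter lowercase password
keypad_rate = 1           # one guess per second standing at a keypad
offline_rate = 10 ** 9    # guesses per second against a stolen hash file

print(combos)                         # 308915776 combinations
print(combos / keypad_rate / 86_400)  # days to exhaust at the keypad (~3575)
print(combos / offline_rate)          # seconds against a stolen file (~0.31)
```

The password is identical in both cases; only the physical constraint on the attacker differs, which is the point of physical security.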

In conclusion, the last 50 years have shown us that passwords have fundamental problems that we shouldn’t expect to fix because that would require humans to change. We need to move to a different authentication system, one based on physical security.

Managing the white-space | Leaving the smaller screens behind in iOS 11

One of the things I’ve noticed as the user of both an older and a newer iPhone is that the 4.7″ screen that is on the iPhone 6/7/8 is now the baseline standard being used for iOS design decisions.

In iOS 11 Apple have made a number of design decisions that increase the amount of screen being used by items.

In the AppStore, as an example, the icons have got bigger and the titles have got bigger, so the number of apps you see in the Updates section has reduced and the titles are often truncated on a 4″ iPhone 5/5S:

AppStore Updates on the iPhone 5S 4″ Screen with iOS 11.

Another example of the design choices being made is the lock screen and associated notifications. If you have a clock on your lock screen and you are playing some audio then notifications are almost useless because you only get part of the first notification without scrolling:

Lock Screen on the iPhone 5S 4″ Screen with iOS 11

Screen design decisions are a balance between content and white-space; white-space is the space between the content. Good design is defined by the white-space as much as the content. That’s what’s driving the iOS 11 design decisions: as screens have got bigger on the iPhone 6/7/8 (4.7″) and the 6/7/8 Plus (5.5″), Apple are increasing the amount of white-space so that the design stays good on those devices.

Anyone who has used a corporate application will know how awful it is when white-space is ignored and content is crammed on to screens. Apple could have used the extra screen space on the newer iPhone models purely to squeeze in more content, but instead they’ve balanced it with an increase in white-space. Those design decisions mean that the content on the 4″ screen feels like it’s a bit too spaced out.

Humans and Robots: Seeing Robots, Warring Robots and Dancing Robots

One of the core skills we have as humans is the ability to recognise and categorise the things that we see. The ability of robots to do this has advanced significantly in recent years, as the TED Talk by Joseph Redmon demonstrates:

As robots continue to gain skills a number of people are advocating that the United Nations should ban robots that kill:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.

We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.

An Open Letter to the United Nations Convention on Certain Conventional Weapons

We have a long history of weaponising technology advances, perhaps as long as human history itself. Once you remove humans from the field of war the moral considerations change significantly. What’s to stop an ever-escalating conflict when there is limited moral need to stop?

If warring robots are a scary thought, how about dancing ones? Guinness World Records recently published this video of dancing Dobi robots, 1,069 in all:

Personally I think that this is quite scary.

Humans and Robots: Having an off-grid alternative and self-flying planes.

Some of the people closest to the ongoing robotic revolution have looked ahead and decided that it’s time to have an alternative.

Until a couple of years ago, Antonio Garcia Martinez was living the dream life: a tech-start up guy in Silicon Valley, surrounded by hip young millionaires and open plan offices.

He’d sold his online ad company to Twitter for a small fortune, and was working as a senior exec at Facebook (an experience he wrote up in his best-selling book, Chaos Monkeys). But at some point in 2015, he looked into the not-too-distant future and saw a very bleak world, one that was nothing like the polished utopia of connectivity and total information promised by his colleagues.

“I’ve seen what’s coming,” he told me when I visited him recently for BBC Two’s Secrets of Silicon Valley. “And it’s a big self-driving truck that’s about to run over this economy.”

Silicon Valley luminaries are busily preparing for when robots take over

Most of the reported opinions on the future present it as if we are at a fork in the road, with one way leading to a future Utopia and the other leading to a Dystopia. I’m sure that there are plenty of opinions somewhere in the middle, but they tend not to get too much air time, probably because they don’t make very good copy.

A middle road is the most likely outcome, with parts of Utopia mixed with parts of Dystopia. I’m currently listening to a long book on the history of England and one of the things I’m learning from it is that good times and bad times generally live side-by-side.

One area that has already seen significant automation is air travel. The pilot may be ultimately in charge, but the systems available to them make them mostly redundant for most of the journey. Yet there is something reassuring about knowing that there is a human at the front making sure everything is going well. How would you feel about travelling in a plane without a pilot?

UBS analysts expect the effort to familiarize the public with commercial self-piloting crafts will begin at that 2025 target date with autonomous cargo planes, which could demonstrate how the systems can safely fly from point A to B without a hitch. A next step could be to remove pilots gradually, shifting from a two-person cockpit to one person monitoring the system before phasing out humans entirely. 

Pilotless planes might be here by 2025, if anyone wants to fly in them

2025 isn’t very far away, and that’s the estimate for a start. I expect the transition period to be long.

The Huge Failure | Dilbert on Open Plan Offices

I’ve really enjoyed the recent series of cartoons from Scott Adams on open-plan offices:

Some of the comments on these cartoons are just as fabulous:

Let’s not forget that cubicles were a massive failure before open plans managed to out-fail them.

Open spaces are supposed to invite the open flow and exchanges of Ideas. And they do, ideas like……”How bad traffic was this morning?….Did you catch the game last night?….how was your weekend?…..etc” Maybe some work topics might get discussed

Working environments are a very emotive subject, and rightly so. Many of us spend more time at work than we do at home and we want to be productive.

What fascinates me is that many organisations spend huge amounts of money creating something that people don’t want.

Here’s something I wrote earlier: Productivity and place: Where are you most productive?

Humans and Robots: The AI Over-hype and Sorting Lego

Last week I was on a remote island in the Outer Hebrides; we had WiFi at our cottage but it was slow, and we didn’t have any mobile signal. There was a mobile signal in the nearby town, 3 miles away, but again, that was slow and covered only a small portion of the island. It was a great reminder of how much we take connectivity for granted, and that for much of the world that assumption is invalid.

Whilst I was away, though, a number of AI, Machine Learning and Robotics related things happened:

As seems to be the case for all technologies, a point is reached where people need to talk about the negative aspects, including the hype levels of the current hot tech. That has been the case this last week, with MIT Technology Review and others running stories. The MIT Technology Review piece looks specifically at IBM Watson and the perceived rate of progress it is making in healthcare. Progress in any technology is rarely a smooth ride and some high-visibility failures are normal.

The MIT Technology Review has also been looking at the progress GE is making by using AI and Machine Learning. GE is going through a huge transformation that will embed advanced technologies and robotics into many of its products. This transformation is also radically changing the way people work, with people taking on what would previously have been two separate roles. The role change is one that is already happening and I’ve got a post brewing on that.

The Guardian is the latest organisation to do a round up of the research and current thinking into the impact of automation on jobs:

“in the last 60 years automation has only eliminated one occupation: elevator operators.”

I’m not sure that’s really true, but I get the point. Aside from the statistics, the core question people want answered is: “what can I do to prepare?” It’s not an easy question to answer; the only sure thing is that change is going to happen, and humans have adapted to change for hundreds of thousands of years. That ability to adapt is what’s going to be key in the future, and one way of being adaptable is to diversify and be able to take on multiple roles.

If you want to know more about Machine Learning then this nice Visual Introduction from R2D3 will get you started.
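The R2D3 introduction builds a decision tree up split by split. The core operation, finding the single threshold that best separates two classes, is simple enough to sketch in a few lines of plain Python (the data here is made up for illustration, loosely echoing R2D3’s San Francisco vs New York homes example):

```python
def best_stump(values, labels):
    """Find the threshold on one feature that best separates two classes (0/1)."""
    best = (None, -1)  # (threshold, number of points classified correctly)
    for t in sorted(set(values)):
        correct = sum((x >= t) == bool(y) for x, y in zip(values, labels))
        correct = max(correct, len(values) - correct)  # either side may be class 1
        if correct > best[1]:
            best = (t, correct)
    return best

# Made-up data: elevation in metres, label 1 = "San Francisco", 0 = "New York"
elevation = [5, 12, 18, 60, 75, 90]
city = [0, 0, 0, 1, 1, 1]
print(best_stump(elevation, city))  # (60, 6): splitting at 60m classifies all six points
```

A real decision tree just applies this search recursively to each half of the data, which is exactly what R2D3 animates.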

Jacques Mattheij had a dilemma: how to sort 2 metric tonnes of Lego. He did it with some Lego (what else?), some hardware, Python and a neural network. Although there won’t be many people with 2 metric tonnes of Lego, I’m sure that many of us would love to be able to sort the boxes that we do have.


A Reality Check for IBM’s AI Ambitions via MIT Technology Review

None of those companies has garnered anywhere near the attention that Watson has, thanks to its victory on the television quiz show Jeopardy! in 2011 and zealous marketing by IBM ever since. But lately, much of the press for Watson has been bad. A heavily promoted collaboration with the M.D. Anderson Cancer Center in Houston fell apart this year. As IBM’s revenue has swooned and its stock price has seesawed, analysts have been questioning when Watson will actually deliver much value. “Watson is a joke,” Chamath Palihapitiya, an influential tech investor who founded the VC firm Social Capital, said on CNBC in May.


General Electric Builds an AI Workforce via MIT Technology Review

When Jason Nichols joined GE Global Research in 2011, soon after completing postdoctoral work in organic chemistry at the University of California, Berkeley, he anticipated a long career in chemical research. But after four years creating materials and systems to treat industrial wastewater, Nichols moved to the company’s machine-learning lab. This year he began working with augmented reality. Part chemist, part data scientist, Nichols is now exactly the type of hybrid employee crucial to the future of a company working to inject artificial intelligence into its machines and industrial processes.


What jobs will still be around in 20 years? Read this to prepare your future by The Guardian.

Today’s technological revolution is an entirely different beast from the industrial revolution. The pace of change is exponentially faster and far wider in scope. As Stanford University academic Jerry Kaplan writes in Humans Need Not Apply: today, automation is “blind to the color of your collar.” It doesn’t matter whether you’re a factory worker, a financial advisor or a professional flute-player: automation is coming for you.


A visual introduction to machine learning by R2D3

In machine learning, computers apply statistical learning techniques to automatically identify patterns in data. These techniques can be used to make highly accurate predictions.


Sorting 2 Metric Tons of Lego by Jacques Mattheij

Computer skills to the rescue! A first proof of concept was built of – what else – lego. This was hacked together with some python code and a bunch of hardware to handle the parts. After playing around with that for a while it appeared there were several basic problems that needed to be solved, some obvious, some not so obvious. A small collection:

Humans and Robots: Is your job doomed? Your robot chauffeur is waiting.

How close are the robots? They’re coming, but how far away are they? In 2013 Carl Benedikt Frey and Michael A. Osborne did the analysis for the US. The aptly named WILL ROBOTS TAKE MY JOB? site has made this analysis available alongside other data. The data set is very broad, from shoe and leather workers (robots are watching) to personal care aides (robots are watching) to model makers, metal and plastic (doomed) to software developers, systems software (no worries).

Each role and the analysis behind it is debatable, and a 2017 perspective almost certainly changes many of the conclusions, but it’s still a fun exercise to think through the impact of the robots on the many roles that people undertake.

The answer for Taxi Drivers and Chauffeurs is that your job has an 89% probability of automation and that the robots are watching.

I picked on Taxi Drivers and Chauffeurs because it’s been an interesting week for car technology.

Yandex, which is Russia’s equivalent of Google and Uber, has joined the race to create an autonomous vehicle with a project named Yandex.Taxi, aiming for Level 5 autonomy (the highest level defined by the SAE), also known as Full Automation.

They already have a demo available:

Cadillac are testing technology that further integrates vehicles into the infrastructure in which they operate with vehicle-to-infrastructure (V2I) communication. This technology is also being developed to allow cars to talk to each other. In its latest announcement Cadillac are demonstrating how a vehicle would talk to traffic lights to let a driver know when the lights are going to change. At one level I think that this is a great idea; at another I can see all sorts of challenges. For me the greatest challenge is nearly always the humans: we have a wonderful ability to outsource our responsibilities to the technology, and the smarter the technology becomes, the lower our feeling of responsibility will become. “Sorry officer, I ran the red light because my car failed to stop.”

The robots are definitely watching.

But how far can the robots go? When will their intelligence overtake human intelligence? When will we reach the singularity and what will its impact be? That’s the question posed by a colleague, Annu Singh:

now’s the time to think about and prepare for this tomorrow, before the limits of human intelligence startle us like a soft whisper.


Yandex.Taxi Unveils Self-Driving Car Project via Yandex

The driverless car incorporates Yandex’s own technologies some of which, such as mapping, real-time navigation, computer vision and object recognition, have been functioning in a range of the company’s services for years. The self-driving vehicle’s ability to ‘make decisions’ in complex environments, such as busy city traffic, is ensured by Yandex’s proprietary computing algorithms, artificial intelligence and machine learning.


Cadillac tech ‘talks’ to traffic lights so you don’t run them via Mashable

Cadillac tested out the V2I system by rigging two traffic signals near the GM Warren Technical Center campus to send data to its demo CTS vehicles. The automaker said the stop lights were able to use Dedicated Short-Range Communications (DSRC) protocol — which is the same system used for inter-car V2V communication — to send data to the cars about when the light would turn red.


Singularity in AI: Are we there yet? via DXC Blogs

While we may not be at the point of singularity yet, the growing capability of AI to make decisions, learn and correct its own decision-making process does seem to raise moral, ethical, social and security concerns. Consider the dilemmas being confronted now around self-driving cars. Insurance companies are questioning who owns the liability and risks, who carries the insurance policy. Developers are faced with unimaginable decisions about whose life gets saved in a deadly collision.

Humans and Robots: AI in Retail, Automotive, Weather and the Newsroom

If I could think of a way to present it I would create a chart showing the various predictions for job creation and job losses that the robots are going to cause. One thing we can be sure about with nearly all of these predictions is that they are likely to be wrong in detail, but correct in concept.

The latest prediction is one for retail, which uses a World Economic Forum figure of 30%-50% of retail jobs being at risk from known automation capabilities. The challenge with this figure is that most developed economies employ more people in retail than in manufacturing, and many of us know the repercussions of the manufacturing switch; the predicted change is even greater than that experienced by manufacturing.

You might think that retail is at risk because it is easy to automate, but what about journalism? Earlier this month Google brought together a number of journalists to talk about the impact of AI in the newsroom. The meeting discussed a report by the Associated Press, “Report: How artificial intelligence will impact journalism”. Google were highlighting their Google News Lab, which they developed “to support the creation and distribution of the information that keeps us all informed about what’s happening in our world today—quality journalism.” Fake news has, of course, been a huge subject recently. I’m not so much concerned about outright fake news, which is pretty easy to check; I’m more concerned by the potential for AI to create narrow news, where only the statistically high-ranking items become news.

Google were also highlighting their prowess at automatically classifying video content, which they will soon make available via the Google Cloud Video Intelligence API. Classification of content is a massive issue for news organisations and having a machine do it for you has to be a winner.

In a more specific case, the UK Met Office has been talking about its use of AI to help predict the weather, something of an obsession for this island nation. This is underlined by the Met Office being one of the UK’s largest users of super-computing.

The impact of technology in the automotive business was recently underlined as Ford replaced its CEO with the person who was heading up their self-driving car business. Most of the content in this article is in the video.

And finally, anyone want an autonomous robot security guard with a built-in drone?


Retail Automation: Stranded Workers? Opportunities and risks for labor and automation by IRRC Institute (pdf)

The retail landscape is experiencing unprecedented change in the face of disruptive forces, one of the most recent and powerful being the rapid rise of automation in the sector. The World Economic Forum predicts that 30-50% of retail jobs are at risk once known automation technologies are fully incorporated. This would result in the loss of about 6 million retail jobs and represents a greater percentage reduction than the manufacturing industry experienced. Using Osborne and Frey study with the Bureau of Labor Statistics, the analysis suggests that more than 7.5 million jobs are at high risk of computerization. A large proportion of the human capital represented by the retail workforce is therefore at risk of becoming “stranded workers.”


Report: How artificial intelligence will impact journalism via AP Insights

Streamlining workflows, automating mundane tasks, crunching more data, digging out insights and generating additional outputs are just a few of the mega-wins that can result from putting smart machines to work in the service of journalism.

Innovators throughout the news industry are collaborating with technology companies and academic researchers to push the envelope in a number of related areas, affecting all points on the news value chain from news gathering to production and distribution.


AI in the newsroom: What’s happening and what’s next? via Google

“There is a lot of work to do, but it’s about the mindset,” D’Aniello said. “Technology was seen as a disruptor of the newsroom, and it was difficult to introduce things. I don’t think this is the case anymore. The urgency and the need is perceived at the editorial level.”



Announcing Google Cloud Video Intelligence API, and more Cloud Machine Learning updates via Google

Cloud Video Intelligence API (now in Private Beta) uses powerful deep-learning models, built using frameworks like TensorFlow and applied on large-scale media platforms like YouTube. The API is the first of its kind, enabling developers to easily search and discover video content by providing information about entities (nouns such as “dog,” “flower” or “human” or verbs such as “run,” “swim” or “fly”) inside video content. It can even provide contextual understanding of when those entities appear; for example, searching for “Tiger” would find all precise shots containing tigers across a video collection in Google Cloud Storage.


Humans and Robots: Google I/O and Self-Driving Bin Lorries

It’s Google’s big developer conference this week – I/O. So far centre stage has been given over to Artificial Intelligence and Machine Learning.

A set of articles has been published, some of which I’ve highlighted below, but I can summarise all of them with this one quote:

“We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world.”

Sundar Pichai, CEO, Google

For many, the shift to mobile has made little impact on their day-to-day work; it’s had far more impact on their personal lives. The switch to AI-first will have a massive impact on both our work and our personal lives.

The keynote for I/O was just under 2 hours long, but thankfully The Verge have put together a 10 minute video of the highlights:

Also, Volvo have announced that they are working on a system for self-driving refuse collection lorries. This is yet another self-driving initiative, but one with a specific purpose in mind. Instead of trying to solve the generic problem of self-driving vehicles in all contexts, this project seeks to enable self-driving in the urban refuse collection context. Historically, targeted innovations like this one have been adopted before more generic innovations like self-driving cars:


Making AI work for everyone via Google

We are now witnessing a new shift in computing: the move from a mobile-first to an AI-first world. And as before, it is forcing us to reimagine our products for a world that allows a more natural, seamless way of interacting with technology. Think about Google Search: it was built on our ability to understand text in webpages. But now, thanks to advances in deep learning, we’re able to make images, photos and videos useful to people in a way they simply haven’t been before. Your camera can “see”; you can speak to your phone and get answers back—speech and vision are becoming as important to computing as the keyboard or multi-touch screens.


Partnering on machine learning in healthcare via Google

Our researchers at Google have shown over the past year how our machine learning can help clinicians detect breast cancer metastases in lymph nodes and screen for diabetic retinopathy. We’re working with Alphabet’s Verily arm and other biomedical partners to translate these research results into practical medical devices, such as a tool to help prevent blindness in patients with diabetes.

Now we’re ready to do more: machine learning is mature enough to start accurately predicting medical events—such as whether patients will be hospitalized, how long they will stay, and whether their health is deteriorating despite treatment for conditions such as urinary tract infections, pneumonia, or heart failure.


Why Google’s CEO Is Excited About Automating Artificial Intelligence via MIT Technology Review

Machine-learning experts are in short supply as companies in many industries rush to take advantage of recent strides in the power of artificial intelligence. Google’s CEO says one solution to the skills shortage is to have machine-learning software take over some of the work of creating machine-learning software.

Humans and Robots: What are you worried about? Machines in the home?

When it comes to risk, the machine learning applications we rank highest are physical things, in the UK at least. In a survey commissioned by the Royal Society it’s self-driving cars and machines in the home that we perceive as riskiest; we don’t think that health diagnosis technologies pose the same level of risk.

I find that intriguing, but not surprising: we are unnerved by the things that are physically close, but not by the things that are hidden.

Whilst we are worrying about being physically harmed by self-driving cars, we don’t worry about predictive policing, which could have a much greater impact on our society. This is a common problem: people aren’t very good at recognising the impact of things that are hidden because they are too blinded by the things they can see. Like the conjurer’s misdirection, we are too busy looking one way to see the thing that has just happened directly in front of us.

Another interesting statement from the survey:

Results from the UK’s first in-depth assessment of public views on machine learning – carried out by the Royal Society and Ipsos MORI – demonstrate that while most people have not heard the term ‘machine learning’ (only 9% have), the vast majority have heard about or used at least one of its applications.

In other words, machine learning is having a significant impact on people’s lives even if they don’t recognise it.

This survey on social risk was published within a few days of an announcement by Durham Police (UK) that they are going to use artificial intelligence to help decide whether or not a suspect should be kept in custody. How would you assess the social risk of such a system? I suspect that it depends on your background and how you regard the police.

It’s not really got anything to do with today’s theme, but I was quite intrigued to see that Google is setting its AI sights on musical instrumentation with a Neural Synthesizer, or NSynth. I’ve always been fascinated by the intersections of art and technology; pioneering artists have always embraced new technology to express their art. Music has been at the forefront of that pioneering, so it will be interesting to see how musicians use these new technologies.


People are scared of artificial intelligence for all the wrong reasons via Quartz

People in Britain are more scared of the artificial intelligence embedded in household devices and self-driving cars than in systems used for predictive policing or diagnosing diseases. That’s according to a survey commissioned by the Royal Society, which is billed as the first in-depth look at how the public perceives the risks and benefits associated with machine learning, a key AI technique.


Durham Police AI to help with custody decisions via BBC

The system classifies suspects at a low, medium or high risk of offending and has been tested by the force.

It has been trained on five years’ of offending histories data.

One expert said the tool could be useful, but the risk that it could skew decisions should be carefully assessed.


Google’s creating sounds you’ve never heard before via Mashable

To create music, NSynth uses a dataset containing sounds from individual instruments and then blends them to create hybrid sounds. According to the company’s announcement, NSynth gives artists “intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”