Learning from patterns, Google’s AI push has bigger problems in mind

Written by Shruti Dhapola | Tokyo | Published: November 29, 2017 9:14 am


Google is focusing on Artificial Intelligence (AI) and Machine Learning (ML) to create better consumer products.

Artificial Intelligence (AI) and Machine Learning (ML) have become standard terms at most technology companies. Big players like Google, Apple and Microsoft are focusing more and more on incorporating these technologies into their everyday products.

In Google’s case, ML is driving the core of its consumer-facing products, be it the Google Assistant, now present on all Android phones, or the Pixel 2 camera, where ML powers portrait mode. So why is Google placing so much emphasis on ML? According to Jeff Dean, Google’s Senior Research Fellow and head of the Google Brain project, the company’s mission with AI and ML is not just to make its own products more useful, but also to help others innovate and solve bigger problems.

At a Google ML conference in Tokyo, Dean explained, “It is impossible to code everything about the world as logical rules for computers. Hence we look at machine learning now. The science of machine learning has taken over the field of AI. ML is learning to recognise patterns about the world.” Much of the foundational work on AI and ML was actually done in the 1980s and 1990s, but it is only now, with the rise in computational power, that the full potential of these technologies is coming to fruition.

Machine learning, for its part, relies on exposing algorithms to large data sets so that they can eventually learn to understand the world the way humans do. For instance, machine learning is the reason why, when a user types ‘Dogs’ into the Google Photos app, the results display all the pictures of dogs in their library.
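As a concrete illustration of that kind of labelling, here is a minimal sketch that uses a pretrained network to tag an image. The model choice (MobileNetV2 trained on ImageNet) and the file name photo.jpg are assumptions for the example; this is not Google Photos’ actual pipeline.

```python
# A minimal sketch of labelling a photo with a pretrained network.
# MobileNetV2 and "photo.jpg" are illustrative choices, not Google Photos' internals.
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, decode_predictions, preprocess_input)
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")  # already trained on ~1.4M labelled images

img = image.load_img("photo.jpg", target_size=(224, 224))  # hypothetical photo file
batch = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Print the three most likely labels, e.g. ("golden_retriever", 0.87) for a dog photo
for _, label, score in decode_predictions(model.predict(batch), top=3)[0]:
    print(label, round(float(score), 2))
```

A photo app could then index every image by its top labels, which is why searching ‘Dogs’ surfaces dog pictures no one ever tagged by hand.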

“Neural networks are loosely inspired by how biological brains behave. The neurons in these ML networks are taught to recognise certain kinds of patterns, and they look for different kinds of patterns in layers. These neurons eventually learn more complicated patterns. We feed them millions and millions of data sets, so that eventually they can learn to identify the correct label for an image,” explained Dean.
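As a rough illustration of that layered learning, here is a minimal sketch that trains a small network to label handwritten digits. The dataset (MNIST), the architecture and the hyperparameters are assumptions chosen for the sketch, not a description of Google’s production models.

```python
import tensorflow as tf

# Load 60,000 labelled images of handwritten digits (28x28 pixels each)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# Stack layers so early ones pick up simple patterns and deeper ones
# combine them, echoing Dean's description of layered pattern learning
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one score per digit label
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Expose the network to thousands of labelled examples so it learns
# to pick the correct label for an image it has never seen
model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))
```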

But Google and other technology players are not just training ML algorithms to understand images and identify pictures of your cats. There is also a focus on audio and text, so that humans can eventually have more natural conversations with their computers and smart devices. For instance, ML and AI are what allow speakers like Google Home (which is yet to launch in India) to understand natural language, rather than relying on a strict, fixed command from the user.
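To see the difference between a fixed command and that kind of flexible matching, here is a toy sketch. Real assistants rely on learned language models; this stand-in uses a simple bag-of-words similarity, and the intents and phrases are invented for illustration.

```python
# A toy contrast to strict, fixed commands: score how close an utterance
# is to each known intent instead of demanding an exact string match.
# The intents and example phrases below are invented for illustration.
import math
from collections import Counter

INTENTS = {
    "play_music": "play some music songs",
    "get_weather": "what is the weather forecast today",
}

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two phrases."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

utterance = "could you play a few songs"  # no fixed command matches this exactly
best = max(INTENTS, key=lambda name: similarity(utterance, INTENTS[name]))
print(best)  # -> play_music
```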

ML is also what makes it possible for users to talk to the Google Assistant and other voice assistants in natural language. And Google is deploying ML in other fields too, such as healthcare research on diabetic retinopathy. As Dean explained, the spurt in machine learning at Google came after 2012, when it started scaling up its deep neural network research. While the typical neural network in 2012 had around 10 million connections, Google’s scaled-up networks had more than 1 billion.

Cut to 2017, and ML is at the core of Google’s products and beyond, something we are reminded of at every company event. But the growth of AI and ML has also raised questions around privacy, along with fears of AI outsmarting humans.

When asked about fears around AI, especially of the kind voiced by Elon Musk, Dean said he does not think AI is at that level yet.

“These are distant, far-off kinds of fears, and not necessarily realistic. There are very concrete safety problems, but we can use a lot of techniques for deploying safe AI systems. So far, the systems do not have the kind of capabilities that are being imagined,” he said.

On the privacy issue, and how tech companies collect the data that feeds these ML systems, Dean pointed out that machine learning as such does not require specific data from a user. “We are looking at improvements through computational/algorithmic ways, rather than just getting more data. There’s also AutoML, where the network ends up generating its own ML network,” he explained. In Dean’s view, what we currently have is “narrow AI”, and the need is for a “more flexible system.”

“The AI should be able to answer any question that you throw at it and help people achieve more than they can, be it in healthcare or the environment. Just look at self-driving cars, for instance. They can do a lot of good. They will be much safer than human drivers and can change how we look at urban planning,” he said.

Disclaimer: The reporter was in Tokyo at the invitation of Google India, which paid for travel and accommodation.
