Google assistant could achieve Larry Page’s vision for the ultimate search engine or become just another me-too chatbot. At this point, it’s up to users.
“Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the Web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.” – Larry Page, October 2000
“Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them.” – Terry Pratchett, The Long Earth
Google’s forthcoming assistant may be much more than a me-too reaction to the litany of artificial intelligence and chatbots flooding the web right now. It may signal the web’s next major evolution – if Google can actually get people to use it.
Users should expect to get their first real look at Google assistant very soon. Allo, Google’s messaging app, is due for release in “Summer 2016” and is the first Google product to include the artificial intelligence-powered assistant.
With Allo due to drop any day, is the age of the AI bot and “conversations as a platform” officially upon us?
In this post we will address this question, first exploring what Google assistant is, how it works and what its potential benefits to users – and to Google itself – are, as well as some of the potential downsides of the new technology.
But before we get too deep into this question, it’s worth backing up a little and taking a closer look at this unusual phrase: “conversations as a platform”.
The term was introduced by Microsoft’s CEO, Satya Nadella, at the Build developer conference in March 2016. Conversations as a platform, according to Nadella’s Build keynote, is “a simple concept that’s very powerful in its impact.” He later adds: “We think this will have as profound an impact as the previous platform shifts have had.”
Nadella goes on to describe an artificial intelligence that can converse with humans and learn more about us through our behaviour and context to assist us.
This is essentially what all the big players in tech are fighting to dominate right now: Apple’s Siri, Facebook’s M, Amazon’s Alexa, Microsoft’s Cortana and SoundHound’s Hound are just some of the best-known examples.
But while Microsoft may be betting bigger on artificial intelligence than anyone else – steering their entire strategy towards becoming the leader in bots – there’s plenty of evidence to suggest that Google may already be poised to come out on top.
And it all starts with Google’s imminent assistant.
What is Google assistant?
In its current form, Google assistant is essentially a significant evolution of Google Now.
This upgrade will let users make queries and receive answers in a far more conversational manner than they can with Google Now. Rather than the simple, single question-and-answer exchanges of traditional voice-controlled personal assistants, Google assistant can carry on an extended conversation – a capability that really pushes things forward.
Writing for TechCrunch earlier this year, Matthew Lynley explains: “Here’s an example use case: a Google Assistant user can ask Google – through chat or voice – who the director of a film like Gravity is. They could then follow up with a question, such ‘what other movies has he or she directed,’ and Google Assistant should be able to return an answer.”
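The context-carrying behaviour Lynley describes can be sketched in a few lines. The snippet below is purely illustrative – a toy assistant with an invented two-entry knowledge base that remembers the last entity it mentioned, so a follow-up question like “what else has he directed?” can be resolved. It bears no relation to how Google actually implements this.

```python
# Tiny invented "knowledge base" for illustration only.
KNOWLEDGE = {
    ("Gravity", "director"): "Alfonso Cuarón",
    ("Alfonso Cuarón", "films"): ["Gravity", "Children of Men", "Y Tu Mamá También"],
}

class MiniAssistant:
    """Keeps the last-mentioned entity so follow-up questions resolve."""

    def __init__(self):
        self.last_entity = None  # conversational context carried between turns

    def ask(self, question):
        q = question.rstrip("?")
        if q.lower().startswith("who directed "):
            film = q[len("who directed "):]
            director = KNOWLEDGE.get((film, "director"), "I don't know.")
            self.last_entity = director        # remember for follow-ups
            return director
        if "what else" in q.lower() and self.last_entity:
            # Resolve "he/she" to the entity from the previous turn.
            return KNOWLEDGE.get((self.last_entity, "films"), [])
        return "Sorry, I don't understand."

bot = MiniAssistant()
print(bot.ask("Who directed Gravity?"))        # → Alfonso Cuarón
print(bot.ask("What else has he directed?"))   # resolved via remembered context
```

The single remembered entity is the whole trick here; a real assistant would track a much richer dialogue state, but the shape of the exchange is the same.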
Google assistant is undeniably an evolution when it comes to voice search and presents a shift towards more conversational interactions with technology: Google’s latest piece of wizardry will look to change the way that we interact with Google.
Assistant isn’t really a product, nor is it simply a platform; it’s a whole new way of engaging with smart technology.
At Google’s 2016 annual I/O developer conference, the company’s CEO, Sundar Pichai, described assistant as a way of “building each user their own individual Google.” Pichai presented assistant as a platform that will extend across all devices and embed itself deeply into our lives.
Combining Google’s two biggest weapons: the information that it knows about the world and the data it collects about us, Pichai sees assistant as a step forward in how Google will engage with its 1.17 billion monthly users. “We truly want to take the next step in being more assistive for our users, we want to be there asking them ‘hi, how can I help?’,” he said.
What Are the Others?
There are some pretty powerful virtual assistants – many of them using artificial intelligence – out there already.
These are some of the biggest players and their key selling points that Google has to compete against if they want people to actually use assistant.
Siri – embedded in Apple’s iPhone, iPad, Apple Watch and (at long last) Mac – has been with us the longest and is the best known of Google assistant’s competitors.
Over time, Siri’s become quite good at understanding natural language and responding quickly, often with colloquialisms, jokes and pop-culture references. Siri can even be trained to recognise just one user’s voice, but it’s likely a single Google assistant will go even further and be able to recognise and converse with multiple users in a group.
There’s a lot Siri can do by integrating with a huge range of Apple device features, but it doesn’t play particularly well with third party apps and, naturally, isn’t available on non-Apple devices at all.
Siri’s greatest weakness, though, is its passivity. Siri doesn’t pre-empt what you might want and doesn’t really do anything without being explicitly asked; and don’t expect Siri to learn much about you via your interactions.
The Microsoft alternative to Apple’s Siri, Cortana (named after the Halo character and voiced by the same actor) is much younger but benefits from technological advances made since Siri’s release.
Cortana works best on Microsoft devices (although the virtual assistant is available on iOS in the U.S.), where it can tap into native applications and features and even some third party apps.
Where Siri is reactive, Cortana can proactively anticipate questions you might ask and provide information automatically while keeping track of topics users are interested in via a “notebook”. Cortana also has an edge over Siri by being able to respond to both voice and text entries.
Besides being pretty slow, Cortana is too reliant on Bing for information and will often provide answers via links rather than in context or (and this is where Google assistant will have a huge advantage) conversationally.
Although we don’t know a single person who has experienced Facebook’s virtual assistant, M will someday be integrated in Messenger – and available to all its 1 billion monthly users.
Being embedded in a text-based messaging app, M will likely respond only to written text. It is reportedly not very personable or interesting (by Facebook policy, M doesn’t really have an opinion about anything) and often needs a human helping hand.
M learns (primarily) from a growing team of “Trainers” who step in when M doesn’t know what it’s supposed to do, or even understand what the user is saying. If all goes according to plan, M will be able to figure out how to perform new tasks and understand users better on its own.
This learning style shows the gulf between M and Google assistant, which also has Google’s entire Knowledge Graph and 100 billion monthly searches to learn from. As the only other messaging-app-based virtual assistant (Google assistant will initially debut in Google’s own messaging app), it will be interesting to compare how M and assistant interact with users and enhance human-to-human conversations.
Google assistant will also become the backbone of another Google product. Home is a voice-activated speaker that can answer questions, control multiple devices in your home and assist with a variety of tasks.
If this sounds familiar, it could be Home’s striking similarity to Amazon Echo. Echo is powered by an AI called Alexa, and as a virtual assistant it’s a market leader by quite a margin in smart home integration.
Most of its strength derives from the sheer number of integrations and partnerships the Echo has – including Pandora, Spotify, Uber, WeMo, Nest and even Domino’s Pizza. It remains to be seen whether Google Home and (by extension) assistant will be able to compete with the Echo on these counts.
If there’s one advantage assistant has over Echo – besides total access to the search behemoth – it will be its ability to carry on a conversation and understand context.
The greatest virtual assistant you’ve probably never heard of is SoundHound’s Hound.
It only launched in March 2016 to little fanfare but, like Google assistant, Hound has more than a decade of development behind it.
Thanks to this long gestation, Hound is extremely good – and super fast – at understanding complex queries and even follow-up questions. The secret is Hound’s Speech-to-Meaning technology.
While most natural language processing technologies need to convert spoken language to text before parsing it, Speech-to-Meaning can understand spoken words directly.
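To make the architectural difference concrete, here is a hedged sketch of the two pipeline shapes. The functions are stubs standing in for real speech and language models – the point is only that a cascade commits to a transcript before interpretation begins, while a single-pass design can build meaning incrementally as audio arrives. None of the names below come from SoundHound’s actual technology.

```python
def transcribe(audio_frames):
    """Stage 1 of a conventional cascade: speech -> text."""
    return " ".join(frame["word"] for frame in audio_frames)

def parse(text):
    """Stage 2 of the cascade: text -> structured meaning (intent + slots)."""
    words = text.split()
    intent = "weather_query" if "weather" in words else "unknown"
    return {"intent": intent, "slots": {"city": words[-1].capitalize()}}

def cascade(audio_frames):
    # Any transcription error is locked in before interpretation begins.
    return parse(transcribe(audio_frames))

def speech_to_meaning(audio_frames):
    # Single-pass style: update a meaning hypothesis as each frame arrives,
    # so later audio can refine the interpretation of earlier audio.
    meaning = {"intent": "unknown", "slots": {}}
    for frame in audio_frames:
        word = frame["word"]
        if word == "weather":
            meaning["intent"] = "weather_query"
        elif meaning["intent"] == "weather_query":
            meaning["slots"]["city"] = word.capitalize()
    return meaning

frames = [{"word": w} for w in "what's the weather in tokyo".split()]
print(cascade(frames))
print(speech_to_meaning(frames))
```

Both toy pipelines agree on this clean input; the practical difference shows up when the transcript is noisy, because the cascade has no way to revisit its first guess.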
The result is that you can often get answers to certain types of questions faster through Hound than you could by typing them into the Google query box.
Hound also has a strong API, called Houndify, which gives developers access to Hound’s Speech-to-Meaning technology for apps on a range of platforms.
While Hound has partnerships with companies like Yelp and Uber that enable it to do more, it’s important to keep in mind that Hound is an independent player – and this can have its drawbacks. Hound doesn’t use Google. It has its own Knowledge Graph, so it won’t always know the answer to (or even understand) a question. When this happens, the app will refer you to Bing search results or just tell you it doesn’t understand.
Backed by the biggest search engine and second most valuable company in the world, it’s not likely that Google assistant will have the same issue.
Now that we have some understanding of the strengths and weaknesses of Google assistant’s biggest competitors, let’s take a look at how assistant works – and how it will prove itself against the others.
Data = Understanding
Google assistant is tipped to make its first public appearance as part of Google’s new Allo messaging app for Android and iOS. Scheduled to be released sometime in the next two weeks, Allo will allow users to chat with the assistant as if it were another person in the chat.
But this isn’t a stand-alone messaging app. Allo will tie into the existing search capabilities of Google’s devices and operating systems. Plus, it will tap into a range of third party apps like Spotify and Uber to proactively offer advice to you during your chats.
Over time, it will learn about you and build its own identity as one of your friends in your contacts. It will suggest places to eat, music to listen to and hotels you can stay in.
As PCWorld Senior Editor, Brad Chacos, states:
“Google Assistant appears to tie into the Search capabilities already baked into Google’s various devices and operating systems, bolstered by the power of Google’s vast cloud and summoned by your voice. Much like Gmail, Photos, and Google Now, Pichai envisions Google Assistant as ‘an ambient experience that extends across devices,’ so that you can tap into its power anywhere—including Google Home, a new Amazon Echo rival scheduled to release later this year”.
Data, Data Everywhere
Although we’ll initially only experience assistant via Allo, where it will learn a lot about how its users think and speak, Google are not exactly strangers when it comes to gathering and exploiting big data.
Google’s been studying and investing in machine learning for more than a decade now, and there’s little doubt that assistant will help push Google’s machine learning endeavours further forward than ever before.
As reported by The Guardian last June, the tech giant announced that it was opening a specialised Machine Learning group. The highly skilled team will conduct research into three key areas of machine learning: machine intelligence, natural language processing and machine perception. All three areas have been specially chosen to help Google create a machine that can behave like a human being.
According to Google, “Natural Language Processing (NLP) research at Google focuses on algorithms that apply at scale, across languages, and across domains. Our systems are used in numerous ways across Google, impacting user experience in search, mobile, apps, ads, translate and more.”
Thanks to its vast experience in search, Google are already extremely familiar with NLP.
Currently used to fine-tune and beef up its search engine, NLP is one of the areas where the company has been focusing much of its attention. From helping develop voice search to improving its translation software, NLP will play a key role in helping Google design an artificial intelligence that can pass the Turing Test – Alan Turing’s gold-standard test for any AI.
But while Allo and Google assistant are clearly a part of Google’s grand artificial intelligence agenda, they will still be providing tremendous value to users in the short term.
The Assistant Who’s Always There
If the vision for assistant is to be “an ambient experience that extends across devices”, what does this mean for how this new tech will be used?
Assistant will put Google search and voice interaction inside a wealth of devices including the aforementioned Google Home, your smartphone and a new piece of wearable tech currently in development with Google.
In fact, the meteoric rise of wearable tech, home automation and the ‘internet of things’ more broadly is a key reason Google is venturing into this space.
A report released earlier this year by the market research firm IDTechEx shows exactly how big the wearable tech industry is becoming. According to IDTechEx, the wearable tech market will be “worth over $30bn in 2016, and growing in three stages”. The report concludes that the industry may reach more than $150 billion within 10 years.
Assistant will, of course, be an integral part of your Android smartphone, but with Google planning to launch two new Nexus smartphones before the end of the year, it may be more central to the mobile experience than previously thought.
One of the most talked-about features of the new Nexus phones and their Android 7.0 operating system is the new ‘flower-like’ home button.
As it stands, the rumour circulating about this enigmatic little button is that it is in fact part of the promotion of Google assistant (possibly via Now / On Tap functions), and that it will be a key part of both the new phones’ search functionality and Google’s new smartwatch.
A More Natural Way to Search
The rise of semantic search and the increased sophistication of Google’s search algorithms is tailor-made for voice search.
Searches are far more complex than they were five years ago, and the rise of voice search and platforms like Google assistant will see search queries become not only more complex but increasingly more conversational.
Even without the invention of a complete virtual assistant, voice search has continued to grow. As reported by Search Engine Watch, a recent study by artificial intelligence technology firm MindMeld shows that 60% of smartphone users in the U.S. had begun using voice search within the past year, with 41% of those surveyed saying they’d only started using voice commands in the past six months.
Semantic search takes a much wider view of a search query than traditional search and uses many factors to return the best results. And assistant is yet another part of the semantic and universal search puzzle that is seeing Google move towards creating a future where search transforms devices into smart machines that use “connected knowledge” to answer complex questions.
Voice search, using natural language, is exploding in popularity. By taking advantage of this – as well as by making assistant available virtually everywhere – Google make their high hopes for the usefulness and adoption of their AI apparent.
You can see some evidence of this in the Home demo – debuted at I/O 2016 – in which an upscale family of four gets ready for the day, conversing mostly with Google assistant (for every word the family says to each other, they say another 3.5 to Google assistant).
But it’s Bhavik Singh’s presentation introducing the Awareness API (shown below – the first 6 minutes are essential viewing) that gives us the full sense of just how assistive and deeply integrated in our lives Google aspires to be.
Helping You, Helping Google
Google assistant promises huge benefits to users, but there’s very little doubt that the platform will provide massive value to Google itself – even if the company has no real idea how they’re going to monetise it yet.
That said, there’s no way Google would launch a product without some way of making money from it.
Something You Can Buy
Initially, Google assistant will appear in two places: Allo and Google Home. Home is, according to almost all the text on its official page:
A voice-activated home product that allows you and your family to get answers from Google, stream music, and manage everyday tasks.
Although the page doesn’t mention that the brain and living heart of Home is Google assistant, it does make it perfectly clear that this is a product. A product you can buy to put in your house – and (okay – we may be extrapolating this next bit from the video) the more rooms you put it in, the better.
Google hasn’t indicated a price, but Amazon Echo, its Alexa-enabled prime competitor, retails at $179.99. The virtual assistant will also become a key selling point of other saleable devices including smartphones and (especially) wearables like smart watches.
Knowledge is Power
Tapping into our conversations, our homes and most of our day-to-day tasks exposes Google to a much richer pool of data on how (and, in real time, exactly when) users really talk and interact with the physical world and other people around them. All this helps Google develop “a deeper profile of the customer” that can be used to improve the profitability of Google’s existing revenue streams – most notably Google AdWords and Google Shopping.
One of the most obvious ways Google can take advantage of this data is by potentially solving the problem of user intent.
Intent is a big, complicated problem, but Amit Singhal (a Google Fellow who led the company’s search ranking efforts) summed it up perfectly in a 2008 post on the Google blog:
Search in the last decade has moved from give me what I said to give me what I want.
For anyone whose entire business model relies on giving people the results they need as fast as possible, this problem of matching what users really meant, rather than what they appear to have said, is absolutely vital.
The issue is that words and phrases can have a range of meanings besides the one “true” meaning intended by the person who originally said them. Some of the leading thinkers and philosophers of the 20th century, like Jacques Derrida, Roland Barthes and Jacques Lacan, have addressed this issue at length. They argue that it is impossible to know exactly what someone really means by using certain words or phrases unless you make a hypothesis and verify it by asking the original speaker or writer if that interpretation is right.
Google assistant solves intent for Roland Barthes
Of course, adding such a box to search results pages may just be too interruptive to improve the search experience for users. But if the question were simply asked by a virtual assistant during the natural flow of a conversation, there may be no detriment to the user experience at all.
Pair this with assistant’s artificial intelligence and over time it will be able to improve its abilities to accurately interpret your words to arrive at the correct meaning.
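The ask-and-remember loop described above might look something like this in miniature. Everything here – the ambiguous-query table, the class and its method names – is invented for illustration; it’s a sketch of the idea, not of anything Google has announced.

```python
# Invented table of queries with more than one plausible meaning.
AMBIGUOUS = {
    "jaguar": ["the animal", "the car maker"],
    "apple": ["the fruit", "the company"],
}

class IntentResolver:
    """Asks a clarifying question once, then remembers the user's answer."""

    def __init__(self):
        self.learned = {}  # query -> meaning this user has confirmed

    def resolve(self, query, ask_user):
        if query in self.learned:       # a learned preference wins outright
            return self.learned[query]
        meanings = AMBIGUOUS.get(query)
        if not meanings:
            return query                # unambiguous: take the query as-is
        choice = ask_user(f"Did you mean {' or '.join(meanings)}?", meanings)
        self.learned[query] = choice    # interpretation improves over time
        return choice

resolver = IntentResolver()
# Stand-in for the conversational turn where the user answers the question.
pick_first = lambda prompt, options: options[0]
print(resolver.resolve("jaguar", pick_first))  # asks once...
print(resolver.resolve("jaguar", pick_first))  # ...then remembers
```

The design choice worth noticing is that the clarifying question costs one conversational turn exactly once per ambiguity; every later query is answered from the learned preference.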
By doing this, Google can ensure that the AdWords and Google Shopping results it delivers across the board are more relevant and therefore more likely to result in the clicks that advertisers pay for.
These improvements in matching results to searches – and the greater click through rates that result – are surely good news for advertisers too.
It doesn’t end there, though. With unprecedented access to new data and new types of data, it’s only a matter of time before Google discovers revenue streams that no one was aware of – or that didn’t even exist – until now.
Google’s Piece of the Pie
As recently as a few years ago, Google was the unassailable king of the internet. But that’s largely because search was king of the internet.
Today, the online landscape is very different. Facebook accounts for more than 20% of the time Americans spend online, there’s an app for everything, virtual assistants retrieve information without users needing to perform a search themselves, and every man and his dog can make a chatbot.
The harsh reality is that Google is fighting against all of these things for market share. Every piece of information retrieved by an assistant, bot or app is one less Google search. Every piece of content served and every connection made via social media (or again: an assistant, bot, or app) is one less that Google can help with.
Google may still have 100 billion searches per month (and voice search now accounts for about a fifth of them), but search is not as big a part of the internet pie as it once was.
So for Google to remain competitive, they must either broaden the scope of what “search” is or look beyond it.
Google seem to be doing both. If search is essentially a process of telling Google what information you need followed by Google finding and serving you that information; an intelligent, conversational assistant is essentially augmenting that process. Assistant will work with you to home in on the information you need before the assistant itself searches for and retrieves that information.
The other side of the equation – looking “beyond” search – becomes apparent when you consider the entire ecosystem to be populated by Google assistant. By placing their assistant everywhere users might otherwise go before (or instead of) conducting a search, Google blurs the lines between engaging with the web via search and engaging with it any other way.
The Fly in the Ointment
There’s no doubt that Google’s virtual assistant has the potential not only to be best-in-class and provide tremendous value to users and to Google itself, but also to revolutionise the online landscape altogether.
There is one question left to answer: will people actually use it?
We will almost certainly first meet assistant in Allo, Google’s new messaging app – and this may be a problem.
One of Allo’s key selling points is a Smart Reply feature (which is also likely one of assistant’s key learning mechanisms) that scans all your and your friends’ messages in order to suggest replies for you.
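Google’s actual Smart Reply is built on learned sequence-to-sequence models, but the interaction pattern – incoming message in, a few ranked canned suggestions out – can be sketched with keyword rules invented purely for illustration:

```python
# Toy illustration of the Smart Reply interaction pattern. The keyword
# rules below are made up; the real system learns suggestions from data.
SUGGESTION_RULES = [
    ({"dinner", "lunch", "eat"}, ["Sounds delicious!", "I'm in.", "Where?"]),
    ({"late", "delayed"},        ["No worries.", "See you soon."]),
    ({"photo", "picture"},       ["Cute!", "Love it."]),
]

def smart_replies(message, max_suggestions=3):
    # Normalise the message into a bag of lowercase words.
    words = {w.strip(".,!?") for w in message.lower().split()}
    suggestions = []
    for keywords, replies in SUGGESTION_RULES:
        if words & keywords:             # any trigger keyword present
            suggestions.extend(replies)
    return suggestions[:max_suggestions]

print(smart_replies("Want to get dinner tonight?"))
```

The important implication for the privacy discussion that follows is visible even in this toy: `smart_replies` must see the plaintext of the message to produce anything at all.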
A demo at Google’s I/O 2016 showed just how powerful this feature is, but it has a cost that may deter many users.
For Google to be able to read and analyse all those messages, however, they must be sent without end-to-end encryption. Google’s decision to disable this encryption by default has been broadly criticised, including in statements by Edward Snowden, who warns that the “new Allo chat app is dangerous” and cautions against using it.
An Incognito mode available to Allo users does feature end-to-end encryption, as well as a Snapchat-like ability to “expire” messages after a certain amount of time, but this mode will not feature Smart Reply.
Inertia may prove a much bigger challenge: there’s still no guarantee that if you build it, the people will come.
With most people already using messaging apps like Facebook Messenger or Slack that they’re happy enough with, is Google assistant’s AI enough of a drawcard to convince them – and enough of their contacts to make it worthwhile – to switch to something new?
It certainly does not help Google’s case that their attempts to do the exact same thing in social media (with Google Plus, but also Google Buzz, Google Friend Connect and Orkut before that) – made when they thought Facebook was going to kill them – all fell flat.
Despite Google’s best-in-industry work to date, it’s clear they still have an uphill battle to demonstrate their assistant’s usefulness and get people to actually use it. If Google can pull this off, they may very well usher in the next evolution of the web.
If they can’t, it’s entirely possible that someone else – like Facebook and their ever-expanding walled garden, or Microsoft with their “conversations as a platform” ambitions – will.