SMX West keynote: Google talks ranking and more in Google Assistant
Google's Jason Douglas shares what marketers and developers need to know about Actions on Google.
Good morning from day one of SMX West 2017! We’re in San Jose, Calif., where the opening keynote conversation is set to begin at 9 a.m. PT. Jason Douglas, PM Director for Actions on Google, will be talking about Google Assistant with Search Engine Land’s Chris Sherman and Greg Sterling.
Douglas will be talking about the developer ecosystem that’s growing around Google Assistant, and specifically about “Actions” — Google’s term for tools and features that let brands, marketers, developers and others interact with users through Google Assistant. Think of Actions as roughly analogous to apps on a smartphone or tablet.
The keynote conversation is set to begin at the top of the hour, and we’ll be live-blogging right here once it gets started. So come back then, or just stay tuned, and feel free to refresh the page for the latest from SMX West!
Okay, we’re set to go. I’ll be using JD for Jason Douglas, CS for Chris Sherman and GS for Greg Sterling. (Cross your fingers that I can keep up with three people talking on stage!) I’ll also try to sprinkle in some tweets along the way.
Jason is going to start with a brief presentation before the conversation begins.
He reminds the audience about Google’s mission to organize the world’s information. He says Google thinks we’re at an inflection point now in how information is accessed. Douglas talks about how Google was born in the era of organizing web content, accessible via a simple text box — users could type what they were looking for and get blue links to web pages and documents.
Technology doesn’t stand still, he says — computing has changed. We have computers in our pockets, and we use them for doing more than looking for information. We order movie tickets and food. Our cars and TVs are computers. Computers are everywhere.
The cost is complexity. Should I have to install a new app just to change my light bulb? That doesn’t make sense. We need a tool to tame this.
Enter the Google Assistant. It’s a conversational experience between you and Google to help you get things done in your world. Whatever you need help with, you should be able to ask the Assistant. You should be able to use natural language. We believe that conversation is the simplest and most universal way of getting things done. But it’s not just about understanding words. It’s about context.
The Assistant should know something about you to understand you — your situation, your location, your need, so it can understand you as efficiently as possible. This is exciting because it feels like it’s the right time for this — the technology is just now making this possible. Speech recognition and machine learning are important developments. Maps, the Knowledge Graph, structured data — all these are helpful in understanding how to get things done.
Assistant began on Google Allo, then Google Home and Pixel — it’s in cars, too. Anywhere people need help, Google Assistant should be there. This is why we think it’s important to invest in a robust developer platform. That’s where Actions on Google comes in. This is what I work on. We think this can be the next big ecosystem in the tradition of Search and Google Play.
Three focuses for Actions on Google:
- Connect with users wherever the Google Assistant is available.
- Help users get things done with your service.
- Innovate with the conversational interface.
1. Connect with users. We’ve launched a simple directory for users to discover services and for marketers to promote theirs. Actions can also be invoked by name and by intent.
2. Help users get things done. We should understand their intent. If a user asks for a recipe, we can bring in the right Action to help. Identity and payments — making them as easy and natural as possible.
3. Innovation — conversation is very different from other mediums. We’re trying to share the best practices that we’ve learned about using conversation as a medium.
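The intent-based model Douglas describes — the Assistant resolves what the user wants, then routes the request to a registered Action — can be sketched as a simple dispatch table. This is a hypothetical illustration of the pattern only; the function and intent names (`handle_request`, `recipe.search`, etc.) are invented for this sketch and are not the real Actions on Google API.

```python
# Hypothetical sketch: the Assistant resolves a user utterance to an
# intent, then dispatches it to whichever registered Action handles
# that intent. All names here are illustrative, not Google's API.

def find_recipe(params):
    # A registered Action's fulfillment logic would run here.
    dish = params.get("dish", "something tasty")
    return f"Here's a recipe for {dish}."

def set_timer(params):
    minutes = params.get("minutes", 5)
    return f"Timer set for {minutes} minutes."

# A directory of Actions keyed by intent, loosely analogous to the
# services directory mentioned in the talk.
ACTIONS = {
    "recipe.search": find_recipe,
    "timer.set": set_timer,
}

def handle_request(intent, params):
    """Dispatch a resolved intent to its Action, with a fallback."""
    action = ACTIONS.get(intent)
    if action is None:
        return "Sorry, I can't help with that yet."
    return action(params)
```

For example, `handle_request("recipe.search", {"dish": "paella"})` returns `"Here's a recipe for paella."` — the user never "installs" anything; registration in the directory is what makes the Action reachable.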
And with that, the presentation is over and Chris Sherman and Greg Sterling are taking the stage.
CS: What would you do to encourage this audience to think about and take advantage of Actions?
JD: I’d start with the developer site. Think about the design.
GS: I was interested to hear you say that third-party services aren’t things that need to be installed. Mentions how you have to install apps on Amazon Echo.
JD: Yes, anything that’s been registered with the platform is available to users. Mentions the Personal Chef example from his presentation.
GS: In mobile, there’s the challenge of app discovery. App stores, social media, search. What do you envision for Google Home — for Actions, how do you envision discovery in the future?
JD: Actions will eventually be available across devices and platforms, not just on Home. So that’s more potential distribution. Voice is best for discovery, but there are challenges — presenting lots of choices is difficult on voice. In terms of being discovered, the most important thing will be to provide something useful for users. We are providing some hooks to be able to market a service.
GS: Is there an Actions directory?
JD: It’s buried inside the app, but we’ll make it more prominent over time.
GS asks a question about the relationship between search and the Assistant.
JD: I think of them as somewhat different. Search is part of what Assistant is about, but it’s also about cooking timers and knowing your calendar and stuff that’s personal to you, things you want to get done.
We do see a user journey where search can lead into something transactional and helpful, like asking about a movie and then buying a ticket to see it.
CS: How do you deal with services and choosing which one to surface for a user? Is a search index even the way to think about things anymore?
JD: It’s very different from how a web document is ranked. Ultimately, I think it’s going to be about what is the easiest way to help the user get things done — we’re still figuring that out.
CS: You mentioned identity in your talk, which raises the obvious question about privacy.
JD: We take privacy extremely seriously. A product like the Assistant isn’t going to work without trust. When it comes to sharing information, there’s always consent. We want to give users control over all of this.
GS: What does Google’s research reveal about user preference for how they receive information via voice?
JD: There’s been an emphasis, especially on mobile, on moving toward providing answers. That succinct summary is often the best thing for users.
GS: Will I be able to tell the Assistant what my preferred services are — I prefer TripAdvisor for travel queries, for example?
JD: That’s one way. We’re trying to decide now how sticky those preferences should be. In some cases, you can set some preferences in the app. We’re trying to learn as we go. For shopping, is it convenience or the best price that matters most? There are a lot of new ranking and quality challenges.
Some of these services are pretty high stakes. It can be money, or health, or home repair … it’s a big deal if you get a bad result. We’re exploring how to manage quality there so users can continue to trust Assistant.
Q: How do you envision ads being incorporated into the Assistant ecosystem?
JD: It’s early on. We’re trying to learn what the best opportunities and user experiences are. It won’t be successful as a developer platform if developers can’t be successful as businesses. We’re still trying to figure this out.
Q: How does Assistant deal with slang?
JD: I’m not the best person to answer this, but we’ve made a lot of big investments around language recognition and machine learning.
Q: How do you see Assistant fitting into a B2B landscape?
JD: I think Google has a history of starting with the consumer side and bringing it to B2B later. There was a demo at the Cloud conference recently showing some possibilities. I’m focused on the consumer side.
Q: How do you decide between proximity and quality on what to recommend?
JD: I think the signals are different between a web search and voice search. As I said earlier, we’re still working on figuring out the best signals for users.
And with that, the keynote conversation is over! Thanks for tuning in.