Google recently shared their 2017 AdWords product roadmap at Google Marketing Next. Because the audience is composed primarily of executives at big agencies and big brands, and Google is doing its best to get them excited about all their capabilities, the event sometimes skims over the details that matter to those of us managing accounts day-to-day.
I’ll share my take on the announcements and what excited or frustrated me the most. Even though it’s now been five years since I left Google, all but one of the presenters are people I used to work with, and they were kind enough to invite me backstage to get a bit more detail than what was covered in the keynote.
I’m a PPC geek, so I obviously love better targeting. That’s why the announcement of in-market audiences for search got me so excited. How often have we all wished for a way to look beyond the query and distinguish between a prospective buyer and a kid doing research for a school project? Access to in-market audiences lets us make that distinction so that we can bid more aggressively for better-qualified leads.
But guess what? Everyone will now bid more for better-qualified traffic because it should convert better. According to Bhanu Narasimhan, Google’s director of audience products, conversion rates for in-market audiences are on average 10 percent better. So get your boss ready for the inevitable run-up in maximum bids you’ll need to set to remain competitive.
Unfortunately, this feature’s exact launch date was not announced; Google only said it’d be available by the end of 2017.
Currently in the US, there are 493 in-market audiences for the Display Network across a number of verticals. That’s a lot of options, but just as we had affinity audiences before custom affinity audiences, now we’re about to get custom in-market audiences.
Karen Yao, Google’s group product manager for ad platforms, revealed this very cool update: we will be able to create custom in-market audiences by adding keywords we believe someone would have used if they were in the market for our product or service. Combined with Google’s vast amounts of data, this can then help us find an audience of people in the market for what we sell.
Google can offer custom in-market audiences and targeting based on life events because its machine learning is very good. Identifying who is going through a targetable life event, like graduating from college, buying a home, getting married or having a baby, is done by understanding the online behavior that corresponds to these events.
With simple artificial intelligence (AI), engineers could write basic “if-then” statements to place people into these targeting groups based on a handful of searches they did. But now that Google has built its much faster Tensor Processing Unit (TPU) to accelerate TensorFlow, the machine learning framework that underpins its AI efforts, you can bet its systems for identifying which users are going through a particular life event will be very good, and very useful for advertisers.
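To make the contrast concrete, here is a minimal sketch of what that naive “if-then” approach might look like. The event names and keyword lists are entirely made up for illustration; this is exactly the kind of brittle hand-written logic that machine learning replaces:

```python
# Hypothetical rule-based life-event classifier: hand-written "if-then"
# logic mapping a few searches to a life event. All keywords are invented.
RULES = {
    "getting_married": {"wedding venue", "engagement ring", "bridal dress"},
    "buying_a_home": {"mortgage rates", "home inspection", "moving company"},
    "having_a_baby": {"crib", "maternity clothes", "baby names"},
}

def classify_life_event(recent_searches):
    """Return the life events whose rule keywords appear in a user's searches."""
    searches = {s.lower() for s in recent_searches}
    return [event for event, keywords in RULES.items() if searches & keywords]

print(classify_life_event(["Mortgage rates", "best pizza", "home inspection"]))
```

A learned system does away with these fixed lists: it infers the associations from behavior at scale, including culture-specific signals (like the wedding example below) that no hand-written rule would cover.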
In the example they gave, they showed how people in different cultures might search for different things related to a wedding. Google’s machine learning can pick up on these differences and know that they all correspond to the getting-married life event.
We’ve all been doing A/B ad testing for years. But that’s becoming much less relevant if you look at what Google is now able to do with AI. Sridhar Ramaswamy, senior vice president of ads and commerce, showed an example of three users all searching for something pretty generic (like “cheapest hotel”) but each one being served a different ad from the same advertiser.
The different ads weren’t driven by audience bid adjustments or some other thing we control — rather, it was AdWords predicting each user’s preference in order to show subtle ad text variations, focusing either on price, value or selection.
As someone who’s created tools for ad optimization in our software suite at Optmyzr, the takeaway for me was that we should focus on creating a ton of ad variations and then let the machines decide which one to serve. For advertisers, that means writing ad variations is likely to become a bigger task than before, since we need to feed the machine all the possible variations it requires to do an amazing optimization.
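The “let the machine decide” idea can be sketched with a simple epsilon-greedy bandit that learns which variation earns the best click-through rate. This is only a toy under my own assumptions; Google’s actual serving models go much further, predicting preferences per user rather than per audience:

```python
import random

# Toy epsilon-greedy picker over ad variations: mostly serve the variation
# with the best observed CTR, occasionally explore a random one.
class AdVariationPicker:
    def __init__(self, variations, epsilon=0.1):
        self.variations = list(variations)
        self.epsilon = epsilon                      # exploration rate
        self.shows = {v: 0 for v in self.variations}
        self.clicks = {v: 0 for v in self.variations}

    def ctr(self, v):
        """Observed click-through rate for a variation (0 if never shown)."""
        return self.clicks[v] / self.shows[v] if self.shows[v] else 0.0

    def pick(self):
        if random.random() < self.epsilon:
            return random.choice(self.variations)   # explore
        return max(self.variations, key=self.ctr)   # exploit best CTR so far

    def record(self, v, clicked):
        self.shows[v] += 1
        self.clicks[v] += int(clicked)

picker = AdVariationPicker(["Lowest prices", "Huge selection", "Top quality"])
```

The more variations you feed a system like this, the more room it has to find a winner, which is why ad creation becomes the bigger job.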
Search Engine Land paid media reporter Ginny Marvin wrote a great recap of what Google Attribution is, an important piece to read if you’ve been wondering why Google decided there was a need for yet another attribution modeling tool (we already have attribution in AdWords, Analytics and DoubleClick).
I am excited about this new offering because when I got to play with it, I saw just how quick and easy it was to get up and running. But easy setup is meaningless unless the tool is also really good, so the real reason for my excitement is that data-driven attribution modeling is now becoming much more accessible.
The problem with attribution models is that they are our best-effort attempt at modeling real-world behavior with a somewhat limited set of tools. Thanks to improved store visit data, store sales data, easier consolidation of data and Google’s AI — four themes of the event — we no longer have to flail around trying to do something really complicated by hand. Data-driven models evaluate how each touch point contributes to the eventual outcome.
In AdWords, that means knowing how a click on one more keyword will change conversion rates. By looking beyond AdWords, it means knowing how the interplay of channels, impressions, clicks and more contribute to a conversion.
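The intuition behind evaluating each touch point can be sketched with a toy “removal effect” calculation: credit a channel in proportion to the conversions that would be lost if it disappeared from the path. The paths, counts and channel names here are invented, and real data-driven models are far richer than this:

```python
# Toy data-driven attribution via removal effect. Each entry is a
# (touchpoint path, conversions observed on that path) pair -- all invented.
paths = [
    (("search", "display", "search"), 30),
    (("display", "search"), 20),
    (("search",), 40),
    (("display",), 10),
]

channels = {ch for path, _ in paths for ch in path}

def removal_effect(channel):
    """Conversions on paths that would lose all touches of `channel`."""
    return sum(conv for path, conv in paths if channel in path)

total_effect = sum(removal_effect(ch) for ch in channels)
credit = {ch: removal_effect(ch) / total_effect for ch in channels}
# In this toy data, search earns 60% of the credit and display 40%,
# versus 100%/0% under a naive last-click model for some of these paths.
```

Even this crude version shows why data-driven models beat last-click: display gets real credit for the conversions it assisted.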
With Google Attribution, Google runs the models and feeds the data back into AdWords, where we can use a flexible bid strategy, or use the enhanced data to achieve better results using the bid management tool of our choice. In Optmyzr, that means you’ll get better insights to help set bids and do optimizations with the same tools you’re already used to.
The thing I wish Google would work on next is making it easier to import data from competing channels into Analytics. Right now, to get the full picture, we still need to tag campaigns and import cost data. I also hope that somehow they can use data across accounts to reduce the currently very high requirement of 600 conversions over a 30-day period before data-driven models start to work.
At I/O, Google announced that 20 percent of searches in the Google mobile app in the US are done by voice. Sridhar Ramaswamy repeated that amazing stat at Google Marketing Next.
Does that mean that we’re on the verge of not needing keywords anymore? Luckily not — it turns out that the majority of voice searches still lead to a traditional search results page; users are simply substituting speaking for typing the query into the search box. Only a small portion of voice interactions happen through the Google Assistant. The key difference is that when voice merely substitutes for typing, the results are still returned on-screen, whereas with the Assistant, the entire interaction happens by voice.
Regardless, I hear from a lot of advertisers who want a better presence in these Assistant-style interactions. Most of the Assistant’s data comes from data we already provide Google, so be sure to have a Google My Business account to manage your location info, and use local inventory feeds to give Google data about prices and inventory at your locations.
Google has now also opened up the ability for developers to build actions so that in response to a conversation, the Assistant could do a transaction with the user. The example given by Google is a frequent business traveler who asks her Assistant for the next flight. Knowing that she flies from SFO to LAX every week on United, it could give info on the price of the next flight and even book the ticket, all by voice.
I suspect supporting Google’s buy buttons, which they call Purchases on Google (managed in the Merchant Center), will also become a way to get your online store ready for voice-driven transactions.
Every single announcement I’ve covered here has some connection to machine learning and artificial intelligence. So where do we all fit into this evolution toward ever more complexity, where humans can no longer hope to achieve great results without the help of tremendous computing power?
This question got me thinking about Lee Sedol, the Go champion who lost to AlphaGo, Google DeepMind’s program, in 2016. The part of the story that didn’t receive as much coverage is that Lee Sedol said being schooled by the machine taught him to become a better player. Wired said that the pivotal play in the match was also the moment that “machines and humanity finally began to evolve together.” While the move that set up the machine’s win was puzzling to humans, it opened Lee Sedol’s eyes to strategies he hadn’t considered before. So how can we as marketers learn from what the AdWords machine does?
Google’s Paul Muret, one of the founders of Urchin (now Google Analytics), explained to me that Surveys 360 can help us gain insights. The idea is that through the new integration between Surveys 360 and remarketing lists, we can poll users who’ve interacted with our site and ads so we can ask them what features they wanted or what compelled them to buy or not.
On last week’s #ppcchat on Twitter, a lot of people agreed that Surveys 360 can only be as effective as the questions being asked. I gave this example:
If airlines asked consumers what they wanted most and didn’t qualify it with price, they’d be putting in seats that nobody would actually want to pay for.
It’s clear that AdWords will continue to be a major force in online marketing in 2017 and beyond, and I am excited to try out many of the announced capabilities as soon as they are available. While I am a fan of automation, I truly hope that AdWords finds a way to add some transparency to what its artificial intelligence does so that we can learn from it and evolve together.
Some opinions expressed in this article may be those of a guest author and not necessarily Search Engine Land. Staff authors are listed here.