Should Google be more transparent with its updates?

It might seem hard to recall now, but there was a time when Google would regularly announce updates to its ranking algorithms, confirming what they were and how they would affect websites.

During these halcyon days, information about Google ranking updates was generally delivered via Google engineer and head of Google’s Webspam Team Matt Cutts, who was to many marketers the public face of Google.

As someone who was involved in helping to write the search algorithms himself, Matt Cutts was an authoritative voice about Google updates, and could be depended on to provide announcements about major algorithm changes.

Since Cutts’ departure from Google, however, things have become a lot more murky. Other Google spokespeople such as Gary Illyes and John Mueller have been less forthcoming in confirming the details of algorithm updates, and the way that Google makes updates has become less clearly defined, with regular tweaks being made to the core algorithm instead of being deployed as one big update.

Occasionally Google will go on record about an upcoming major change like penalties for intrusive interstitials or a mobile-first search index, but this has become the exception rather than the rule. A glance down Moz’s Google Algorithm Change History shows this trend in action, with most recent updates referred to as “Unnamed major update” or “Unconfirmed”.

The world of SEO has adapted to the new status quo, with industry blogs fervently hunting for scraps of information divulged at conferences or on social media, and speculating what they might mean for webmasters and marketers.

But does it have to be this way? Should we simply accept the secrecy surrounding Google’s updates – or, considering the massive influence that Google holds over so many businesses and websites, are we owed a better level of transparency?

A “post-update” world

At last month’s SMX West search marketing conference, the topic of ‘Solving SEO Issues in Google’s Post-Update World’ was a key focus.

But even before SMX West took place, the issue of Google’s lack of transparency around updates had been brought front and centre with Fred, an unnamed and all but unconfirmed ranking update from Google which shook the SEO world in early March.

Fred had an impact on hundreds of websites which saw a sudden, massive drop in their organic search rankings, leaving website owners and SEOs scrambling to identify the cause of the change.

But Google consistently refused to go on record about the algorithm update and what was causing it. It only gained the name ‘Fred’ thanks to a flippant comment made by Google’s Gary Illyes that “From now on every update, unless otherwise stated, shall be called Fred”.

@rustybrick @i_praveensharma @JohnMu sure! From now on every update, unless otherwise stated, shall be called Fred

— Gary Illyes ᕕ( ᐛ )ᕗ (@methode) March 9, 2017

When pressed about Fred during a Google AMA session at SMX West, Illyes replied that the details about what Fred targeted could be found “in the webmaster guidelines”, but declined to give more specifics.

After the Fred update hit, reports surfaced that the algorithm change seemed to be targeting websites with poor link profiles, or those that were ad-heavy with low-value content.

By these accounts, the websites affected were engaging in poor SEO practices, and it can be argued that sites that do this shouldn’t be surprised when Google hits them with a ranking penalty.

However, if Google wants to clean up the web by rewarding good practices and punishing bad ones – as its actions would suggest – then wouldn’t it be more beneficial to confirm why websites are being penalised, so that their owners can take steps to improve? After all, what’s the point of a punishment if you don’t know what you’re being punished for?

On the other hand, you could argue that if Google specified which practices webmasters were being punished for, this would only help bad actors to avoid getting caught, not provide an incentive to improve.

The pros and cons of Google transparency

In the wake of Google Fred, I asked the Search Engine Watch audience on Twitter whether they thought that Google owed it to its users to be more transparent.

Several people weighed in with strong arguments on both sides. Those who agreed that Google should be more transparent thought that Google owed it to SEOs to let them know how to improve websites.

@rainbowbex Google should be more transparent. If 1 in 100 websites gets hit with penalties, I’d like to know whats different bout that 1.

— Dani (@emo_tigger_xo) March 17, 2017

@rainbowbex Yes it should be, especially when most legitimate SEO’ers / Agencies want to keep client sites up to speed with requirements.

— Assertive-Media (@AssertiveMedia) March 17, 2017

Additionally, if Google expects website owners to make their sites more user-friendly, then maybe Google should be informing them what it thinks the user wants.

We’ve already seen how this can work in practice, with Google’s mobile-friendly ranking signal giving webmasters an incentive to improve their mobile experience for users.

@rainbowbex @sewatch Y-if G wants us to optimise the web for the user, we need to know what the user wants/what Google thinks the user wants

— Dan Tabaran (@dtabaran) March 17, 2017

Others argued that with so many bad actors and black hat SEOs already trying to abuse the system, complete Google transparency would lead to chaos, with people gaming the system left, right and center.

I can appreciate the stance. Countless people game the system already. If Google were more transparent, it could make for complete chaos. https://t.co/eGdj2GcwDL

— Brandon Wilson (@digital_visions) March 17, 2017

One Twitter user made an interesting point that Google might not necessarily want to help SEOs. At the end of the day, all SEOs are trying to game the system to some extent. Search engine optimization is a game of finding the right combination of factors that will allow a website to rank highly.

Some play by the rules and others cheat, but at the end of the day, there is an element of manipulation to it.

@rainbowbex @sewatch Google is not a fan of optimization companies. They think of it as “cheating” to get ranked higher.

— Taylor Wienke (@TaylorWienke) March 17, 2017

We have a tendency to assume that Google and SEOs – at least of the white hat variety – are on the same side, working to achieve the same goal of surfacing the most relevant, high quality content for users. By that logic, Google should help good SEOs to do their job well by disclosing details of algorithm updates.

But if Google and search specialists aren’t really on the same side, then what obligation does Google have to them?

Is obsessing about updates missing the point?

Maybe all of this debate about algorithm transparency is missing the point. If we agree that website owners should be giving users the best experience possible, then perhaps they should be concentrating on that rather than on the “game” of trying to rank highly in Google.

Michael Bertini, Online Marketing Consultant and Search Strategist at iQuanti and a long-time consultant on all things search, believes that website owners should do exactly that.

“In all my years doing this with both black hat and white hat methods, the best thing anyone could ever do is to do things for the end-user, and not for Google.

“Have you ever Google searched something in the morning and then by noon, it’s dropped a position? This happens all the time. Granted it mostly happens on page three and above, but every once in a while we do see it on page one.

“What I tell my team and clients is this: if Google makes a change in the algorithm or you notice a drop in your rankings or even an increase in your rankings – don’t take this as permanent.”

Bertini also believes that anyone who is not actively engaging in bad SEO practices should have nothing to fear from a Google algorithm update.

“So long as you’re not keyword stuffing, buying links, building links from private networks, purchasing social followers or shares, running traffic bots, or any other tactics that could come off as trying to trick Google… you should be fine.

“Those who have to worry about algorithmic updates are usually those who are always looking for a way to manipulate Google and the rankings.”

8 technical issues holding your content back

Technical SEO has certainly fallen out of fashion somewhat with the rise of content marketing, and rightly so.

Content marketing engages users and delivers real value, and it can put you on the map, placing your brand in front of far more eyeballs than fixing a canonical tag ever could.

While content is at the heart of everything we do, there is a danger that ignoring a site’s technical set-up and diving straight into content creation will not deliver the required returns. Failure to properly audit and resolve technical concerns can disconnect your content efforts from the benefits they should be bringing to your website.

The following eight issues need to be considered before committing to any major campaign:

1. Not hosting valuable content on the main site

For whatever reason, websites often choose to host their best content off the main website, either in subdomains or separate sites altogether. Normally this is because it is deemed easier from a development perspective. The problem with this? It’s simple.

If content is not in your main site’s directory, Google won’t treat it as part of your main site. Any links acquired by a subdomain will not be passed to the main site in the same way as they would be if the content sat in a directory on the main site.

Sistrix posted this great case study on the job site Monster, which recently migrated two subdomains into its main site and saw a 116% uplift in visibility in the UK. The chart speaks for itself:

We recently worked with a client who came to us with a thousand referring domains pointing towards a blog subdomain. This represented one third of their total referring domains. Can you imagine how much time and effort it would take to build one thousand referring domains?

The cost of migrating content back into the main site is minuscule compared with earning links from one thousand referring domains, so the business case was simple, and the client saw a sizeable boost from this.

2. Not making use of internal links

The best way to get Google to spider your content and pass equity between sections of the website is through internal links.

I like to think of a website’s link equity as heat which flows through the site via its internal links. Some pages are linked to liberally and so run really hot; other pages are pretty cold, receiving only a trickle of heat from other sections of the site. Google will struggle to find and rank these cold pages, which massively limits their effectiveness.

Let’s say you’ve created an awesome bit of functional content around one of the key pain points your customers experience. There’s loads of search volume in Google and your site already has a decent amount of authority so you expect to gain visibility for this immediately, but you publish the content and nothing happens!

You’ve hosted your content in some cold directory miles away from anything that is regularly getting visits and it’s suffering as a result.

This works both ways, of course. Say you have a page with lots of external links pointing to it, but no outbound internal links – this page will be red hot, but it’s hoarding the equity that could be used elsewhere on the site.

Check out this awesome bit of content created about Bears Ears national park:

Ignoring the fact this has broken rule No.1 and is on a subdomain, it’s pretty cool, right?

Except they’ve only got a single link back to the main site, and it is buried in the credits at the bottom of the page. Why couldn’t they have made the logo a link back to the main site?

You’re probably going to have lots of content pages which are great magnets for links, but more than likely these are not your key commercial pages. You want to ensure relevant links are included between these hot pages and your key pages.

One final example of this is the failure to break up paginated content with category or tag pages. At Zazzle Media we’ve got a massive blog section which, at the time of writing, has 49 pages of paginated content! Link equity is not going to be passed through 49 paginated pages to historic blog posts.

To get around this we included links to our blog posts from our author pages which are linked to from a page in the main navigation:

This change allows our blog posts to be within three clicks of the homepage, thus getting passed vital link equity.

Another way around this would be to add tag or category pages to the blog – just make sure these pages do not cannibalize other sections of the site!

3. Poor crawl efficiency

Crawl efficiency is a massive issue we see all the time, especially with larger sites. Essentially, Google will only crawl a limited number of pages on your site at any one time. Once it has exhausted its budget it will move on and return at a later date.

If your website has an unreasonably large number of URLs then Google may get stuck crawling unimportant areas of your website, while failing to index new content quickly enough.

The most common cause of this is an unreasonably large number of query parameters being crawlable.

You might see the following parameters working on your website:

https://www.example.com/dresses

https://www.example.com/dresses?category=maxi

https://www.example.com/dresses?category=maxi&colour=blue

https://www.example.com/dresses?category=maxi&size=8&colour=blue

Functional parameters are rarely search friendly. Creating hundreds of variations of a single URL for engines to crawl individually is one big crawl budget black hole.

River Island’s faceted navigation creates a unique parameter for every combination of buttons you can click:

This creates thousands of different URLs for each category on the site. While they have implemented canonical tags to specify which pages they want in the index, canonical tags do not control which pages are crawled, and much of their crawl budget will be wasted on this.

Google have released their own guidelines on how to properly implement faceted navigation, which is certainly worth a read.

As a rule of thumb though, we recommend blocking these parameters from being crawled, either through marking the links themselves with a nofollow attribute, or using the robots.txt or the parameter tool within Google Search Console.
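To illustrate the robots.txt route, a rough sketch (reusing the hypothetical dress-filter parameters from the example URLs above – adapt the patterns to your own site) might look like this:

    User-agent: *
    # Stop crawlers wasting budget on filter parameter combinations
    Disallow: /*?category=
    Disallow: /*colour=
    Disallow: /*size=

Rules like these are worth checking in Google Search Console’s robots.txt tester before going live, to make sure they don’t accidentally block URLs you want crawled.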

All priority pages should be linked to elsewhere anyway, not just the faceted navigation. River Island have already done this part:

Another common cause of crawl inefficiency arises from having multiple versions of the website accessible, for example:

https://www.example.com

http://www.example.com

https://example.com

http://example.com

Even if the canonical tag specifies the first URL as our default, this isn’t going to stop search engines from crawling other versions of the site if they are accessible. This is certainly pertinent if other versions of the site have a lot of backlinks.

Keeping all versions of the site accessible can make four versions of every page crawlable, which will kill your crawl budget. Redirect rules should be set up so that any request for a non-canonical version of a page is 301 redirected to the preferred version in a single step.
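On an Apache server, for example, a single rule set along these lines (assuming https://www.example.com is the preferred version – a sketch, not a drop-in fix) would catch the other three variants in one hop:

    RewriteEngine On
    # Send any non-HTTPS or non-www request to https://www.example.com in a single 301
    RewriteCond %{HTTPS} off [OR]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]

The equivalent can be achieved in nginx or at CDN level; the important part is that every variant reaches the preferred URL in one redirect rather than a chain.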

One final example of wasted crawl efficiency is broken or redirected internal links. We once had a client query the amount of time it was taking for content in a certain directory to get indexed. From crawling the directory, we realised instantly that every single internal link within the directory was pointing to a version of the page not appended with a trailing slash, and then a redirect was forcing the trailing slash on.

Essentially, for every link followed, two pages were requested. Broken and redirected internal links are not a massive priority for most sites, as the benefit of fixing them rarely outweighs the resource required, but it is certainly worth resolving priority issues (such as links within the main navigation, or in our case entire directories of redirecting links), especially if you have a problem with the speed at which your content is being indexed.

Just imagine if your site had all three issues: endless functional parameters across four accessible versions of the site, with two pages requested for every link followed!

4. Large amounts of thin content

In the post-Panda world we live in, this really is a no-brainer. If your website has large amounts of thin content, then sprucing up one page with 10x better content is not going to be enough to hide the deficiencies your website already has.

The Panda algorithm essentially scores your website based on the amount of unique, valuable content you have. Should the majority of your pages fail to meet the minimum standard required to be deemed valuable, your rankings will plummet.

While everyone wants the next big viral idea on their website, when doing our initial content audit it’s more important to look at the current content on the site and ask the following questions: Is it valuable? Is it performing? If not, can it be improved to serve a need? Removal may be required for pages which cannot be improved.

Content hygiene is more important initially than the “big hero” ideas, which come at a later point within the relationship.

5. Large amounts of content with overlaps in keyword targeting

We still see websites making this mistake in 2017. For example, if our main keyword is blue widgets and is being targeted on a service page, we might want to make a blog post about blue widgets too! Because it’s on our main service offering, let’s put a blurb on our homepage about blue widgets. Oh, and of course, you also have a features of our blue widgets page.

No! Just stop, please! The rule of one keyword per page has been around for nearly as long as SEO, but we still see this mistake being made.

You should have one master hub page which contains all the top line information about the topic your keyword is referencing.

You should only create additional pages where there is significant search volume around long tail variations of the term, and on those pages you should target the long tail keyword and the long tail keyword only.

Then link prominently between your main topic page and your long-tail pages.

If you have any additional pages which do not provide any search benefit, such as a features page, then consider consolidating the content onto the hub page, or preventing this page from being indexed with a meta robots noindex attribute.
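The tag itself is a one-liner in the page’s <head>; something along these lines (keeping “follow” so the page still passes equity through its internal links) is a reasonable starting point:

    <meta name="robots" content="noindex,follow">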

So, for example, we’ve got our main blue widgets page, and from it we link out to a blog post on the topic of why blue widgets are better than red widgets. Our blue widgets feature page has been removed from the index and the homepage has been de-optimized for the term.

6. Lack of website authority

But content marketing helps attract authority naturally, you say! Yes, this is 100% true, but not all types of content marketing do. At Zazzle Media, we’ve found the best ROI on content creation comes from evergreen, functional content which fulfils search intent.

When we take a new client on board we do a massive keyword research project which identifies every possible long tail search around the client’s products and services. This gives us more than enough content ideas to bring in top-of-the-funnel traffic that we can then try to push strategically down the funnel through the creative use of other channels.

The great thing about this tactic is that it requires no promotion. Once it becomes visible in search, it brings in traffic regularly without any additional budget.

One consideration before undertaking this tactic is the amount of authority a website already has. Without a certain level of authority, it is very difficult to get a web page to rank well for anything, no matter how good the content.

Links still matter in 2017. While brand relevancy is the new No.1 ranking factor (certainly for highly competitive niches), links are still very much No. 2.

Without an authoritative website, you may have to step back from creating informational content for search intent, and instead focus on more link-bait types of content.

7. Lack of data

Without data it is impossible to make an informed decision about the success of your campaigns. We use a wealth of data to make informed decisions prior to creating any piece of content, then use a wealth of data to measure our performance against those goals.

Content needs to be consumed and shared, customers retained and engaged.

Keyword tools like Storybase will provide loads of long tail keywords on which to base your content. Ahrefs’ Content Explorer can help validate content ideas by comparing the performance of similar ideas.

I also love using Facebook page insights on custom audiences (built from website traffic or an email list) to extract vital information about our customer demographic.

Then there is Google Analytics.

Returning visits and pages per session measure customer retention.

Time on page, exit rate and social shares can measure the success of the content.

The number of new users and the bounce rate are a good indication of how engaged new users are.

If you’re not tracking the above metrics, you might be pursuing a method which simply does not work. Worse still, how will you build on your past successes?

8. Slow page load times

This one is a no-brainer. Amazon has estimated that a one-second increase in page load time would cost it $1.6 billion in sales. Google has published videos, documents and tools to help webmasters address page load issues.

I see poor page load times as a symptom of a much wider problem; that the website in question clearly hasn’t considered the user at all. Why else would they neglect probably the biggest usability factor?

These websites tend to be clunky and offer little value, and what content they do have is hopelessly self-serving.

Striving to resolve page speed issues is a commitment to improving the experience a user has of your website. This kind of mentality is crucial if you want to build an engaged user base.

Some, if not all, of these topics justify their own blog post. The overriding message from this post is about maximising the return on investment from your efforts.

Everyone wants the big bang idea, but most aren’t ready for it yet. Technical SEO should be working hand in hand with content marketing efforts, letting you eke out the maximum ROI your content deserves.

How well do you know Search Engine Watch? The SEW Friday quiz

Following the success of our previous Easter trivia quiz, we decided to mix it up again this Friday with another quiz – this time testing how well you’ve been paying attention to the content we’ve been publishing on Search Engine Watch this week.

All of our questions (bar one, for fun!) are drawn from the past week’s worth of content, including last week’s search news roundup. So brush up and give it your best shot!

What will the future of Google search results pages look like?

Recently, we took a nostalgic, infographic-based look back at the history of Google search results pages.

In the past 20 years, Google has gone from a university project called Backrub to a global powerhouse that continues to shape how we search for, and discover, new information.

And yet, these are still early days for Google. In fact, the rate of change is only increasing, with driverless cars and augmented reality on the horizon.

Some of Google’s core business focuses, like hyperlocal targeting and personalization, remain largely untapped opportunities and, with heightening competition from Apple, Amazon, and Facebook, the pace of progress will continue to accelerate.

In 2017 alone, for example, we are about to see an ad-blocker built into Chrome, a mobile-first index, and the increased uptake of voice search.

Google defines itself as “machine-learning first” in its approach, so we are entering an era of unprecedented – and mildly unpredictable – possibilities. If Google can integrate its Assistant software into our everyday lives, the humble search results page as we know it may soon be a thing of the past.

In our latest infographic, we have looked into a future where context will define the form and content of the search results pages we see.

You can view a high-resolution version of the image by clicking on the image below.

Infographic created by Clark Boyd, VP Strategy at Croud, and graphic designer Chelsea Herbert. Click here to read the blog post by Croud on The Future of Google Search Results Pages.

How to identify and fix indexation bloat issues

Indexation bloat is when a website has more pages in a search engine’s index than it should, and it can cause issues if not monitored and policed properly.

It is an extremely common SEO problem and affects all websites, ranging from small WordPress blogs to big Hybris and Magento ecommerce websites.

The more serious cases of indexation bloat usually occur on ecommerce websites, as they tend to utilize user-friendly faceted navigations and filter lists, allowing users to quickly identify the products that they want.

I’ve seen examples first hand of simple Demandware and Open Cart websites with only a few hundred products having millions of URLs appear in Google’s index because of the product filters generating URLs.

Why is indexation bloat a problem?

It’s a known fact that when Google and the other search engines crawl your website, they don’t crawl it in its entirety. Allowing and asking them to crawl unnecessary URLs wastes that limited crawl resource.

If search engines aren’t regularly crawling your “money” pages and are instead getting stuck down other rabbit holes without picking up on updates, this could impact your organic performance.

Bloat can also lead to duplicate content issues. While internal content duplication isn’t as serious an issue as external duplication, it can dilute an individual page’s prominence and relevancy for search terms, as the search engines aren’t sure which URL to rank for those terms.

Identifying index bloat issues

One early indicator of index bloat is the number of pages appearing within search engine results.

It’s important to note here that the number of pages typically identified using the site: operator within Google and Bing search often show different numbers to what you see in Google Search Console and Bing Webmaster Tools — this isn’t something to worry about.

Website monitoring

While there are ways to resolve index bloat, the best way, in my experience, to deal with it is to prevent it from happening at all.

By checking Google Search Console and Bing Webmaster Tools on a monthly basis, paying particular attention to crawl data, you can record what is and isn’t normal behavior for your website.

Abnormal increases or spikes in “Pages crawled per day” and “Kilobytes downloaded per day” can be indicators that Google is accessing more URLs than it has been.

Likewise, conducting a site: search within Google and Bing will show you how many URLs they have in the index, which you can compare against roughly how many pages your website actually has.

How can I fix indexation bloat?

Identifying that you have an index bloat issue is only step one; now you have to establish what is causing the bloat.

These are some of the most common causes of indexation bloat, but it’s also not uncommon to have more than one of these causes.

  • Domain URLs being served through both http and https protocols
  • Printable versions of pages causing a duplicate URL
  • Parameter URLs caused by internal search
  • Parameter URLs caused by product filters
  • Pagination
  • Blog taxonomies
  • Session IDs in URLs
  • Injection of spam pages following a hack
  • Old URLs not redirected properly following a migration
  • Trailing slashes at the end of URLs causing duplication
  • UTM source

Fixing with meta robots

A page level meta robots tag is my preferred method of dealing with index bloat and is particularly useful if implemented from a server level across multiple pages at once.

Page-level meta robots directives take precedence over pagination and canonicalization directives; bear in mind, though, that search engines can only see the tag if the page is not blocked in the robots.txt file.

These are also effective at removing URLs containing parameters caused by product filters, faceted navigations and internal search functions. Blocking these in the robots.txt file isn’t always best as it can cause some issues between what different Google user agents can see, which can negatively impact paid search campaigns.

Best practice would be to use “noindex,follow” — this way any backlinks pointing to the page will still pass equity onto the domain.
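Where a server-level implementation makes sense, the same directive can be delivered as an X-Robots-Tag HTTP header instead of editing every template. As a hypothetical Apache (server config) sketch covering printable page versions:

    # Requires mod_headers; send noindex,follow for any URL ending in /print/ (hypothetical URL pattern)
    <LocationMatch "/print/$">
        Header set X-Robots-Tag "noindex,follow"
    </LocationMatch>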

Robots.txt file

Blocking URL parameters in the robots.txt file is both a great preventative and reactive measure, but it isn’t an absolute solution.

All a robots.txt file does is direct search engines not to crawl a page; Google can still index that page if it is linked to internally or from external sites. If you know where these internal links are, add a rel=”nofollow” to them.

Canonical tags

Self-referencing canonicalization is typically best practice, apart from on bloated URLs. Ecommerce platforms, like Open Cart, can create multiple URLs for the same product and category.

Adding a canonical tag to the <head> of the unnecessary product and category URLs, pointing to the “main” one, will help search engines understand which version of the page should be indexed.
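As a quick illustration (URLs hypothetical), the tag on an unnecessary filtered variant points back at the clean version:

    <!-- In the <head> of https://www.example.com/dresses?category=maxi&colour=blue -->
    <link rel="canonical" href="https://www.example.com/dresses" />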

However, the canonical tag is only a hint rather than a directive, and it can be ignored by search engines.

Pagination

Pagination issues can arise from blog post and blog category pages, product category pages, internal search results pages; basically any element of a website that has multiple pages.

Because these pages will contain the same meta information, search engines can confuse the relationship between them and could decide it’s duplicate content.

Using rel=”next” and rel=”prev” pagination markup will help the search engines understand the relationship between these pages and, along with configuration in Google Search Console, decide which ones need indexing.
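In practice the markup sits in the <head> of each paginated URL; for a hypothetical page 2 it would look roughly like this:

    <!-- In the <head> of https://www.example.com/blog?page=2 -->
    <link rel="prev" href="https://www.example.com/blog?page=1" />
    <link rel="next" href="https://www.example.com/blog?page=3" />

The first page carries only a rel=”next” link, and the last page only a rel=”prev” link.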

Using Google Search Console’s URL parameter tool

The URL parameter tool can be used to tell Google what specific parameters do to the content on a page (i.e. sort, narrow, filter). As with the other methods mentioned above, you need to make sure you’re not accidentally asking Google not to index URLs that you want in the index, and that you don’t specify a parameter’s behaviour incorrectly.

Google classifies your parameters into two categories: active and passive. An active parameter is something that impacts the content on a page, such as a product filter, while a passive parameter is something like a session ID or a UTM source.

This should only really be used as a last resort, and correctly in conjunction with other methods; used incorrectly, it could negatively impact the domain’s organic search performance.

Before using this tool, be sure to read Google’s official documentation and guidance.

The URL removal tool

Depending on the authority of your domain, Google could take a while to recognize and filter out the URLs you want removed. After you have implemented something to tell Google not to index the URL again (such as a page-level meta robots tag), you can request that Google removes the URL from its index via Google Search Console.

This is only a temporary measure, as it merely hides the URL from Google search results for 90 days; it doesn’t stop Google from crawling and indexing the URL.

This is good to use if you don’t want users being able to find certain pages, but each URL has to be submitted individually so this isn’t a great solution if you have severe index bloat.

Index bloat resulting from a hack

Now, obviously if your website has been hacked, index bloat is definitely not going to be a priority concern. But the bloat from a hack can cause issues for the domain.

The below screenshot is of a Swiss (.ch) domain that operates within Europe, weeks after a hack:

The website itself only has around 50 pages, but as you can see Google is currently indexing 112,000.

This means that, among other things, those 50 pages of product and product information pages are now lost among thousands of hacked URLs, so any updates to these pages may take weeks to get noticed – especially if your website doesn’t command a large crawl budget.

Another indicator of this can be a sudden increase in search visibility (for irrelevant terms):

I’ve worked on websites previously where this has been the first indicator. Whilst running a routine monthly check in Google Search Console, we found that a website dealing in christening gowns had started ranking for “cheap NFL jerseys” and other American sportswear terms.

These visibility spikes are often short-lived, but can destroy the trust between Google and your domain for a long time, so a lot can be said for investing in cyber security beyond https.

Conclusion

Reducing index bloat doesn’t happen overnight, so it’s important to remain patient.

It’s also important to put in place a process or framework, and to give someone ownership of that process so it is carried out on a regular basis.

Actions for Google Home: Time for brands to get creative

Google’s Home device was launched in November 2016 in the US, and as recently as April 6 2017 in the UK.

As a direct rival to Amazon’s Echo in the battle to gain control of the intelligent digital assistant market, Home has made great strides already. Some sources estimate that Google may already have an installed base one-third the size of Amazon’s Echo, which launched in late 2014.

Ultimately, the more effective and useful hardware will gain the public’s vote. What makes the hardware useful will be the software that powers it – and more specifically, the functionality that it provides.

Google has increased the number of Actions available via Home, and third parties are encouraged to get involved and develop novel uses for Google’s voice-enabled assistant.

It feels as though we are at something of an inflection point for this technology.

As such, it seems timely to take stock of where we are, showcase some innovative uses of Actions, and also look at how marketers can start to profit from this largely untapped opportunity.

Google ‘Actions’ = Amazon ‘Skills’

Google Home is powered by Google Assistant, which has recently been rolled out across all Android devices. Assistant responds to voice commands, and can perform an increasing number of actions.

Actions are Google’s equivalent of Amazon’s ‘skills’ on Alexa; the full list of Actions can be accessed and enabled from the Google Home app.

Amazon has undoubtedly stolen a march in this regard, with over 10,000 skills already available. Most observers estimate there to be between 100 and 130 Actions available on Home.

A further 20 Actions were added last week by Google – but we are really just starting to scratch the surface of what this technology can achieve.

Google has opened this up to third parties and has also provided a comprehensive guide to help developers get up and running.

The aim here is to move from a fairly one-dimensional interaction where a user voices a command and Google’s Assistant responds, to a fluid and ongoing conversation. The more interactions a user has with a digital assistant, the more intelligent the latter will become.

Actions: The fun and the functional

We can broadly separate the list of actions into two categories: the fun and the functional.

Some of the more frivolous features of digital assistants do serve to humanize them somewhat, but their use rarely extends beyond the gimmick phase. Just say “Ok Google, let’s play a game”, and the assistant will tell a joke, make animal noises, or speculate on what lies in your future.

On the side of the functional is an integration with If This Then That, which opens up a potentially limitless list of possibilities.

If This Then That integrates with over 100 web services, so there is plenty of room for experimentation here.

There are also a number of integrations with Google products like Chromecast and YouTube, along with third-party tie-ins with Spotify and Uber, for example.

One new – and innovative – use of Google Actions was released by Airbnb last week. The Airbnb Concierge Action serves as an information repository that is unique to each property.

The host can leave tips or prompts with the Assistant, which are then relayed to the guest when the correct voice command is given. Guests can also leave recommendations on local restaurants, for example, for the benefit of future visitors.

Marketers should pay attention to this. This is a clear example of a brand understanding that a new medium brings with it new possibilities.

Simply transposing an already existing product onto this new medium would be significantly less effective; we need to view digital assistants through an entirely different lens if we are to avail of their potential.

We have also seen a novel – if slightly mischievous – use (or abuse, depending on your perspective) of Google Home by Burger King this month. Burger King used a television ad slot to interact with Home and ask about one of its burgers, triggering the digital assistant to list the ingredients in a Whopper.

Although Google have moved swiftly to prevent this happening again, brands are clearly seeing Home as an opportunity to experiment and generate some publicity.

Digital assistants provide fertile ground for brands, as they create a new platform to connect with existing or potential customers. Moreover, with only 100 or so Actions available, there is ample room to engage with this now before the market inevitably becomes saturated.

For marketers interested in playing nicely with Google on this, you can sign up here to be informed of any partnership opportunities.

Monetizing voice-enabled assistants

This task is rather straightforward for Amazon, in the short term at least. Users can interact with Alexa to purchase from a selection of millions of items and have them delivered to their door by Amazon.

For Google, it is more complex. Their money-spinning AdWords business has depended on text-based search and a visual response. That input-output relationship is thrown off entirely by a voice-enabled digital assistant.

However, the smart money is on Google to find a way to integrate paid placements into their Home product, even if it takes some trial and error to find a solution that does not diminish the user experience.

During Alphabet’s (Google’s parent company) fourth-quarter earnings call in 2016, Google CEO Sundar Pichai informed investors, “[Home] is the core area where we’ve invested in for the very long term.”

The significance of those words cannot be overstated. Google is, like any publicly traded company, under pressure from its shareholders to deliver ever greater profits.

Selling hardware alone is unlikely to bring the profits Google needs to keep growing from its already dominant position, so there are clearly plans to monetize their Assistant in an ongoing capacity.

That level of fierce competition will bring advantages for consumers, as the products will improve and prices may even drop.

The advantages for marketers are potentially even greater, should they be willing to take some risks and work to get the most out of this still nascent technology.

Top Tips on Voice Search: Artificial Intelligence, Location and SEO

By 2020 it is projected there will be nearly 21 billion internet-connected devices, or “things” in the world.

The explosive ubiquity of this mobile-connected technology has led people to depend on these devices more regularly, with 94 percent of smartphone users claiming that they carry their phones with them frequently and 82 percent reporting that they never, or rarely, turn their phones off.

These numbers fall in line with a trend that is longer-standing, with Morgan Stanley reporting as early as 2011 that 91 percent of mobile users have some kind of mobile device within arm’s reach 100 percent of the time.

Corresponding with this increase in mobile device usage is the rise of what is called “voice search,” as well as the increasing prevalence of devices that contain “personal assistant” software like Alexa and Siri. People have become increasingly accustomed to the idea of speaking directly with computer devices and accessing information on the internet wherever and whenever they might need it. Naturally, like mobile usage in general, these emergent technologies have begun to influence search, and the impact will likely become even more apparent as usage grows.

Much in the way mobile devices have disrupted search by bringing on-the-go, local queries and results into the equation, voice search is introducing new methods of query and different results-experiences for users. Now, when a person activates voice search, particularly on personal assistant devices, most personal assistant technology will only deliver what is considered the best answer, essentially reducing the SERP to one result. That means that brands either occupy the first position, or, as far as voice search is concerned, they do not receive any attention at all.

Of course, the single-result SERP isn’t uniformly true for voice search. For voice-activated technologies connected to visual displays like smartphones and laptops, there is a greater possibility for more results. Even so, brands still need to remain focused on appearing in the top results. When someone uses voice search because they are on-the-go or they need an immediate answer, they don’t intend to scroll through pages. Rather, they’re looking for Google rich answers, such as a Quick Answer (which provides a high-quality, immediate answer to a query), Rich Card (information-rich content previews), or other top-featured results.

Google’s new Rich Cards

Over the past few years, we have seen the transformative impact of mobile on search and consumer behavior, including the shift towards the mobile-first algorithm. Voice search is the next major trend that brands will need to focus on to ensure they remain competitive. The more we understand about voice search and personal assistant devices, the easier it will be to optimize for them and ensure that your brand is represented across devices.

The role of personal assistants

As devices with artificially intelligent personal assistance software have become increasingly mainstream, so too has the use of voice search.

According to Google’s Gary Illyes, the number of voice queries in 2015 doubled from the number in 2014. Developers are also beginning to understand that there are particular types of query people prefer to speak rather than type. For a query like “when is my meeting?”, for example, users are 30 times more likely to use voice than text.

These personal assistants, which have been put forth by several different brands, have empowered customers to remain even more connected to the internet at all times, even when engaging in hands-on activities like cooking or driving. Customers can ask about the cook time for chicken, for example, while in the middle of preparing the meat without having to remove themselves from their original task.

Mary Meeker’s Internet Trends Report looked at the reasons why customers use voice search, as well as which device settings are the most popular. The report indicated that the usefulness of voice search when a user’s hands or vision were otherwise occupied was the top reason that people enjoyed the technology, followed by a desire for faster results and difficulty typing on certain devices.

Where do users access voice search? It turns out that, more often than not, consumers opt to use voice-activated devices at home, followed by in the car and on the go.

These personal assistants, along with voice search in general, are creating an increasingly connected world where customers expect search to be ever-present and capable of addressing their needs immediately.

How Artificial Intelligence powers voice search

Artificial intelligence powers personal assistance capabilities for mobile users. AI helps voice search and the associated algorithms to better understand and account for user intent. This intelligence, using semantics, search history, user proclivities and other factors, is able to process and understand the likely context of queries and provide results accordingly.

Natural language triggers, such as “who,” “what,” “where,” “when,” “why,” and “how,” for example, make it easier for AI to understand the user’s place on the customer journey and the likely goal of the search. Voice-activated devices can then direct users to where they most likely want to be on the web.

AI is essentially able to sift through voice search queries and identify the most important information, as well as understand the intent regardless of an array of speech errors. For example, a query that changes direction mid-sentence, such as “How was the… what was the score to the White Sox game last night?”, will be correctly answered. This enhances the conversational capabilities of voice search, understanding the reason behind a query even if it is not asked in a precise way.

Voice search in practice

Voice search makes it even easier for customers to ask hyperlocal queries, which is significant in the context of a mobile-rich environment. Consider how users execute search queries differently when speaking to mobile devices rather than exploring the web via a desktop computer.

Voice searches tend to contain slightly different words, such as “close” or “nearby”, which are not commonly used on desktop computers. Why? Because people tend to use mobile devices to access personal assistance software, and mobile devices are most often employed to find businesses or other locations while on-the-go. The aforementioned language triggers, “who,” “what,” “when,” “where,” and “why,” are also common, setting the context for the query and what the user likely wants to find.

These queries are also most likely to contain longtail keywords, conversational phrasing, and complete sentences. All of these factors impact how brands should optimize their content to maximize its appearance in voice search.

Voice searches have also become increasingly complex. For example, users might ask, “Find a French restaurant near me” and then follow up with, “Call the first one.” The voice search algorithm is able to interpret the second query as related to the first and act appropriately. The ability of the voice search algorithm to understand the related context of these queries enhances user experiences and maintains the conversational tone.

Voice search and local search: How the SEO marketer can succeed

Knowing that voice search is an emergent technology that will impact marketing at large is one thing. Understanding how to take advantage of that fact is another. For that reason, marketers should develop an array of best practices to ensure success in the wake of this incoming trend. Here are some tips to get you started:

Tip 1. Use keyword and intent analysis to better understand the context of the queries. For marketers to be able to accurately create and optimize content for voice search, they need to know the replies that users expect when they make a particular voice search query. Then, tailor the content to meet the needs of the users. Remember to consider synonyms and alternate means of phrasing the same query, such as “How do I get to the store?” versus “Give me directions to the store.”

Tip 2. Incorporate important location keywords into the content that could impact voice search. For example, Fisherman’s Wharf, Pier 39, or Golden Gate Park might all be landmarks that people use to find a suitable restaurant in San Francisco. Incorporating these terms into your content will boost your hyperlocal presence and make it easier for you to rank for voice search.

Tip 3. Use markup to ensure that your content is ready to be displayed in Google rich results. Rich answer boxes, such as Google Quick Answers and the Local Three Pack, play a big role in providing rapid answers to user queries on-the-go. Making sure that all your content is marked up with schema will help ensure that your content is prepared to be displayed in any rich boxes that become available; a minimal example follows below.
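As a rough illustration (all business details here are invented), JSON-LD markup for a local business might look like this:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Restaurant",
      "name": "Example Seafood Grill",
      "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Example Street",
        "addressLocality": "San Francisco",
        "addressRegion": "CA",
        "postalCode": "94133"
      },
      "telephone": "+1-555-0100",
      "openingHours": "Mo-Su 11:00-22:00"
    }
    </script>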

Tip 4. Make sure that each physical business location has its own site and that each site is individually optimized. This means you need to do more than just translate keywords to other languages or optimize all sites for the same terms. You need to optimize each site for the context and desires of their specific targeted audience. Learn what interests customers in that particular area through targeted keyword and intent research and make sure that each site is ready to compete within its own local sector.

Tip 5. Since a large part of succeeding with voice search is having a strong local presence, paid search and organic search teams can work together to maximize the brand’s presence. Research which keywords are valuable to the organization, the intent behind them, and how the brand currently ranks for them. Identifying where a paid ad would be most beneficial and where organic search can establish the brand will help organizations maximize their resources.

Tip 6. Do not neglect your apps. Remember that apps dominate a significant portion of the mobile experience. In fact, an estimated 90 percent of mobile minutes are spent on apps. Your data from your research about local search and natural language voice search will help you construct your app to maximize the user experience. Use deep linking within your app to ensure that customers who engage with you through voice search are able to find the content that originally interested them.

Source: Smart Insights

Voice search continues to become a dominant force in the world of digital marketing. Businesses need to be prepared to respond and keep their brands recognizable as people become more accustomed to immediate answers wherever they might be.

Four most interesting search marketing news stories of the week

We’re back with our weekly round-up of the most interesting search marketing news stories from around the web.

I hope you all enjoyed last Friday’s Easter search trivia quiz, and if you haven’t had a chance to test your knowledge yet, be sure to have a go and share your score with us on social media!

This week: a look at the newly-relaunched Google Earth and what it could mean for marketers, and a study showing that 45% of marketers say their biggest difficulty with Schema.org markup is proving its value.

Plus, Google’s new “suggested clip” feature in search results shows how far its ability to search within videos has improved, and a new menu of Partner-only Features on Google’s Developer Blog hints at some exciting things to come.

Relaunched Google Earth introduces 3D local maps, visual storytelling opportunities

Google has just unveiled a stunning relaunch of Google Earth, with a wealth of new features and information to explore. On Search Engine Watch this week, Clark Boyd gave us a tour of the new Earth, including a look at how marketers can take advantage of the visual storytelling opportunities it presents, and what it means for local search, where “near me” searches will activate a 3D local map featuring business names, photographs and contact details.

45% of marketers have difficulty showing the value of Schema markup

A recent survey carried out by Schema App, a provider of tools to help marketers use Schema markup, has provided some insight into the difficulties that marketers encounter when using Schema markup.

Schema markup is often touted as a killer search tactic which is nevertheless seeing very little uptake among website owners. It can vastly improve the look of websites on the SERP with the addition of rich data, and it is integral to a number of Google features like featured snippets.

But according to Schema App’s survey, 45% of marketers say they have difficulty in “showing the value of doing Schema markup – reporting the impact and results”. Forty-two percent struggle with maintaining the ‘health’ of their markup when Google makes changes, while 40% cited difficulties in developing a strategy around what to mark up with Schema.

Meanwhile, nearly a quarter of respondents (24%) said they had difficulty understanding Schema markup vocabulary at all.

Google shows “suggested clip” feature in search results

Google is continually improving its ability to search within a video, and to surface a particular search result within the content of a video. In a previous search news roundup we reported on the fact that Google’s machine learning technology can now recognize objects within videos, as demonstrated at Google’s Cloud Next conference in early March.

Then this week, Ryan Rodden of Witblade reported that Google is now showing suggested video clips in search results for particular queries:

Image: Witblade

The suggested clip appeared in a query for “blur out text in imovie”, highlighting a suggested clip of 25 seconds in the middle of a how-to video. While it’s unknown how accurate this result was for the query, it shows that Google is making bold inroads into searching within video and is treating video like other kinds of content to be crawled, indexed and presented as a Featured Snippet.

Given the huge rise, and popularity, of video of all forms in marketing, social media and publishing at the moment, it’s a smart move and something we can probably expect to see more of in future.

Google adds extensive new menu of Partner-only Features

Google’s Partner-only Features are a forum for it to debut certain search features to a select group of approved and certified providers, before they are rolled out on a wider scale. Aaron Bradley noted in the Semantic Search Marketing Google+ group this week that Google has just added a huge new menu in the Partner-only Features section of its documentation.

The new menu features eight sub-sections including “Carousels”, “Indexing API”, “Jobs” and “Live coverage”.

All of the links currently lead to a 404 error, but it could be an interesting insight into what’s to come from Google.

How to use Google’s new demographic targeting for search ads

Through AdWords, Google has given advertisers a lot of control over when their ads are shown, by means of the different match types and using remarketing lists for search ads.

Until recently, however, you were unable to target users based on demographics – a function that has been available for a while now on both Facebook and Bing.

The new feature allows advertisers using AdWords to target users based on:

  • Age
  • Gender
  • Parental status

This feature will be particularly useful where user intent varies considerably based on these variables. For example if you were selling high-end investments or watches, it is unlikely that young people under the age of 25 would have the necessary capital to purchase them.

However when using this feature, it is important to make sure that your conclusions are based on data as opposed to your gut feelings. A study by Google has shown that some of our preconceived ideas about which demographics purchase which items may result in us missing out on a considerable proportion of buyers.

Image: Google

For example if you were running a campaign selling home improvement products and excluded women on mobile devices, you could lose 45% of your traffic.

One thing to bear in mind is that your customer might not always be your customer. For instance, the study by Google showed that 40% of baby products are purchased by households that do not contain parents.

Here you can see that a considerable proportion of some markets are not the consumers themselves, but people purchasing on behalf of consumers.

How to set up demographic targeting in AdWords

The demographic targeting options can be found within the audiences tab alongside your remarketing lists for search ads (RLSA) data. To add bid modifiers take the following steps:

STEP 1. Go to the “audiences” tab and then to the “demographics” sub-tab as shown below.

STEP 2. You can switch between demographic data for “age” and “gender” using the two sub-tabs that are located under the graph.

STEP 3. Bid modifiers can be set within the “bid adjustment” column by clicking on the dashed line.

Once you have done this you should see a popup like the one below where you can enter your bid modifier.

STEP 4. To calculate your bid modifier you should use the following formula: divide the age conversion rate by the ad group conversion rate, subtract one, and multiply by 100.

So for example if the conversion rate for people aged 25 – 34 is 3.52% and your conversion rate for the ad group overall is 2.76%, then your bid modifier would be 28%. Note that you need to round up your modifier to the nearest whole number.

When you are faced with “Unknown” data where Google is unable to match the user to their data, you will in most cases not want to exclude this audience.

In some cases we have found that Google can’t match data to a large chunk of your traffic, which can be frustrating, but if you exclude this you are likely to miss out on a considerable portion of your traffic.

Conclusion

Overall, demographic targeting for the search network gives advertisers another dimension with which to narrow down their audience to target the most relevant people.

Google’s example of baby products being bought by households that do not contain any parents is a perfect example of why it is necessary to follow the data as opposed to your gut feeling when using this feature. Otherwise you run the risk of losing a considerable portion of your audience.

Finally, when you are faced with the dreaded unknown column, think twice before excluding this data. In the vast majority of cases this will account for a considerable chunk of your traffic so it is best not to exclude it.

What’s new with Earth? First impressions of the relaunched Google Earth

Google has just re-designed, revamped and re-launched its Earth product, and it has certainly been worth the two-year wait.

Earth is now built into Chrome, so there is no longer a need to download a cumbersome desktop app to access this global repository of images, videos, and knowledge cards.

The Android app has been updated too, with support to follow soon for mobile browsers and iOS.

So what’s new?

A lot.

First impressions of Earth are simple: this is a hugely impressive feat, one that truly celebrates the world – both natural and man-made – by capturing its farthest corners in fine detail.

So, let’s get started. We begin with a zoomed-out view of the planet, before a short introduction from Google on some of Earth’s upgraded features.

These features work in a cumulative fashion, each adding to the last and building up to a three-dimensional, customizable, multimedia experience of our planet.

First up, the search function. The foundation of any great Google product, this deceptively simple search bar leads to any location in the world:

This is given extra potency when combined with Google’s vast inventory of knowledge cards about cities, rivers, buildings, and basically just about any landmark you can think of.

These are typically pulled from Wikipedia and appear as a clickable carousel, although other resources are cited on infrequent occasions.

It is possible to zoom in to the level of Google Street View to get a closer look at the palace in this screenshot, as has been available via Earth and Maps for some time now. This is labeled the ‘Photo Sphere’.

Added to this is the “I’m Feeling Lucky” feature, which takes the user to a random point on the map and works like the button of the same name in traditional Google search.

My first trial of “I’m Feeling Lucky” took me from Lagos to Legoland in just one click. It can be quite a dizzying trip, depending on your screen size and propensity for motion sickness, but the speed of flight can be adjusted in the settings menu.

Layer by layer, this builds up to Voyager, the section most likely to keep users engaged with Google Earth.

Voyager contains a wealth of curated content from sources as diverse as Sesame Street and the BBC, but we should expect many more publisher partnerships in future.

This is significant, as it takes Google into the realm of visual storytelling and opens up a host of new opportunities for publishers willing and able to get on board.

There is already a good variety of content on here, including city guides, nature trails, and the work of specific architects like Frank Gehry. That said, this is an inexhaustible resource that will play host to a lot more experimentation soon.

One highlight is the ‘Revealing the Center of Life’ tour, which takes us on a journey underwater to explore coral reefs.

As an educational center, this offers unparalleled scope for exploration and will undoubtedly spark much healthy discussion. Some of these knowledge cards are accompanied by videos and behind-the-scenes features too, providing further context to the images.

The implications for marketers

Brands should really be thinking about how to avail of the storytelling possibilities that this brings. For travel and tourism companies the opportunity is perhaps a little more obvious than for other industries, but in truth there is an opening here for almost everyone.

The Argentinian artist Federico Winer has partnered with Google to create a photographic series on airports, for example.

There is also a history tour that traces the steps of characters in the novels of Charles Dickens, and another that visits some of Ernest Hemingway’s favorite haunts.

With consumer attention spans at an all-time low, Google Earth should now be viewed as an incredibly powerful, engaging tool, should publishers have the imagination to avail of its potential.

In perhaps more prosaic terms, local search remains just as vital as it has been for some time now – perhaps even more so.

Typical searches like the one below for [book store near me] will bring up an interactive 3D map of the local area with some options, so it is vital to have business names, addresses, opening hours, photos, and phone numbers up to date.

Customizing Google Earth

And it doesn’t stop there. Users can import KML (Keyhole Markup Language) files to overlay images and charts onto Google Earth. Google even provides an example of this in action, with an overlaid image of Mount Etna erupting.

KML is based on the XML standard and provides a few extra functionalities, like paths and polygons, that are particularly useful for Google Earth.

Google provides a sample file and comprehensive guide to get started, although this should be pretty familiar to anyone accustomed to creating custom Google Maps.
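To give a flavour of the format, a minimal KML file with a single hypothetical placemark looks something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <name>Example viewpoint</name>
        <description>A hypothetical point of interest.</description>
        <Point>
          <!-- longitude,latitude,altitude -->
          <coordinates>-0.1276,51.5072,0</coordinates>
        </Point>
      </Placemark>
    </kml>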

In summary

The new Google Earth is more than the sum of its features; at its best, it can both distort and inform our perception of space and time.

A historical echo of this project would be the eighteenth-century Encyclopédie, a Herculean effort by Denis Diderot, Jean le Rond d’Alembert, Voltaire, and many others, to catalogue and categorize all human knowledge.

Combine that persistent thirst for knowledge with the technology at Google’s disposal and the product is something as engrossing and enlightening as the new Google Earth.

It seems fitting to give the final word to Google: