WebPromo’s Q&A with Google’s Andrey Lipattsev [transcript]


As we reported at the end of March, Andrey Lipattsev, Search Quality Senior Strategist at Google Ireland, revealed the three most important ranking signals used by Google.

These are content, links and RankBrain (although that last one is “hotly contested” and the list doesn’t have an order).

This information was uncovered during an excellent live Q&A from WebPromo, which also featured Rand Fishkin from Moz; Ammon Johns, Managing Director at Ammon Johns & Co.; and Eric Enge, CEO at Stone Temple Consulting. The whole thing was hosted by WebPromo’s Anton Shulke.

We’ve partnered with Anton to bring you a transcript of the entire one-hour long Q&A.

As you can imagine, it’s a very lengthy read. We have trimmed repetition and digressions, but there was so much brilliant chat between the group that we’ve decided to keep it in its 7,000+ word form.

The discussion covers:

  • Removal of PageRank toolbar
  • Click-through-rates as a ranking signal
  • Google’s top ranking signals
  • Machine learning vs spam
  • The state of SEO outside of the US and Western Europe
  • New mobile friendly update

Now set aside half-an-hour, pour a cup of coffee and enjoy.

A massive thank you to our staff writer Rebecca Sentance, who spent a huge portion of her day transcribing this video.

Removal of the PageRank toolbar

Ammon Johns: Why remove the PageRank toolbar if PageRank is still a part of Google’s ranking algorithms – which of course, we all believe it is – even if it wasn’t used on its own?

Andrey Lipattsev: I get it. And it’s a really good question – I promise you I do have a couple of answers, which I hope are reasonable – but let me ask you a question back. Why do you think it was useful in the first place? Why was it a good thing to have?

Ammon: When it was still being updated it helped me get a rough idea of what the crawl priority of a site might be. And for that reason, it was useful to me, because if a client comes to me and they’ve been following some SEO advice – they’ve had the blog, they’ve had the news feed, they’ve built up thousands and thousands of pages – but they’ve got a Toolbar PageRank of two, I’m already thinking…

‘With a PageRank of two, they’ll be lucky to get a thousand pages regularly – spidered and indexed, every month – given that there are so many news sites, there are so many things that Google has to pick up, all of the time; this just isn’t going to be a high enough priority to have this number of pages. They may be giving Google more than it can digest, and therefore detracting from where the value is – the core pages.’

So it had a use there. It had a second use, which was more financial. Lots and lots of bad SEOs used to base their strategy on Toolbar PageRank. As long as they were doing a bad job, there was more work for people doing a good job, like myself.

Andrey: By all means. And I think the second reason alone would have been sufficient to get rid of it. But I mean, if that was the only reason, we probably would have tried to keep working on that.

You know that it wasn’t so much removed as it died a natural death more than anything. Nobody was looking at it, nobody was developing it, because it wasn’t bringing very much value internally. Essentially, it became so out of date that – going back to Ammon’s first point about its usefulness – that kind of all went away, leaving only the second value there.

So it was no longer a valid benchmark for a site’s usefulness, for a site’s likelihood to be crawled more often, or ranked well, because a) it was out of date, and b) there were a lot of other things in place where – you’re saying Toolbar PageRank two would have made it less likely to be crawled? Not really.

There’s a lot of other stuff in place, and PageRank two could have been ranked above PageRank eight very easily depending on what else was going on.

Ammon: That’s ranking, though – I’m not talking about ranking, I’m just talking about indexing, and I found the correlation there, across thousands of sites, was very high.

Andrey: But to me, that link between Toolbar PageRank and indexing is so tenuous that I’m not even sure that thing ever really existed.

It was supposed to be a reflection of the actual PageRank of a page, and that has no bearing on how often the page gets crawled.
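For readers who want the mechanics behind the metric being discussed, here is a toy sketch of the classic PageRank computation by power iteration. The three-page graph, damping factor and iteration count are illustrative textbook values – nothing here reflects Google’s production systems:

```python
# Toy illustration of classic PageRank via power iteration.
# Graph, damping factor and iteration count are invented example
# values, not anything Google has published about its systems.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start with uniform scores
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Share of rank flowing in from each page linking to p.
            inbound = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / n + damping * inbound
        rank = new_rank
    return rank

# A tiny three-page web: A links to B and C, B links to C, C links to A.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
```

In this toy graph, C ends up with the highest score because it receives links from both A and B – the point of the metric being that scores depend on the whole link graph, not on anything about the page itself.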

Ammon: If you gave us the actual PageRank, I’d have used that, believe me, but you wouldn’t share that. I did ask!

Andrey: I can understand, you know, the second-degree links between the page’s PageRank and how often we’d come back to it, but there are a lot of other things in play there that also need to be taken into account. So no matter what you’re thinking about, whether crawling or ranking… it a) has gradually become just one thing out of very many, and therefore not reflective of the real picture; and b) has stopped reflecting what it was supposed to reflect in the first place. So it wasn’t very useful.

And as I said, c) it became something that it was never supposed to be: it became something of a currency for some SEOs. And I’m not saying everybody was doing it, but clearly, as you yourself acknowledged, a lot of people started using it like that, and a lot of SEO contracts would be ‘I will get your PageRank to this’, which is kind of meaningless, really.

And so, what I was going to tell you beforehand is: this reduced meaningfulness, and also we are hoping that the improved stats we are providing now – for example in Search Console, the improved search analytics report – are the stats to look at. They are the bots you’re looking for.

You know, your clicks, and your impressions, and the queries and the pages; that’s what you should be looking at, and as an owner, as an SEO, comparing it to other people if you have the data, and so on and so forth. And build your strategies, and your analysis on this kind of data, not just on one number, which was kind of neither here nor there.

Not to mention the fact that, last but not least, I have not seen that famous toolbar, in which it was supposed to be a plugin, on anyone’s computer for a long time. Granted, I am a very biased sample, because most people around me have Macs and Linux machines anyway, but I haven’t seen anyone with that toolbar with the thing in it.

Click-through rate as a ranking signal

Eric: I want to talk a little bit about click-through rate, and I want to talk about it from a couple of perspectives. Rand, at SMX Munich, I believe, ran a fresh test that showed at least a temporary movement of the ranking of an item that they were trying to promote by sending a lot of clicks to the page, where it kind of jumped in the rankings, and then over time it came back down.

And then in addition, Paul Haahr gave a keynote at SMX West in which he walked through ‘how Google Works’ and he talked for a while about click-through rate, more in the context of controlled tests by Google, so [Google] will roll out some algorithm change, and one of the things it might look at is the user click-through rate on the revised search results, in order to see whether that’s a better result.

And then he also explained that the reason why Google doesn’t use it as a general ranking factor is because it’s too gameable, but in this controlled test environment, it allows you to use user interaction with the result in the way of measuring search quality, to decide whether to roll out a new algorithm update.

And just to finish my rather complex question, the point of that is that it seems to me that if you’re using click-through rate as a main measurement of other ranking factors, to better measure search quality, it doesn’t really matter to me that much whether it’s a direct ranking factor or an indirect ranking factor. It still is used in evaluating search quality.

Andrey: To be honest, I think I’m kind of with you there, and what Paul said, I’m not going to disagree with Paul, but also what we’ve been saying before. I think, if you look through the majority of our past comments on this topic, you won’t find anything to the contrary.

We do, if you like, use that as a factor to assess our quality, and treat it as you like, fair enough, in that sense, if you’d like to… It’s just, it’s very important, because you know how headlines go. Tomorrow’s headlines from anybody who watches us today will be, ‘Google uses behavioural factors for ranking!’ And you know what people will interpret from that.

Because on the one hand, yes, in the sense that you described; on the other hand, no, in the sense that most people understand.


So it’s very important to come in with a slightly more complicated explanation. Somebody commented to me the other day about being able to answer yes or no to questions, and I told them that sometimes there are questions that are not yes or no; even if you phrase them in a yes or no fashion, that doesn’t make it possible to give a yes or no answer.

So anyway, coming back to what you said, I think you described it pretty accurately.
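The controlled-test use of click-through rate that Eric and Andrey describe – evaluating a whole algorithm change rather than ranking an individual page – could be sketched roughly like this. The function names, data and rollout threshold are our own invention for illustration:

```python
# Hypothetical sketch: click-through rate as an evaluation metric in a
# controlled experiment, not as a per-page ranking factor. The data
# and the rollout threshold are invented for illustration.
def ctr(impressions, clicks):
    return clicks / impressions if impressions else 0.0

def compare_algorithms(control, experiment, min_lift=0.01):
    """Decide whether an experimental ranking beats the control.

    control / experiment: dicts of 'impressions' and 'clicks'
    aggregated over each experiment arm's slice of traffic.
    """
    lift = (ctr(experiment["impressions"], experiment["clicks"])
            - ctr(control["impressions"], control["clicks"]))
    return lift >= min_lift  # roll out only if users clicked measurably more

control = {"impressions": 100_000, "clicks": 31_000}
experiment = {"impressions": 100_000, "clicks": 33_500}
print(compare_algorithms(control, experiment))  # → True with this toy data
```

The point of the distinction: clicks here judge the revised result page as a whole, so an individual site cannot improve its own position by inflating its clicks.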

Rand: Why is it the case that seven or eight times in the last two years, I’ve done something, just having a little fun, so I’ll be standing on a stage in front of 500 to a couple thousand people, and I’ll ask them ‘hey, can you all jump on your mobile phones, or on your laptops, and do a search? And I want you to click the seventh, eighth, ninth, tenth result, and then over the next 24 hours, let’s observe what happens to that query, and what happens to that page’s ranking for that query.’

I’ve had seven or eight of those that have been successful, and I’ve had four or five where the ranking did not change. And I’ve run a few of them over Twitter, again, a few where the ranking did change, and changed pretty quickly, and usually sticks around a day or two, and a few where nothing happened.

So in your opinion, what’s happening there that’s making the ranking for the page that gets clicked on change so rapidly, and then what’s happening when it falls back down, again relatively rapidly over the next day to two/three days?

Andrey: It’s hard to judge immediately without actually looking at the data in front of me. In my opinion and what my best guess here would be, is the general interest that you generate around that subject – by doing that, you generate exactly the sort of signals that we are looking out for. Mentions, and links, and tweets and social mentions – which are basically more links to the page, more mentions of this context – I suppose it throws us off, for a while. Until we’re able to establish that none of that is relevant, to the user intent.

Eric: So back to the other part of my question, I just want to acknowledge that, I agree that a lot of people might run off and say, ‘oh my god, click-through rate is a ranking factor in a more general sense!’


And personally, I understand why that would be bad, because it’s so gameable, but if you look at it more holistically, it seems to me that engagement signals and ways that users interact with content, using that as an indirect thing where you’re using it to qualify search quality, so you can pick other ranking signals that reflect that well…

As I internalise that, it gives me I think more ammunition to point people to make better websites and better webpages. And that’s the reason why I was asking the question, because to me I like to be able to show people why that’s so important and why they should think to themselves that this will help them, over time, with their SEO.

Not because you’re employing it directly in your algorithm, but because you’re using it to qualify your algorithm.

Andrey: Eric, I think you’re absolutely right, and your message to the people that you work with is absolutely spot on.

I think it is already significant enough, for people to take into account, that we’ll look at what users do on our search result pages, as we change them, to evaluate the usefulness of these changes. You know, when Google changes the algorithm the next time and my page gets exposed, people like it or don’t like it, come to it or don’t come to it – I should probably pay attention to how people like or don’t like what I’m trying to offer them, as part of how it appears in Google search results.

The disadvantages that I’ve most often seen described for this approach on a clear, pure ranking factor basis is that we’d need to have broad enough and reliable enough data about bounce rates, click-through rates, depth of view for the vast majority of pages and the vast majority of websites everywhere, in order to be able to make meaningful comparisons all the time.

That is impossible, because we don’t have the technical means to do it. Even when you think about Google Analytics, not everybody has a Google Analytics code by far, so we can’t use that.

If we don’t use that, what else are we gonna use? We could start trying to come up with something we could use, but it’s always going to be a struggle.

Going back to the original idea of links, you can kind of reasonably say, ‘I see a page, I see the links on that page, I see where they’re going’ – if we can see them, granted they can be no-followed, they can be hidden, that’s like little things, but by and large, they’re here, we can see them, we can use them. The words on the page.

This stuff, any reasonable person with a bit of experience could think of ways how you could measure user behaviour for particular pages and particular websites, but measuring it web-wide…

Rand: Andrey, correct me if I’m wrong, but you don’t need it particularly web-wide, you only need it on the search queries, right? So Google sees that, on average…

Andrey: Right, but is that enough? Imagine we only ever measured the links for pages that appear in our results. Or, like, the top 10 of our results. So you end up with a very small subset of everything that’s out there, and the worst thing about it is you end up reviewing only that, because it’s the only thing that ever gets clicked on, because it’s the only thing that ever gets shown.

You can find ways to get out of it, I agree with you. You can find ways around it, and solutions to it, but this is the reason we’ve been saying it’s a tough challenge. It’s gameable on the one hand, and it’s a tough challenge to actually make a very strong signal out of it.

If we solve it, good for us, but we’re not there yet.

Rand: I mean, I think that you have filed some patents, have written some papers about using pogo-sticking, and pogo-sticking certainly seems like a very reasonable way to measure the quality of search results. If something gets a lot of clicks, and then people click ‘back’, or they click on something else, clearly that result didn’t fully satisfy them. So that seems like a very reasonable user approach.


Andrey: That is a reasonable approach; it is one of them, so as a patent goes, that’s an interesting idea. But take into account: what if the nature of the query is such that you won’t behave like that? You’re comparing things; you want to see this one, and you want to see another one, and then you want to make a decision.

So all these things come into account… We experiment and explore other ways, not just this. Can we look at what queries are? And can we just go back to the basics of what’s the content on the page? How can we understand that better? And how can we understand the entities on the page? And so on.

I think there can be more research done into user behaviour factors, and how we use them well. But there’s also like a million other avenues of research, and maybe some of them will be more promising.
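The pogo-sticking pattern Rand raised – a click followed by a quick return to the results page – could be measured along these lines. The event fields and the 30-second threshold are hypothetical illustration, not a known Google metric:

```python
# Hypothetical sketch of measuring "pogo-sticking": a click followed
# by a quick return to the results page suggests the result didn't
# satisfy the query. Field names and the 30-second threshold are our
# own invention for illustration.
def pogo_stick_rate(click_events, quick_return_seconds=30):
    """Fraction of clicks where the user bounced back to the SERP quickly."""
    if not click_events:
        return 0.0
    pogos = sum(
        1 for e in click_events
        if e["returned_to_serp"] and e["dwell_seconds"] < quick_return_seconds
    )
    return pogos / len(click_events)

clicks = [
    {"dwell_seconds": 8,   "returned_to_serp": True},   # pogo
    {"dwell_seconds": 240, "returned_to_serp": False},  # satisfied
    {"dwell_seconds": 15,  "returned_to_serp": True},   # pogo
    {"dwell_seconds": 90,  "returned_to_serp": True},   # long dwell, not a pogo
]
print(pogo_stick_rate(clicks))  # → 0.5
```

Andrey’s caveat above is visible even in this sketch: for a comparison-shopping query, a quick return to the results page is exactly the intended behaviour, so a naive threshold would misread it as dissatisfaction.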

What are the top signals Google uses for ranking?

Ammon: RankBrain has become this new keyword that everyone’s latched onto; I’m seeing already companies that are selling ‘We’re the SEOs that have taken into account the latest RankBrain upgrades’ – despite the fact they can’t possibly, because we’re all still examining what it does, what its limitations are, and there’s no way of knowing that from the outside completely, especially since it seems to combine with several others.

Now my understanding of Hummingbird is that it led to this: Hummingbird brought in more context, more of the idea that the meaning of the query was more important than the words of the query. And I think the natural consequence of that is, there are times when it isn’t. There are times when the way we’ve worded it is very specific, and it seems that RankBrain is one method of being able to spot this – Gary Illyes’ example was the word ‘without’. That one word was the most important word in the query. ‘Can I complete this without such-and-such?’ ‘Without’ couldn’t be changed.

So… Is that kind of where we’re going with this? We’ve heard that this is the third most important signal contributing to results now. Would it be beneficial to us to know what the first two are? Could webmasters build better sites if they knew what the first two are?

Andrey: Yes; I can tell you what they are. It’s content, and links pointing to your site.

Ammon: In that order, or another order?

Andrey: There is no order. Third place is a hotly contested issue. I think… It’s a funny one. Take this with a grain of salt.

It’s obviously up to Greg the way he chose to phrase it when he was doing it, and I understand where he was coming from, being somebody who worked on that. The way I interpret his meaning is that if you look at a slew of search results, and open up the debugger to see what has come into play to bring about these results, certain things pop up more or less often. Certain elements of the algorithm come into play for fewer or more pages, in fewer or more cases.

And so I guess, if you do that, then you’ll see elements of RankBrain having been involved in here, rewriting this query, applying it like this over here… And so you’d say, ‘I see this two times as often as the other thing, and two times as often as the other thing’. So it’s somewhere in number three.

It’s not like having three links is ‘X’ important, and having five keywords is ‘Y’ important, and RankBrain is some ‘Z’ factor that is also somehow important, and you multiply all of that… That’s not how this works.

The way we can look at it in a useful way is that we are trying to get better at understanding natural language, and applying machine learning and saying ‘What are the meanings behind the inputs?’

It’s still early days; we cannot claim that typed queries, whether mobile or desktop, have reasonably subsided or are going away. But more and more, people are interacting with their devices using voice. So we can expect more use of stop words – words like ‘without’.


People still tend to be a lot more mechanical and overthink their queries a bit – as I tend to, anyway – and think, ‘Okay, so what is a query that is completely not human in nature, that sounds like what the machine would understand?’ I don’t know if you guys catch yourselves doing that. You don’t generally type a question to Google as you would ask a real person.

Ammon: I do structured queries; what’s the most important concept? Right, I’ll put that first… What’s the modifier to that?

Andrey: You’ve gotta admit that in that sense we are pretty advanced users; a little bit outside the norm in this sense. As a company I guess we’re not so much looking to support us guys, because I think we’ll kind of figure our way out, but to start supporting people who are just joining the net. And to whom, for example, the mobile experience is the first experience, and the Google search application is their first experience of interacting with the web and getting answers to their questions.

And they don’t know that you need to say robotic words and omit commas, dots and stop words, everything; they just speak. And they’ll say, ‘How can I complete Mario Kart without cheating?’ They’re not going to think about the word ‘without’ in that sentence, so it’s up to us to figure out what’s behind all of that.

I think there was a bit of conversation on Twitter, as well, with Gary involved, about ‘So what does this affect?’ Does this affect indexing, does this affect ranking, and Gary was trying to say ‘Well no, it doesn’t affect ranking, it’s not a ranking factor’… It becomes a very complex conversation when you get to that.

Ammon: How about ‘everything affects ranking. Otherwise there’s no point having it.’

Andrey: Well ultimately, I guess so, yes; ultimately, even your webpage’s accessibility affects ranking, because if we can’t access them, we can’t rank them, even at that level. It doesn’t affect the ranking of an individual page, but what it does affect is our understanding of a query.

So once our understanding of a query changes, we’re more likely to throw something different as a result. That’s the effect that it has. But it’s not the same effect as knowing there’s so-and-so many links pointing to that page, or knowing there’s such-and-such words on that page. That’s something else.

And I think you mentioned Hummingbird – the very initial roots of that are in the synonymisation attempts: understanding synonyms better and replacing them. Back then, those were just libraries, to some extent static libraries. This is much more interesting. Also, as Greg has said, and some of the other guys have said: we don’t know what it’s doing. We’re not supposed to know, I guess, because it’s machine learning; that’s the whole point. You throw it out there and it does its stuff.

But the ideas behind it are the same as when we started talking about the Knowledge Graph and introduced it, and that became very much part and parcel of our search results; people expect it to be there, people expect it for entities.

Three years ago, I remember quite vividly – I think it must have been 2013; I was at a conference in Russia and we introduced this for the first time. It was a bit of a struggle to explain – ‘What is it that we’re doing, and what are we doing with all these new things? What is ‘entities’ and how is it different from words?’

Now people expect it. You’re looking for an actor, you’re looking for a city, you’re looking for an event; yeah, you want to see that card, or have a thing from a source, follow links, yeah – that’s the experience.

[Image: Google search results for ‘barack obama’, showing the Knowledge Graph card]

So this is hopefully enhancing that experience, allowing us to understand what else has that potential, what else can be explained in such a way, what else can be understood better. Negative queries, as Gary called them, that’s definitely one thing that hopefully is going to work better from now on.

Eric: There’s been so much confusion that’s spun out of this, and there are some things that I think we can perhaps help people understand a little better here. So, as you described it, Andrey – and as in conversations I’ve had with Gary and other Googlers about it:

‘Better understand language, to help us better match queries with appropriate webpages, from a relevance perspective.’ Put a period, end of sentence. ‘Doesn’t take over other ranking factors’ – is that a fair top-level assessment?

Andrey: I suppose, yes? Probably have to see that in writing to make sure I fully understand every single word of it and how they fit together, but yes, generally speaking, I’m there with you.

Rand: Can I slightly disagree with that last sentence? This is just based on what Gary and I tweeted back and forth the other day, which is – I think the idea of RankBrain is that it can re-order or re-weight ranking elements, in order to produce more relevant results.

Andrey: But I mean, at the very simple level, you know, in the example that Gary gave – Mario Kart without cheats – it’s not the weight you assign to the word ‘without’ in that query, and its presence on the pages you’re going to show for this, or not going to show for this.

It’s hard to kind of structure this answer clearly… When you look at it like that, you’re almost prone to saying ‘Yeah, it is a ranking factor because now we’re going to grab on to those pages that do have this word’ but it’s not just about the word ‘without’, it’s about the context of the whole discussion around this page.

It’s looking at what’s on the page but also how well it met the expectations of people who then interacted with that. Did us guessing that this particular word indicated a particular intent – was that a right kind of guess? And gathering that data continuously and then saying, ‘You know what, fair enough. Now we’re kind of confident that these words, or these combinations of words, indicate a particular intent, so we need to pay attention to such-and-such content.’

And it doesn’t necessarily mean that all of a sudden, links are more or less important for this page or that page. They’re still as important as they were, but now there’s also this other thing, on top of everything we know, we also know that for this query, this is particularly important.

Machine learning vs spam

Rand: Obviously RankBrain is one of the places where you’re most public about it, I think another place where Google is very public about using machine learning is in image search, and it’s really cool what you’ve been able to do, identifying locations of images and features of images.

A few years ago there was some talk about whether to use, or would Google use machine learning in webspam, and would they use it in other ranking factors, like around links or content? Are there any of those elements that you can answer and say, ‘Yes, we have been using machine learning in other factors like spam, or links, or content, or something else, and that’s been useful to us because XYZ’?

Andrey: I can say that we’ve definitely been looking at using machine learning across the board. Including webspam and other areas.

In webspam, I can’t point towards any particular huge success we’ve had, when using this immediately, but I’m quite confident that we’ll continue exploring. I’m quite confident that even the effects of trying to apply this have been very beneficial, just learning from that.

SEO outside of the Western market

Rand: One of the things that frustrates a lot of SEOs and marketers – I know a lot of folks who, for example, speak about SEO in the US, or in the UK, will travel – I’ll travel, for example, to Eastern Europe, and I’ll speak at an event there, and people will come up to me, and they’ll say, ‘Oh yeah, all that stuff about focusing on quality and content, that sounds really nice, but in fact buying links here is still really effective. And that’s what works for me, and for my clients, so… This whole “quality content” thing you’re talking about, that hasn’t made its way to our country, our region yet.’

Andrey: I think that Anton knows very well what you’re talking about, and I do too. I’m going to Ukraine in like a week and a half, I’m going to hear a lot of it. And then I’m going to be in Belarus afterwards, and I’m going to have the full kind of… I get it every time. We try to be as connected as we possibly can to the majority of our markets; we’re definitely connected to the Eastern European ones.


So thank you for this question, because I’ve actually got quite a bit to say about this. Let me try to get through as much as I can, and actually maybe hear from you guys as well.

First of all it’s what people say – what works and what doesn’t work. What is spam, per se, and what isn’t spam? Sometimes there are techniques which are borderline, and which we haven’t begun to understand fully yet, and maybe are not fully globally aware of their application or use.

And therefore they might be applied topically somewhere, and working there, and we just haven’t gotten around to saying ‘Okay, you know what, we should probably take a look at that and stop it from working because it doesn’t make any sense.’

So that’s number one, right? Sometimes things have just not yet been defined as spam. Or sometimes you look at stuff where, for example, speech comes into play, and us being a reflection of the internet comes into play, and you kind of go, you know what, we do have to draw a line somewhere. At a certain point, yes, it is our responsibility to say ‘this is beyond low quality, this is harming our users, we’ll take active measures’. Whereas other stuff is kind of weird, kind of not nice, but it’s not really harming anybody, so you know, let it be.

Great example – you were talking about buying links? I’m kind of hoping that in Eastern Europe, particularly the further east you go, where we have really strong competition – and the fact that our competitors are also our colleagues as far as webspam fighting is concerned, and they’re taking active measures against a lot of this stuff as well – this message will get reinforced: this really doesn’t work, and definitely doesn’t work long term. The concept of investing in short-term gains just doesn’t pay off; even if you win for a couple of days, or even a month, it doesn’t make any sense for anybody serious about their business.

This is the second thing I was going to say: as much as we’d like to – and when I say ‘we’, I mean both us and maybe other search engines as well – you can’t control the macro-economics of some of the markets where we operate. And especially in the more volatile ones, people will still go in for the short-term gain, because they’re not able to plan beyond a couple of months, because they have no idea what’s going to happen – not to the internet or Google, but to the economy – in a couple of months. You need to, like, ‘rank now, make money now, who cares afterwards’.

And it’s really tough to operate in that environment. It’s not that it’s less tough in more developed markets – there are other challenges there, and the financial crisis also happened – but still, you know, it’s easier to work with content that’s there for the long run, with creators that are there for the long run, with whom you can have a meaningful dialogue and try things, and see what works, and everybody knows that you’re in this for the long haul, and you more or less try to make it work for everybody.

It’s more difficult when people are forced into making short-term decisions. And then you have to kind of react in a measured way as well, because you need to realise that there are businesses, and livelihoods, and organisations behind the websites, so you need to measure ‘Okay, so what do we do in this situation?’

The other one is a famous one – I don’t know if that quote ever made it to the English part of the internet, but some of you might know him – he was involved most recently in our Twitter integration efforts. So, when he was speaking at, I think an interview with one of the Russian publishers, years ago, he gave a very funny example, that there are no results for a particular query – which is really funny, it was something like, buying a bucket in a small rural city somewhere, in Russia – something like that, something that you wouldn’t expect anybody to sell there. And there are no results for it. I think it became a bit of a meme, the whole thing.

But the point behind this is that in a lot of Eastern European markets – but not just Eastern European; I think it’s an issue for the majority of the BRIC countries, for the Arabic-speaking world – there just isn’t enough content, as compared to the percentage of the internet population that those regions represent.

I don’t have up-to-date data; I know that a couple of years ago we looked at Arabic, for example, and there the disparity was enormous. If I’m not mistaken, the Arabic-speaking population of the world is something like 5-6%, maybe more, correct me if I’m wrong; but the amount of Arabic content in our index is very definitely several orders of magnitude below that. So that means we just don’t have enough Arabic content to give to our Arabic users, even if we wanted to.

[Image: New Google Tool Assists with Arabic Script Translations]

And you can exploit it amazingly easily; if you create a bit of content in Arabic, whatever it looks like, we’re gonna go, ‘Well, we don’t have anything else, we’ll serve this’. And it ends up being horrible. And people will say, ‘You know, this works! I curated the hell out of this page, bought some links, there it is. Number one.’ There’s just nothing else to show. So, yeah, you’re number one. The moment someone actually goes out and creates high-quality content that’s there for the long haul, you’ll be out, and that thing will be in.

One of the things we try to do, in particular this is also the reason why I’m travelling there, is really work with those markets, and provide the tools and the means to those webmasters and SEOs and marketing agencies; the tools, the guidelines, the best practices; here’s what you can do, here are the tools that Google gives, and so on and so forth.

Rand: So as those of us from this Hangout and many others are going over and chatting with folks in regions outside of our home countries, where maybe this sentiment exists, would it be your assertion that generally speaking, the algorithms that reward content types here in the US, or in Western Europe, will make their way to other countries as well, other language groups, over time?

Andrey: I think it’s fair to say that the algorithms are already there, it’s just that the content we’re seeing is… I don’t want to offend anybody, I don’t want to say that the kind of content that these algorithms now exist for, that kind of content doesn’t exist.

The technologies used, and the approaches used, they’ve moved on so much, as you guys know, in some parts of the world and – again, ‘English-speaking’ is a very broad name, but in the Western English-speaking world they’ve moved on so much, and we’ve adapted to them, and that would open up what you might call holes, or exploits, for other types of content and other types of behaviours. And we’ll patch them, and we’ll plug them; it’s just that obviously they’re less of a priority, because we want to keep pace with the new developments, and the cool stuff.

Everyone was very excited a year ago when we started thinking about introducing mobile friendliness, and of course we were obsessed with how we’re going to present this, and the reaction that people will have, and thank you guys, some of you, for creating memes about that as well.


By and large, it kind of had a positive reception; by and large, it worked out the way we wanted it to work out, and we try to make sure that it does. But it was funny how some of us – me included – the first thought was, ‘This is awesome’, second thought was, ‘Okay, so how are they gonna game this? What is going to be the first page that cloaks mobile-friendliness instead of being mobile-friendly?’ And the thing is, other people will go, ‘Yeah, but that’s like, more work.’

Andrey: It depends, because it depends what you’re going for. I can think of scenarios where it makes sense to invest in some sort of crazy self-learning algorithm that generates not-really-mobile-friendly pages but then shows them somewhere, rather than just doing this normally, for normal clients, and so on.

It’s like this with everything – now we are looking with a lot of interest into what Accelerated Mobile Pages will be like, and what’s the uptake on that going to be – are they going to be successful or not? We’re banking on them, we wish them success and we want to make them successful, but it’s going to be like that again. A lot of big publishers globally are porting content over, trying to do something interesting here, seeing how it works, and then part of my brain is going, ‘Okay, so does this mean that AMP-ified spam is going to rank better now?’ And what are we going to do about this, you know?


It’s like that with everything, and as I said, we’re facing the fundamental difficulties that we’re facing, the volatile economic system which forces people into short-term decisions, and generally speaking – and you guys are posting in the chat that ‘money speaks English in the Arabic world’ – to a certain extent I guess it does, and you can say the same for China, or Russia… Yeah, clearly it speaks English, but if it only spoke English then all of us would also only speak English. The other languages still exist, and clearly people are interested in creating and maintaining that content, and I wish that this balance that exists right now corrected itself a little bit. And that would definitely solve some of the spam issues as well.

Ammon: It’s been a personal bug-bear of mine for years – this isn’t the technical code side of SEO, but I really wish more people would realise that, y’know, they’re creating a self-fulfilling prophecy in doing this. The number of emails I get that are very poorly-written English where they are determined to write content in English for an English market, because it gets more market share, or it’s… Look, if you’re writing in your own language, you’ve got a much less competitive environment, and you are lifting the local economy. How will that situation change if you play to this situation?

It’s a bit like India – fantastic programmers, all working for other countries, making other countries rich. India isn’t really in a much better position than it was 15 years ago.

Andrey: We’ve never lost focus on the developing world, but this year we are really gunning for the BRIC countries, and countries like that, in the process of development; across all the departments of Google, but as far as search is concerned, hopefully throughout this year you’ll see some really interesting developments there. Initiatives that we launch, and the projects that we launch, that are hopefully going to enable creators to create content in their language, for their local users, and – you just mentioned India; they’ve got more languages there than I can even count, really – probably on a par with Europe.

Nobody thinks of it that way – ‘India, yeah, they just speak English’. They don’t. Like a fifth of the population speaks English. And the rest have a lot of stuff that they could share with each other, but they don’t. Or at least they don’t to the extent that they could. And we’re gonna try to do as much as we can to help change that.

Eric: Right; it’s remembering, too, that in search, which – you basically can enter an arbitrary query of any kind – how difficult it is to have a ready answer for every possible query a human might put in, in their native language, when there aren’t that many people creating content in that language. It’s a fascinating way of thinking about the whole problem.

Rand: Yeah, for sure. Real quick on this front – You basically said, Andrey, that there’s sort of a global algorithm, that it considers inputs the same, universally – Is it the case that there are, especially when it comes to webspam, specialised things that fight particular kinds of spam in different regions or languages or countries? Or is that a myth, that when something’s applied, it’s applied universally?

Andrey: As far as algorithmic solutions are concerned, I don’t think so. Nothing springs to mind; as far as our algorithmic webspam solutions are concerned, all of them that I can think of are global. Which is sometimes part of the problem, because that’s the thing – some of the problems are very local. In fact, it could sometimes be a bit of a vulnerability, that the solutions need to be global, and you just have to algorithmically ignore some of the problems.

It doesn’t mean we ignore them completely, which is why the many webspam fighting teams exist, which is why we have them covering the vast majority of the big markets that we work in, and that is the solution there. The things that we do, for example, in Russia, or Poland, given the environment that exists there, are not the same as the things we do in France or Germany. And so on.

New mobile update

Eric: Any comment on the plans for this upcoming mobile update? Anything that you might be able to share; I’m not going to pick on anything particular.

Andrey: I don’t have a number to put on that – it’s kind of happening already, we have said we are now going to treat mobile-friendly pages even better; we already did. Some of you have commented, ‘I didn’t see the results!’ Well, hopefully you’ll see them now. Again, very often, and especially in non-English markets, people comment, ‘Hey, I still see a lot of non-mobile-friendly pages in results’. Part of that is because there’s not enough content that has become mobile-friendly, so you’d need to kind of show whatever else is available.

But with the growth in that content, we’re kind of saying, ‘Okay, we can bank on mobile-friendliness a bit more now, because we’ve seen a lot more pages come online, and become available for mobile devices, let’s do that.’ The other exciting thing is that, you know, we have a new smartphone user agent that will start running around the web from the 18th. I personally like the fact that we’re kind of keeping up with that and updating that as well.

Eric: Yup. And that’s supposed to start rolling out in May, is that right?

Andrey: Mid-April.

Best practice tips for developing an enterprise mobile app


Analysts are predicting strong growth in enterprise mobile apps as businesses focus on “mobilizing” their workforces to improve productivity, business efficiency and customer service.

The pundits have been forecasting growth in the enterprise mobile app market for some time. But surveys (CCS Insight) over the last year suggest real interest in business mobility driven by employees and business lines, rather than IT departments.

And this demand is growing five times faster than internal IT organizations’ capacity to deliver them (Gartner).

This column will examine what developing enterprise mobile apps involves, what types of businesses can benefit and what has been holding businesses back.

We also share some best practice tips from practitioners and the top 10 common mistakes made by enterprise apps developers.

What are enterprise mobile apps?

Enterprise mobile apps are developed by companies to be used by employees, particularly employees in the field, including sales, delivery or maintenance personnel.

Such apps are also developed for business partners, such as suppliers, distributors, retailers, advisors and maintenance providers. These apps are not intended for customer use, but may impact customer service, directly or indirectly.

The purpose is to allow people to perform all the tasks they need to, digitally, while away from the office, via a mobile device. The goal is to improve productivity, communication, access to information and automation. The user may be in the car, at home, or at a customer, supplier or retailer site.

At the front end, apps may be native (a downloadable app developed for each type of smartphone), web or browser-based, or hybrid (a web app with added native functionality).

There are three types of enterprise app:

  • Consumer apps that are used (sometimes without corporate sanction) for business purposes e.g. Dropbox, Skype.
  • The extension of corporate systems e.g. messaging, sales, logistics, marketing, finance, HR, to allow access from mobile devices.
  • Corporate apps developed to meet a specific requirement (or business opportunity) of the mobile workforce. This is an emerging category of apps that interface with corporate systems, but aren’t simply replicating desktop service.

    At present 50% of apps fall into the first category, according to Nicholas McQuire, VP, enterprise research, CCS Insight. The remaining 50% are bespoke corporate apps, falling largely into the second category. CCS research finds that mobile access to back-end systems grew 40% in 2015.

    Clearly, in the latter two cases, there is a lot more to this than building a few mobile apps. This process – often referred to as “mobilization” of business or “enterprise mobility” – can require considerable changes to IT systems and business operations.

    Does enterprise mean big business?

    Enterprise mobile apps are being adopted by large businesses, such as Ottawa Hospital, which allows doctors to access and update patient records from the bedside (see: case study), but are equally applicable to small businesses.

    London beauty stylist Blow LTD has a mobile-based booking system that takes orders from clients for a hair, nail or make-up appointment, via the client app, then automatically allocates the job to the appropriate stylist, according to their skills, availability and location.

    Ned Hasovic, Interim CTO, Blow LTD explains:

    Our approved and vetted stylists use our Stylist App to set their availability date & time and set their skills and qualifications (which we approve). All significant stylist activity is managed in the app, such as reporting to us that they are ‘Out to Serve’ when they are on their way to customer.

    We use the notification services of iOS and Android devices to alert the relevantly skilled stylists, leading to the job approval screen where they see all the details they need. Most of the time the customer services do not need to interact at all with any stylist or customer, as everything is automated and pre-empted regarding matching stylist to a customer and getting the job done.

    Best practice tips

    Before you start:

    • Consult with stakeholders – ensure you develop apps that employees need, want and will use. Conduct stakeholder interviews and initiate feedback mechanisms.
    • IT Infrastructure – establish a foundation (or platform) and process that facilitates the development and deployment of numerous enterprise mobile apps – not just the ones demanded today.
    • Consider development alternatives – enterprise mobile apps do not need to be native. If you do not have the skills, time or money to develop native apps for all the devices used by your employees, consider web or browser-based or hybrid (a web app with added native functionality). Nor does all development need to be bespoke. Vendors, such as IBM, are developing a library of apps specifically for industry verticals.

    On design, Blow’s Hasovic offers six best practice tips:

    • Clear, focused content – the purpose needs to remain simple and obvious to the first-time user of the app.
    • Simple menu navigation – the menus must be as simple as humanly possible. All services must justify their existence to reduce information overload.
    • Fluid layouts – develop UX with visually scalable components which can scale up to iPad/Desktop and down to our app.
    • Keep forms to minimum – use app sensors such as phone location services to reduce form filling as much as possible and to make the experience as natural and intuitive as possible.
    • Use phone sensor advances – such as Compass, Gyro and GPS to generate smart notifications to maximize benefit to customer and business.
    • Optimize app size – megabytes matter. Ensure your app is not taking up too much space on the user’s phone. Optimize images and clean up the app to remove unused resources before release.

    And on deployment, Steven A Watt, chief information officer at the University of St Andrews, Scotland, offers one tip based on his experience of deploying a mobile app to give field workers access to a job management system:

    • Staff training – don’t underestimate the amount of training required for staff who have lower levels of digital literacy.

    What is driving growth in enterprise mobile apps?

    The number of enterprise mobile apps will double over the next two years predicts McQuire at CCS Insight. This growth is being driven by employees and business lines, rather than IT.

    80% of employees surveyed by CCS in February 2015 said mobile technology was critical to getting their job done and 41% of employees said mobile apps had changed how they work.

    However the lack of suitable apps available from employers means employees commonly use packaged consumer or SAAS (software as a service) apps, including the file-sharing app Dropbox.

    Employees using Dropbox for work has contributed to it becoming home to 35 billion Office documents, spreadsheets and presentations, according to the company.

    Despite their keenness to use mobile apps at work, 80% of employees have never asked their IT department for an app, according to the CCS survey. The majority expect the IT department to ignore or refuse the request if they did submit one.

    The danger with business driving mobility

    Investment in bespoke enterprise apps is often driven by business departments, on a case-by-case basis, rather than as a corporate wide mobility initiative, led by IT, or a focused digital transformation or mobile team.

    69% of company spending on mobility is funded from outside IT departments, coming from the marketing/customer service, sales and operations budget, according to another CCS survey.

    This business focus can sometimes cause a problem for app performance, and thus adoption, explains Martin Wrigley, executive director of the App Quality Alliance (AQuA), an industry body that offers free testing criteria for mobile apps.

    Often when people are developing a solution for a mobile device, the people doing it are experts in their particular domain, e.g. salesforce management, or coding core functions, such as getting the weather or stock prices; but not necessarily in mobility or the behavior of mobile devices.

    They’re very good at testing the functionality – i.e. that it gets the weather updates or the stock prices in a nanosecond, or gets the mileage calculation right – but they can forget that it is running on a mobile device in a shared environment with restricted resources. The typical failings people forget are what happens when you have an incoming phone call or text, or what happens when you lose connectivity.
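Wrigley’s point about losing connectivity can be made concrete. Below is a minimal sketch in TypeScript – `withRetry` is a hypothetical helper, not part of AQuA’s criteria or any specific framework – of wrapping a network call with exponential backoff so the app degrades gracefully instead of freezing when the connection drops:

```typescript
// Hypothetical sketch: retry a flaky network call with exponential backoff,
// so a dropped connection degrades gracefully instead of freezing the app.
type AsyncFn<T> = () => Promise<T>;

async function withRetry<T>(
  fn: AsyncFn<T>,
  maxAttempts = 3,
  baseDelayMs = 100,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn(); // succeed as soon as the call goes through
    } catch (err) {
      lastError = err;
      // Back off exponentially between attempts: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) =>
        setTimeout(resolve, baseDelayMs * 2 ** attempt),
      );
    }
  }
  throw lastError; // surface the failure to the UI rather than hanging silently
}
```

In practice the same wrapper could sit around any fetch of weather, stock prices or mileage data, with the final thrown error triggering a “you appear to be offline” notice to the user.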

    Top 10 failures in enterprise mobile apps

    According to AQuA, the same quality assurance problems come up time and again for enterprise apps – regardless of app type or development method (native, hybrid or browser-based):

  • User interface inconsistency – failure to keep menu options, button labels, soft keys, menus etc. consistent, clear and conform to company standards.
  • Lack of clarity of graphics and text – text is unreadable, unclear, runs off screen or overlaps other items.
  • App browsing confusion – navigation is not intuitive.
  • Language inconsistency and spelling errors – evident lack of proof-reader or spell checker. Multi-language apps often make translation errors or miss labels in original language.
  • Omission of privacy and/or data security policy – no excuses.
  • Hidden features – failure to alert user to what is happening behind the scenes, whatever the motive, will be unpopular.
  • App crashing – often a simple, common event on the device, memory card, keyboard or attachments will cause an app to crash.
  • No help – lack of clear, written, step-by-step help.
  • Network connection – app freezes, or fails to notify user, when the network connection drops or the device switches network, e.g. from carrier to WIFI.
  • Screen orientation distortion – images distort when changing from portrait to landscape and vice-versa.

    What is the difference between enterprise mobile apps and a) enterprise applications and b) consumer apps?

    The enterprise mobile app is a misleading amalgamation of two terms: enterprise applications – the monolithic software systems that run the finance, planning, sales, human resources etc. of large companies – and mobile apps – the small (usually) consumer-orientated software tools for smartphones. But there are important differences from both.

    Enterprise applications are used for multiple tasks, by multiple types of user; while enterprise mobile apps are usually designed for one or a few specific purposes (like a consumer app).

    Enterprise applications are usually server-based, accessed via a fixed network from a desktop, while enterprise mobile apps need to be able to operate semi-independently, so they continue to work if the mobile connection becomes unavailable.

    Consumer mobile apps are often standalone (with notable exceptions, such as commerce apps) with little integration into backend systems; while enterprise mobile apps often require more extensive integration, involving adaptation of middleware and opening up of APIs (application programming interfaces) into corporate systems.

    As employees and partners have diverse requirements, companies may need to develop considerably more enterprise mobile apps than they have consumer apps or enterprise applications.

    Phil Buckellew, vice president of enterprise mobile at IBM explains:

    The key difference between consumer and enterprise apps is that most businesses typically only have a single mobile app for consumers. Think about a mobile app for an airline. The consumer sees a single app that allows them to check in, change seats and book a flight.

    That same airline would have many more internal enterprise mobile apps each with a specific function to streamline productivity such as airplane maintenance, supply chain, baggage tracking and selling items to passengers on board.

    What’s holding companies back from developing enterprise mobile apps?

    CCS’ McQuire identifies four challenges holding back development of enterprise apps:

  • Proving ROI – it is difficult to quantify the return on investment of mobilizing many parts of the business. This is why enterprise apps tend to focus on customer experience and sales enablement, e.g. allowing customers to sign for deliveries digitally, where ROI is easier to prove.
  • Skills gap – native development skills are scarce and expensive – even banks struggle to retain developers.
  • Building out the infrastructure – it’s not just about front-end app development, modernizing the middleware and creating the APIs to allow apps to interact with back end systems can be much more of a challenge.
  • Changing the way the business operates – it is essential to examine how employees will and want to work in the future. Companies such as the UK’s National Rail have methodology in place to capture the requirements of their workers.

    Todd Anglin, chief evangelist, VP technology & developer relations, Telerik:

    One of the most costly mistakes is building the wrong app. Studies suggest that most enterprise apps fail because users never open them. Apps that don’t solve the real problems users have, in the context of their device, end up in the growing app “dust bin” we all have on our phones and tablets.


    Useful resources for testing mobile apps include:

    • The App Quality Alliance (AQuA) Online Testing Criteria tool
    • AT&T Application Resource Optimizer
    • The Open Web Application Security Project (OWASP) Testing Guide

    This is Part 14 of the ClickZ ‘DNA of mobile-friendly web’ series.

    Here are the recent ones:

    • Assessing the technical and operational feasibility of your mobile project
    • Show me the money: proving your mobile site or app will deliver ROI
    • Formulating the go-to market strategy for your mobile project
    • How to market your mobile site or app without spending a fortune on ads
    • The pros, cons and politics of hybrid mobile apps
    • Digital transformation: what it is and why it was the unofficial theme at MWC
    • Connected cars offer valuable opportunities for marketing your brand today
    • Everything you need to know about building apps for connected cars

    How organic search might influence the 2016 US election


    Search data has always been powerful, but it becomes even more significant when we’re able to understand what people search for in relation to the 2016 US elections and how this may affect the final outcome.

    Every search query reflects a voter’s question regarding the candidates or specific topics and this could eventually affect the sentiment (and the preference) towards the elections.

    Linkdex has released a search report named the United States of Search and it offers interesting interactive data by comparing the candidates, the topics and even the websites, both across all the states and divided state-by-state.

    According to the analysis of the search results, Donald Trump is the most searched candidate among all states, occupying 31.59% of all the searches, with Bernie Sanders being second with 21.19%, while the other candidates are much further behind the first two.

    Still on a country-wide level, the most searched topics are: education (20.33%), religion (13.46%), foreign policy (11.81%) and gun control (8.58%).

    Search data allows candidates to measure the interest, but also the variations from state to state and this leads to useful conclusions regarding the opportunities they may explore, both by state, but also by topic in order to boost their favourability.


    For example, according to the results below:

    • 44.6% of candidate-related searches in Georgia are about Donald Trump
    • 44.7% of candidate-related searches in Vermont are about Bernie Sanders.
    • there are 23.5% more searches for Hillary Clinton in Vermont than the national average
    • there are 14.9% fewer searches for Donald Trump in Vermont than the national average


    What’s more, it has been observed that voters turn to Wikipedia as their primary source of information regarding the presidential election, with Facebook being their second choice. This suggests that people either turn to what they consider a trusted source, or rely on their filtered Facebook timeline (and the relevant Facebook pages) for more details.


    Still, if they really have to visit a candidate’s site for further information, they are most likely to visit berniesanders.com, with donaldjtrump.com being their second choice and tedcruz.org third.

    There’s also the search for trump.com, which highlights the voters’ need to seek more information about Donald Trump (or they simply might have been confused between the two sites).


    Finally, if we measure the candidates’ search performance over the past 30 days with Google Trends, we see once again a clear advantage for Donald Trump, whose campaign seems unstoppable in terms of media domination, although Bernie Sanders remains stable as the second most searched candidate.


    Can search results predict the winner?

    An analysis of the search queries leads to very useful conclusions, both about the candidates’ popularity and about the voters and the way they use online search to stay informed about the presidential elections.

    A state-by-state comparison showcases the different sentiments among voters and the varying topics they are mostly interested in, which helps us understand how people ultimately vote for their favourite candidates.

    However, it is as yet uncertain whether search results can predict the big winner of the US elections, as Donald Trump seems to have significantly “disrupted” the game, winning the media impressions with his constant presence in the news, which in turn increases his search volume.

    Yes, Donald Trump is the big winner of the search results so far, but Bernie Sanders is consistently second (a win in itself, judging by the disruption Trump has caused), and we can’t ignore Hillary Clinton and her well-rounded digital campaign.

    2016 state of link building survey: five important takeaways


    Links are still one of the most important ranking factors in Google search.

    This was confirmed as recently as March, when Andrey Lipattsev, a Search Quality Senior Strategist at Google, revealed links, content and RankBrain as the top three most important ranking factors.

    This wasn’t surprising information (at least to SEOs), but it was an important confirmation. Links and content are still the fundamental building blocks of the web.

    But what is the state of link building? We know links matter greatly to search, and that competitive search requires links. Brian Dean of Backlinko analyzed 1 million Google Search results and found “the number of domains linking to a page correlated with rankings more than any other factor.”

    And BuzzSumo and Moz also found that, within a sample of 750,000 well-shared articles, over 50% had zero external links.

    Links matter, but it’s not as simple as hitting publish—or even promoting and achieving recognition on social platforms.

    With that in mind, my fellow Page One Power colleague Nicholas Chimonas teamed up with John Doherty of Credo to create a 2016 Link Building Survey which was published on Moz on Tuesday 5 April.

    Nicholas, John, and Moz all used their considerable influence to reach as many people as possible.

    Nicholas, John, and the editors at Moz (namely Felicia Crawford) spent time digging into the data and pulling out the pertinent information. It’s a long, thorough post and I highly recommend you go read it once you’re done here.

    But instead of rehashing that information, I’m going to pull out my own top five takeaways from the 2016 Link Building Survey.

  • Many SEOs bundle link building, and integration seems to be increasing.
  • Link building is done mostly in small teams.
  • Time spent building links is greater than the typical budget for link acquisition.
  • Content-based link building is the most popular tactic, and viewed as the most efficient.
  • Most respondents still identify with the term link building.

    The data from the 2016 link building survey

    The survey had 435 respondents.

    Of those 435 respondents, 180 were agency, 118 in-house SEO, 51 consultants and freelancers, 41 business owners, 39 in-house content marketers, and 6 in-house PR.

    These respondents serve a variety of clients, with a majority selecting SMBs and a smaller sampling (106 respondents) highlighting enterprise level clients.


    As this breakdown shows, the sample is fairly varied.

    Let’s get into the takeaways…

    1. Many SEOs bundle link building as a service. Integration is increasing


    There are two competing truths in the SEO world: every website needs links to perform well in search, and it’s hard to put a price on links.

    On one hand, it’s difficult to guarantee links. Every SEO is armed with a wide variety of tactics which can result in links, but what’s effective varies greatly from client to client and niche to niche.

    Securing links is often time consuming, difficult, and requires specialized knowledge. That inherently makes it a valuable skill.

    On the other hand, links are extremely hard to put a monetary price on, especially per individual link. As cited before, there are many studies showing the value of links, Google continues to stand behind links as a key ranking signal, and SEOs all have first-hand experience with the value of links in ranking.

    But those all look at the larger picture. It’s near impossible to say definitively how many links, of what type, built over what timeframe, will lead to specific rankings.

    This means that link building requires a custom campaign, necessitates an ongoing project, and needs to look at the big picture.

    In my opinion, link building is becoming more and more ingrained in online marketing practices. In fact, I predicted that links would gain value as a KPI in online marketing in 2016.

    I can’t be sure that’s happening, but I am happy to say that the majority of respondents reported more integrated link building practices.


    2. Link building takes place mostly in small teams


    Link building is a niche within a niche. Often the responsibility of securing links falls to SEOs and SEOs alone (although as mentioned before, I believe that might be changing).

    This means that link building is often done either alone (33% of respondents) or in teams of 2-5 (48% of respondents).

    Even at the agency level, link building seems to take place mostly in small teams. Very rarely is it done in large teams, although 5% did report building links in teams of 16 or greater.

    Considering the fact that SMBs were the largest percentage of clients reported in this survey, small teams make sense. Link building is a very specialized task, and SMBs don’t need large dedicated teams. A small team—or even just a single person—can get the job done.

    3. Time spent building links is greater than SEO budget allocation

    This information is gleaned from two similar but different questions.

    What percentage of your typical client’s overall SEO budget is dedicated to link building?

    percentage budget allocation to links

    What percent of your SEO work/campaigns is focused on link acquisition?

    percentage SEO work on links

    Plainly stated, respondents report more time spent on link acquisition than the budget allocations suggest. SEOs are either working for less when building links, or using budget from other activities to cover time spent building links.

    This disparity, in my opinion, goes back to the charging-for-links conundrum. Links are a core portion of an SEO’s job, but clients aren’t overeager to spend budget on links.

    Links can be somewhat intangible as a singular product.

    The disparity between budget allocation and percentage time spent building links is a conversation SEOs need to have.

    4. Content-based link building reigns supreme

    Content-based link building was picked as both the most effective tactic and the most popular.

    Link building tactics used

    Link building tactics most effective

    What’s interesting about these results isn’t just that content is the most popular tactic, it’s how much more effective SEOs consider content for building links compared with other tactics.

    I believe there are a few reasons for this:

  • Convincing others to link to you requires some sort of USP/value for the linking site. Content is one of the surest methods to create value.
  • Creating content, or being involved in the process, gives SEOs more control over creating linkable assets, no matter the client. It’s fairly rare in my opinion to have a client start a campaign with a plethora of linkable assets which lead to an effective campaign. More often it’s easier to create new content designed to help secure links.
  • Content marketing is on the rise, which is creating ample opportunities for SEOs to build links. Link building in the wake of content marketing has proven effective for my company.
    5. Most respondents still identify with the term “link building”

    link building terminology

    Whether or not to call the practice “link building” has been somewhat contentious in the last few years.

    In fact, I wrote about why I will continue to use the term “link building” less than a year ago here on Search Engine Watch.

    The truth is, I actually believe we should use all these terms (aside from content marketing – I believe they’re two very different practices, with different focuses, that can occasionally result in some similarities).

    I believe that to build good links that matter, you have to deserve them. You have to earn them – even if the potential linking site isn’t aware of it. Then it’s just a matter of letting them know about why you deserve the link. And a natural part of that equation involves building relationships.

    Regardless, it was nice to see the majority of people respond with “link building” – it demonstrates that even though the practice has shifted greatly in the last four years, it hasn’t fundamentally changed enough to really necessitate a rebrand.

    Let’s have our links and build them too.

    When is a search engine not a search engine? When it’s Google, says the EU

    An image of a computer delete key.

    The European Union’s Network and Information Security (NIS) Directive is a piece of legislation, due to be adopted this spring, which lays out the first set of EU-wide cyber security rules.

    The final text was agreed on in December 2015, setting out a range of strategies for tackling and preventing cyber attacks and disruptions. Its proposals have been met with a lukewarm reception from the industries it covers, which include everything from digital technologies to social networks, financial institutions and ecommerce sites.

    Now, it seems like search engines might have the biggest bone to pick with the EU’s Directive – because according to the legislation, they aren’t search engines at all.

    The NIS Directive has settled on a definitive definition of an ‘online search engine’: what it is and what it does. According to the Directive:

    “‘Online search engine’ is a digital service that allows users to perform searches of in principle all websites or a geographical subset thereof, websites in a particular language on the basis of a query on any subject in the form of a keyword, phrase or other input; and returns links in which information related to the requested content can be found.”

    The key part of that quotation is “in principle all websites”. Because Google, although it indexes the vast majority of websites on the internet, draws a number of lines about what it will include in search results. It doesn’t index Tor websites, and complies with ‘robots.txt’ requests from website owners who don’t want Google to index their pages.

    Google also complies with the EU’s own Right to Be Forgotten ruling, which allows users to request that outdated or irrelevant content be removed from Google’s listings. Other types of web pages that Google de-indexes include copyright infringing content, revenge porn and ‘mugshot extortion’ websites.

    Google de-lists a range of websites from its search results, including those removed under the EU’s Right to Be Forgotten ruling. (Image by Ervins Strauhmanis via Flickr, some rights reserved)

    So Google doesn’t, in fact, index all websites in principle. But does any search engine? So far, no – and that’s the biggest irony of the EU’s new ‘search engine definition’; there is no such engine that actually complies with it. Bing, Yahoo, DuckDuckGo – all of these have policies which mean that they don’t in principle index all websites.

    Does the new definition actually mean anything for the search engine industry? I highly doubt it. The current players in the search engine market have spent years refining their methods, and aren’t going to suddenly change tack because of an already fairly unpopular EU directive.

    At most I suppose the legislation might affect Google and its cohort’s ability to legally use the term ‘search engine’ within the EU to describe their business, but any challenge to such a well-established term is likely to provoke a huge backlash.

    If, of course, a search engine enters the arena and decides that it wants to be the first and only search engine to index any and all websites, best of luck to it. It will definitely be an interesting venture. But it probably won’t have anything to do with the EU.

    Improve your PPC in just 25 minutes this week

    adwords dashboard

    Been ignoring your PPC account for a while? Or perhaps you’re onboarding a new PPC client that needs some immediate love? I’ve got a plan for you…

    In just 25 minutes, you can start mending the health of any PPC account, starting this week.

    These tips will help you save wasted ad spend and boost efficiencies immediately while you work on a long-term plan in the background to re-launch a PPC strategy.

    Let’s have a closer look at the five steps in your new PPC regimen – each of which should only take about five minutes.

    Step 1: slash wasteful ad spend

    In your AdWords dashboard via the search terms tab, set your date range for the past 30 days, then sort terms by the highest spenders. Do you see any irrelevant or very broad terms that are spending tons of dough? If so, negate those.

    We audited a PPC account once that was spending $1,000 a day on broad ‘jeans’ themed keywords, and it had a terrible conversion rate, as you might imagine. Search terms coming in included many irrelevant variations on ‘jeans’.

    So we negated the exact match for ‘jeans’ but let the long-tail keywords like ‘women’s black skinny jeans’ still work their magic – and voila! Instant budget saver.

    Step 2: audit conversions

    This one is especially important for inherited PPC accounts. First, go to the tools tab of your AdWords dashboard, then go to the webpages link to view website pages that are reporting conversions.

    As part of your PPC revamp strategy, you’re going to want to follow the money, and that means first understanding if the conversion tracking is set up correctly on those pages.

    (Note: getting this right can turn into a big task quickly, but this five-minute version is simply for verification, then you can fix as needed).

    One account we inherited was showing a ton of conversions, but the conversion tracking setup was incorrect in that it was tracking visits to the home page versus tracking conversions on thank-you pages and/or checkout confirmation pages.

    Step 3: stop non-performers

    In the next five minutes, pause the poorly performing campaigns, ad groups or keywords. First, start at the campaign level and set your date range for 30 days, then sort by the highest cost per acquisition.

    You might see the highest CPA campaign is, say, $1,000, and from there, you’d start investigating. Dig deeper by clicking into the campaign and seeing if it’s a runaway ad group causing the problem, and within that ad group, it may be just a few keywords that are the culprit.

    Make sure you have all the information before you pause anything; it could be a higher priced item causing the higher CPA, which would make sense. Then, at your discretion, start pausing those high CPAs, and start making a plan to optimize.

    Step 4: find the top ad creative

    In AdWords, you can easily find and sort your ad copy creative to identify those that performed well, and then carry those into your new strategy.

    Let’s say you ran a spring sale, and wanted to see which ad copy brought in the most clicks. You’d simply head over to your AdWords dashboard and go to the ads tab, then search for ‘spring sale’ and see which ads had the highest CTR.

    adwords search

    Or, you can use the filter dropdown to further search the ad messaging:

    adwords filter

    If something is working well, then you don’t have to reinvent the wheel in your new PPC strategy.

    Step 5: review ad extensions

    In your last five minutes, perform an account-wide extensions review. As you may know, ad extensions factor into how your ads perform via AdWords’ Ad Rank, and we’ve come across some accounts that have no extensions at all.

    To remedy that, you can quickly set callouts and structured snippets at the account level, as well as setting up general sitelinks and associating them to all campaigns.

    (For more on rethinking your ad extensions, check out an earlier post I wrote, here.)

    Now that you’ve boosted your PPC fitness

    So you’ve completed your 25-minute PPC regimen, and now you’re ready to focus on implementing your comprehensive strategy to improve the long-term health of your PPC account.

    As a reminder, this regimen is meant as a quick fix to get things moving while you strategize or wait on client or stakeholder approvals.

    Taking those first, proactive steps helps you turn ideas into action so you can move the needle now while you wait, sometimes for months, on approvals for your new PPC plan.

    Pauline Jakober is CEO of Group Twenty Seven and a contributor to Search Engine Watch. You can follow Pauline on Twitter.

    24 slightly depressing stats on the fall of Twitter


    Despite its die-hard loyal user base, Twitter has been doing nothing but rubbing its fans up the wrong way for the last 12 months.

    Whether it’s been the adoption of a dreaded algorithm, a mooted 10,000-character limit increase, the killing of its share counts, the pointlessness of ‘Moments’, heck, even changing its ‘favourite’ button from a star to a heart was a massive thing, there’s a good chance that if you log in to Twitter you’ll find people kicking off about Twitter.

    Much of this is down to the very fact that its users, especially its long-term ones who rode the crest of its initial popularity, are so loyal to the channel and don’t want to see it mutated beyond its unique purpose: serving concise sound-bites and sharing links in real-time.

    But as Twitter deals with stagnant user growth – partly due to a trend away from public social towards private messaging, partly because Twitter is difficult for new adopters to master – the company has to try new things in order to stay relevant.

    Tomorrow sees our second ever weekly #ClickZChat, where the good people of SEW and ClickZ take to Twitter to ask our expert friends and followers about a particularly burning digital marketing related issue.

    Last week, we discussed whether or not brands have reached peak content and is it possible to cut through the noise? You can read everyone’s comments in the round-up post: have we reached peak content?

    Tomorrow we’ll be talking about social media and the ‘death of Twitter’ in particular, so please join us at 12pm EST (5pm UK) on Wednesday 6 April.

    As preparation for the discussion, I’ve pulled together as many stats relating to Twitter as I could possibly find, paying particular attention to its ‘rise and fall’.

    Please note: many of these stats were published in a ClickZ article by Leighann Morris from last year, and I have updated the numbers wherever possible.

    Active users

    If you go by Twitter’s official numbers, by December 2015 Twitter had 320m monthly active users.
    80% of Twitter’s active users are on mobile.
    79% of Twitter accounts exist outside the US.

    [source: Twitter]

    However, as reported by CNN, in February 2016 Twitter announced that it had lost 2 million users in the last three months of 2015.

    This compared to a year earlier, when Twitter’s customer base grew by just 6%.

    So Twitter ended 2015 with 305 million active users. By contrast, Facebook has 1.6 billion and Instagram surpassed Twitter in September, growing to 400 million users.

    There are 391 million Twitter accounts with no followers.

    Tweets sent

    500m tweets are sent per day.

    From Twitter’s launch in 2006 until 2009, the volume of tweets approached a 1,400% gain in daily volume year to year and around a 1,000% gain in yearly volume.

    By mid-2010 the rate of growth had slowed, falling below a 100% yearly gain in 2012.

    Today, the volume of tweets is growing at around 30% per year.

    [source: internetlivestats.com]


    23% of all online adults use Twitter (a proportion identical to the 23% of online adults who did so in September 2014).

    Internet users living in urban areas are more likely than their suburban or rural counterparts to use Twitter. Three out of 10 online urban residents use the site, compared with 21% of suburbanites and 15% of those living in rural areas.

    Twitter is more popular among younger adults — 30% of online adults under 50 use Twitter, compared with 11% of online adults ages 50 and older.

    Twitter’s proportion of daily users remains unchanged from 2014 at 38%.


    The number of online adults who say they use Twitter has grown 7% since 2012 – but compared with other platforms, this is pretty slow growth.


    [source: Pew Research Centre]

    In July 2015, Twitter had 87.89 million unique visitors from the United States, down from 89.2 million visitors in April 2015:


    [source: Statista]

    Twitter’s value

    Twitter’s annual revenue in 2015 amounted to more than $2.22 billion, up from $1.4 billion the previous year.


    Twitter’s worldwide advertising revenue this year is valued at $2,444 million.

    Twitter’s stock price hit its first ‘all-time low’ when Statista reported that shares slumped 5.6% to $29.27 in August 2015, pushing Twitter’s market value below $20 billion.


    [source: Statista]

    Analysts believed it would have to fall a further $10 billion for prospective tech-giant acquirers to seriously consider any kind of approach. And that may not be far off…

    Twitter’s stock price has slumped further from $29.06 in October 2015 to just over $17 (as of April 2016).

    October 2015:


    April 2016:

    Twitter’s value via Google Search

    Twitter hit a new record all-time-low share price in February 2016, at just $14.31.

    As Chris Lake suggested in his 17 things Twitter can do to transform its fortunes, Twitter is now valued at around the $12bn mark and that, “For a company that should post revenue of at least $2.3bn for 2015, its market price will be whetting the appetites of prospective acquirers.”

    And finally…

    The most retweeted tweet on Twitter is still the Ellen Degeneres Oscars selfie from the beginning of 2014, which has now been retweeted more than 3 million times:

    If only Bradley’s arm was longer. Best photo ever. #oscars pic.twitter.com/C9U5NOtGap

    — Ellen DeGeneres (@TheEllenShow) March 3, 2014

    Do you believe Twitter has a future? We’d love to hear your opinions so please join us for #ClickZChat.

    How to fix discrepancies in your web analytics data

    Transaction data comparison

    Google Analytics, like every web analytics tool, does not deliver perfectly accurate data.

    There are many reasons behind this including:

    • Visitors don’t allow JavaScript
    • Visitors block cookies
    • Pages missing code
    • Location of the code on the page

    There are certain actions though that can be exactly measured, as they are recorded in back end systems.

    We know exactly how many transactions are placed, leads are generated, contact forms submitted, etc. For these actions, we can audit the accuracy of the web analytics and identify errors in the tracking.

    This is critically important as these actions are nearly always macro conversions and, if they are not tracking correctly, we cannot evaluate the performance of marketing campaigns or the impact of website features.

    The problem

    These actions can never be recorded 100% accurately in any web analytics tool (you should not try and report your revenue to the tax office using web analytics data) but they should only be 2%-3% off reality.

    If the difference is more than 5% (with a large enough sample size), you have an issue somewhere in your tracking. The code works, or no data would be collected at all, but it is not correctly submitting measurements to the web analytics tool in all cases.


    To simplify the language within this blog post, I will use transactions on a Magento ecommerce website compared with Google Analytics data as the example for the remainder of the post.

    The first step is to check if there is a discrepancy.

    Extract daily orders for 8 to 12 weeks for both Magento and Google Analytics and then compare performance at a daily level. As long as order volumes are high enough, the discrepancy should be fairly consistent for each day.

    Ideally Magento should report transactions slightly higher than Google Analytics but no more than 5%. If the difference is more than that, you have an issue.
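    As a rough illustration of the daily comparison described above (with entirely made-up numbers; in practice you would export these figures from Magento’s sales reports and the Google Analytics Ecommerce reports), the check might be sketched as:

```python
from datetime import date

# Hypothetical daily order counts extracted from each system.
magento_orders = {
    date(2016, 4, 1): 120,
    date(2016, 4, 2): 95,
    date(2016, 4, 3): 110,
}
ga_orders = {
    date(2016, 4, 1): 117,
    date(2016, 4, 2): 80,   # suspicious gap on this day
    date(2016, 4, 3): 107,
}

def daily_discrepancy(backend, analytics, threshold=0.05):
    """Return days where analytics under-reports backend orders
    by more than the threshold (default 5%), as a percentage."""
    flagged = {}
    for day, backend_count in backend.items():
        analytics_count = analytics.get(day, 0)
        gap = (backend_count - analytics_count) / backend_count
        if gap > threshold:
            flagged[day] = round(gap * 100, 1)
    return flagged

print(daily_discrepancy(magento_orders, ga_orders))
```

    A consistent 2%-3% gap on most days is normal; what you are looking for is days (or a run of days) where the gap jumps past the 5% mark.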

    The key reasons for a difference in transactions recorded within Google Analytics (or any analytics tool) and Magento are:

  • Orders placed over the phone/offline are recorded in Magento but are not captured in Google Analytics
  • Orders placed on computers using internal IP addresses are recorded in Magento but not in the Google Analytics View (as exclusion filters applied)
  • There is a certain device/browser/browser version where the Google Analytics transaction tag does not fire
  • There is a payment method where the visitor doesn’t return to the Order Confirmation page, therefore never triggering the Google Analytics transaction tag
  • There are certain variable values that break the code e.g. a product name that contains “ or ;
  • The amount of information included within the tag is too long (e.g. lots of product information is captured, exceeding the limit of 8,192 bytes)
  • The visitor leaves the Order Confirmation page before the Google Analytics transaction tag can be fired
    Identifying the cause

    The challenge is to identify which one (or more) of these apply to your business. The first two reasons can be identified through an internal investigation into what data is being recorded in each of Magento and Google Analytics.

    For the second, check into what filters are applied and/or create a new Google Analytics View with no filters applied to see if that changes the data.

    The third reason requires some analysis within Google Analytics.

    Check the conversion rate for each device and browser version. If it is 0% for a certain option (with a decent number of sessions), you may have identified the culprit(s). Dig further into the data, or even place a test transaction on that device/browser to confirm that the transaction code isn’t firing correctly.
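    The zero-conversion check above can be sketched as a simple filter. The browser names and numbers here are hypothetical; you would export the real figures from the Google Analytics device and browser reports:

```python
# Hypothetical per-browser session/transaction counts.
browser_stats = {
    "Chrome 49":  {"sessions": 5000, "transactions": 110},
    "Firefox 45": {"sessions": 1200, "transactions": 30},
    "Safari 9":   {"sessions": 900,  "transactions": 0},   # suspicious
    "IE 8":       {"sessions": 40,   "transactions": 0},   # sample too small
}

def suspect_browsers(stats, min_sessions=500):
    """Browsers with a decent sample size but zero recorded transactions."""
    return [name for name, s in stats.items()
            if s["sessions"] >= min_sessions and s["transactions"] == 0]

print(suspect_browsers(browser_stats))
```

    The `min_sessions` cutoff is there to avoid chasing browsers that simply didn’t have enough traffic to convert at all.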

    For reasons four to six, extract a list of the transactions from Magento and Google Analytics for three non-sequential days during the previous period (make sure these days contain the typical discrepancy) including the transaction ID. Compare the two lists using the transaction IDs and identify the transactions which are not recorded within Google Analytics.

    Review these transactions for patterns of payment methods, particular products or just very large transactions. The challenge is that some missing transactions were just not recorded while others should fit the pattern of one or more of the above reasons.
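    The list comparison itself is just a set difference on the transaction IDs. A minimal sketch, with hypothetical IDs standing in for real exports:

```python
# Hypothetical transaction IDs exported for the same day from each system.
magento_ids = {"100001", "100002", "100003", "100004", "100005"}
ga_ids = {"100001", "100003", "100005"}

# Transactions recorded in the back end but missing from analytics;
# these are the orders to review for shared patterns.
missing_from_ga = sorted(magento_ids - ga_ids)
print(missing_from_ga)
```

    Once you have the missing IDs, look those orders up in Magento and check their payment methods, products and order values for a common thread.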

    For the final reason, check the location of your Google Analytics code on the page. If it sits low in the page rather than near the top (e.g. in the <head> or immediately after the opening <body> tag), visitors may leave before it fires, and that could be the cause.


    Once the cause of the discrepancy has been identified, it should naturally suggest the solution. These solutions include:

    • Ensuring you are comparing apples with apples e.g. transactions placed by external visitors on the website
    • Adjusting the process for a payment method so that the visitor is returned to the Order Confirmation page
    • Fixing the transaction code so that it works for all browsers and devices
    • Removing or escaping all special characters within product variable names
    • Ensuring that tags don’t exceed the 8,192 byte limit
    • Changing the location of the transaction code on the Order Confirmation page

    Once you apply these fixes, the discrepancy should immediately reduce. Continue checking and making improvements until the discrepancy between your back end numbers and your web analytics numbers reduces to under 5%.
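    Two of these fixes, escaping special characters and respecting the 8,192-byte limit, can be sanity-checked in a few lines. This is a minimal sketch with a made-up product name; `json.dumps` is used here simply as one safe way to escape quotes before a value is embedded in a tag:

```python
import json

# Hypothetical product name; a raw double quote or semicolon in the
# name can break a naively string-built analytics tag.
product_name = 'Slim "Comfort" Jeans; Black'

# json.dumps escapes quotes and other special characters safely.
safe_name = json.dumps(product_name)

def payload_within_limit(payload: str, limit: int = 8192) -> bool:
    """Check that the encoded hit stays under the 8,192-byte limit."""
    return len(payload.encode("utf-8")) <= limit

print(safe_name)
print(payload_within_limit(safe_name))
```

    For the byte limit, note that the check is on encoded bytes, not characters, since non-ASCII product names can take more than one byte per character.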

    One final note: web analytics data can also be higher than that recorded in back end systems. This would be the case if duplicates are recorded in GA but automatically excluded in back end systems (e.g. for transactions), or if data has been cancelled out of the back end systems, e.g. cancelled orders or fake leads.

    Have we reached peak content? Insights and issues highlighted by #ClickZChat

    We’ve spoken a lot about the problem of peak content recently.

    With so many more businesses now adopting publishing models to reach their audience and focus on inbound, it is becoming harder for both users and distributors to cut through the noise and uncover the really useful information out there.

    Of course, it’s fairly easy for us to spout opinion on this issue, but we wanted to know first hand how this is affecting marketers, so we decided to kick off our inaugural #ClickZChat on Twitter by asking our followers about the issues and possible solutions.

    We’re kicking off #ClickZChat here at noon EST today – get ready to tell us what you think about Peak Content. pic.twitter.com/qWvIyPOM7d

    — ClickZ (@ClickZ) March 30, 2016

    We decided to start by asking: is there really an issue here? Do you believe that we’ve reached (or are heading for) peak content? The point when there is so much information available that it becomes effectively useless?

    Q1: Do you believe we have reached ‘peak content’? Why or why not? A big question to kick off #ClickZChat! pic.twitter.com/m5UVSLNEKa

    — ClickZ (@ClickZ) March 30, 2016

    Emma_SEO weighed in on this, asking if ‘peak content’ was simply part of the eternal marketing search for the most relevant customer channel:

    @sewatch A1. To say we have, first we must understand what is “Peak content”,are we not simply looking for new and better comms? #ClickZChat

    — Emma P (@Emma_SEO) March 30, 2016

    With so much happening, it can be difficult for businesses to gain attention. Is the focus now too heavily on broadcast, moving away from genuine interaction? CatalystSEM’s SEO Director Paul Shapiro agreed that we seem to be concentrating on volume rather than value:

    A1: There’s definitely a bounty of poor quality content out there–which sucks. #clickzchat

    — Paul Shapiro (@fighto) March 30, 2016

    Q2: How can brands cut through all the content noise? Let us know your top tips, tools and tricks #ClickZChat pic.twitter.com/JCxRaTycCC

    — ClickZ (@ClickZ) March 30, 2016

    This search for audience may have left marketers feeling the need to ‘be everywhere’, however, often spreading themselves too thin across multiple channels. Agency Director Katie Bogda summed the issue up nicely:

    Everyone seems to expect quantity AND quality. #contentmarketing #clickzchat https://t.co/Z78vrcwSE3

    — Katie Bogda (@ktbogda) March 30, 2016

    While Search Engine Watch’s own Christopher Ratcliff pointed out that publishing organisations – whether traditional or those adopting the ‘brands as publishers’ model – need to find new ways to reduce volume and provide insight:

    @ClickZ I like what The Times is adopting. Stop chasing breaking news – instead concentrate on more insightful commentary #ClickZChat

    — christopher ratcliff (@Christophe_Rock) March 30, 2016

    Wayne Schilstra Team followed up here, pointing out that it wasn’t just about creating great content, but focusing on user intent. When and why do people need this content? Relevance almost always trumps volume:

    A2: Ask: At what point in their online journey will they look at our content? #ClickZChat https://t.co/Q6VrIkHDkq

    — Wayne Schilstra Team (@wayneschilstra) March 30, 2016

    Finally, with so much happening, can content still make waves? We wanted to know which creative examples had inspired you recently.

    Q3: Who is doing #contentmarketing really well? What makes you stop and pay attention? #ClickZChat pic.twitter.com/g5eukAb81H

    — ClickZ (@ClickZ) March 30, 2016

    We had a huge range of examples here, from Denny’s personalised tweets, O2 urging us all to ‘be more dog’ and movie marketing that can still make an impact a decade after it was originally conceived:

    @ClickZ Say what you will about iffy monsters, but Cloverfield is one of the best examples, across multiple mediums https://t.co/AUc8swyQxQ

    — Chris Williams (@christentive) March 30, 2016

    To finish on a lighter note, I’m going to big myself up at this point as I think Netflix has been doing some excellent work over the past year… and this conversation resulted in a fully-functioning House of Cards PollyHop search site.

    .@ClickZ A3: Loved what @netflix did for House of Cards last year – the fake election site is still up: https://t.co/tbcZABa4hW #ClickZChat

    — Matt Owen (@lexx2099) March 30, 2016

    Nice to see that social interaction can still take us in unexpected directions and provide standout creative: https://twitter.com/themick79i/status/715470399706771457

    @DFLovett I did it 😉 https://t.co/4qTkYmqXck @lexx2099 @ClickZ @netflix

    — Michele De Paola (@themick79i) March 31, 2016

    Key takeaways:

    Overall it seems that marketers believe that too much content is becoming an issue.

    The key here is to focus on intent and extraordinary value, rather than desperately hunting for updates to fill every social channel. Creating hub content and building spin-off micro-content by channel can be a far more effective method.

    By cutting down on volume, content creators also free themselves up to spend more time creating something truly useful.

    Thanks for all your #ClickZChat answers today – some great insights and examples! pic.twitter.com/UUfItC1zX9

    — ClickZ (@ClickZ) March 30, 2016

    Thanks to everyone who participated in #ClickZChat. We’ll be holding our next session over on Twitter at noon EST on Wednesday, April 6th when we’ll be talking about social media and Twitter in particular.

    Do you believe Twitter has a future? We’d love to hear your opinions so do join us then.

    21 quick ways to find inspiration for creating content

    fb saved

    There is an increasing need for online content, and both content writers and content marketers know the struggle of constantly seeking the next great post.

    It’s not always easy to come up with a content idea that will lead to an interesting post, especially when there’s time pressure or, simply, lack of inspiration.

    In order to maintain your productivity and feel inspired for your next post, it is helpful to get organised in advance and create a list of resources and tools that will help you find an idea for your next post.

    I’ve compiled just such a list to help you next time you run out of inspiration…

    Social media and trending topics

    Social media may turn into your primary source of inspiration for your next post’s topic, especially when you know how to organise what you come across from time to time.

    Facebook’s saved posts

    Facebook introduced the ability to save posts, links, pages and videos back in 2014, and this has significantly changed the way we consume content, both personally and professionally.

    Whenever you come across an interesting article, all you have to do is save it for future reference.

    What makes it special? The fact that we’re all using the most popular social network daily, which means that there are more chances to come across an interesting post (provided that you are already following the right sources that will inspire you accordingly).

    Twitter lists

    This is probably one of my favourite ways to consume content and it may be very helpful when you’re trying to follow the latest trends in the industries you’re interested in. It serves both as a source of inspiration, but also as an easy way to analyse what your competitors, or other influencers, are writing about.

    It might take a while until you create the right lists, but you won’t regret it once you realise how easy it is to keep up with the news.

    If you’re still unsure whether this may be helpful for you, feel free to check my list on content marketing to see how the sources (and their tweets) are displayed.

    Twitter chats

    Twitter chats offer useful insights, while also allowing you to join a discussion with other people who share your interests. Whether you follow your brand’s Twitter chats or join other popular discussions, you might be closer than you think to your next great topic.

    Thanks for all your #ClickZChat answers today – some great insights and examples! pic.twitter.com/UUfItC1zX9

    — ClickZ (@ClickZ) March 30, 2016

    Twitter likes

    Twitter likes, formerly known as ‘favourites’, are an indication of your approval of other people’s posts. However, they may also be used as a way to organise interesting posts, since your Twitter account displays all your likes in one useful list.

    If you find this organisation useful, you might improve it with the use of IFTTT. For example, you could automatically save your Twitter likes to Pocket, Evernote, or any other tool that helps you go back to the content you liked.

    (More on IFTTT below)

    Pinterest boards

    Pinterest is all about organisation and curation, and that’s what makes it useful when looking for your next topic. Whether it’s a group board on your favourite topics (full of new links and infographics) or a custom board of your own for collecting interesting articles, Pinterest may be surprisingly useful for content discovery.


    Pulse by LinkedIn

    LinkedIn’s own publishing platform, Pulse, attracts many influencers posting about a wide range of topics, and of course anyone can be part of it.

    It may not be our main source of information during the day, but there’s an impressive quantity of great content, which could educate you on a topic, inspire you to write about it, or even challenge you to differentiate yourself by approaching it from a different angle.

    Trending topics

    Both Facebook and Twitter display the topics users are discussing in real time, which has proved very useful for writers and marketers who focus on the news industry and want to analyse what’s popular at any given moment.

    Ask a question


    Yes, Quora, Yahoo Answers and the rest may help you beat writer’s block: you can ask a question of your own, or join the discussion in a popular question-and-answer community. After all, inspiration may be found on the most unexpected platforms.

    News aggregators

    News aggregators have aided content discovery for many years and, although they are not always among our top choices for daily use, they rarely disappoint.

    Feedly, PopUrls, The Old Reader

    There are many sites and tools that help you organise your favourite sources and topics, with Feedly, PopUrls and The Old Reader being only a few of the available options. It’s up to you to try them out and find the one that works best for you.


    Google Alerts

    As we can’t follow all our favourite topics throughout the day, Google Alerts helps us discover interesting content by compiling it according to the terms we choose to monitor.
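    Alerts can also be delivered to a feed rather than email, and a short script can then pick out only the results you haven’t seen before. Below is a minimal, standard-library-only Python sketch of that idea; the sample feed is illustrative, not a real alert.

```python
# Minimal sketch: pull previously unseen links out of an Atom-style feed,
# such as the kind Google Alerts can deliver. Standard library only;
# the sample feed below is illustrative.
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace


def new_links(atom_xml, seen):
    """Return links not yet in `seen`, adding them to `seen` as we go."""
    root = ET.fromstring(atom_xml)
    fresh = []
    for entry in root.iter(ATOM + "entry"):
        link = entry.find(ATOM + "link")
        href = link.get("href") if link is not None else None
        if href and href not in seen:
            seen.add(href)
            fresh.append(href)
    return fresh


sample_feed = """<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Fresh result</title><link href="https://example.com/new"/></entry>
  <entry><title>Old result</title><link href="https://example.com/old"/></entry>
</feed>"""

print(new_links(sample_feed, {"https://example.com/old"}))
# → ['https://example.com/new']
```

    Run periodically (say, from a scheduler) and with `seen` persisted to disk, this gives you a simple “only show me what’s new” view of your monitored terms.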

    Following the trends

    It’s useful to have an idea of what’s being talked about during the day, and besides the social networks themselves, there are many other ways to monitor the latest trends.

    Google Trends

    Google offers you interesting insights into the most popular topics, and it also allows you to compare up to five search terms and analyse their popularity.



    Trendsmap

    Besides Google Trends, there are many other sources for analysing the latest trends. Trendsmap helps you monitor what people are talking about depending on their location. This is another way to spot the viral topics of the day and write about any that are relevant to your industry.

    Buzzsumo and Ruzzit


    If you’re interested in monitoring the most popular posts depending on their social performance, then Buzzsumo and Ruzzit will be very useful.

    Buzzsumo is a very powerful platform for content analysis, and you can even use its free version to find the most shared posts in many categories, or a few of the most popular topics from a site of your choice. There’s also a browser plugin that analyses the performance of your own posts on social media, so you can focus on the topics your audience is most willing to share.

    Ruzzit compiles the most shared content on the web, and it’s very useful to be able to filter that content by type: videos, images, or articles.

    Bookmarking sites

    Reddit, Digg and the rest


    Reddit won’t disappoint you for content discovery, provided you’re willing to search through the right subreddits until you feel inspired enough to write about what you came across. Be careful, though: it’s addictive, and there’s a good chance you’ll end up in completely different subreddits from the ones you expected. There’s more information on how to do this here: How to use Reddit for content ideas.

    Digg is “what the Internet is talking about right now”, while you can also use StumbleUpon and Delicious for similar purposes, depending on what works better for you.

    Visual search

    If you’re looking for visual content to inspire you for your next post, then here are two sources you might need to consider.



    Imgur showcases the most popular images on the web, which may be useful when looking for the next meme that fits your content marketing strategy.


    As GIFs get more popular by the day (and we can’t stop using them), Giphy is the right resource for finding the perfect GIF to support your article or your social posts. It’s also useful to monitor the most popular GIFs from time to time, as a way to stay aware of what’s trending in this particular type of content.

    Analyse your content

    Inspiration may be closer than you think, which is why you should take a closer look at the performance of your existing posts. This will help you work out what your audience likes most from you, pointing you in the right direction for your next topics.

    Google Analytics

    Using Google Analytics, even at beginner level, allows you to learn more about your content, your audience and your niche, which is extremely useful when writing your next post.

    Have a look at your most popular articles, the search terms and the average time spent on each post, and decide how the data can help you find the inspiration you’ve been seeking.

    Repurpose your content

    When you feel like you’re running out of topics, go back to your existing content and think of new ways to use it. After examining the performance of your posts in Google Analytics, pick the most popular articles and see whether they can be expanded into new posts, serving as a follow-up to the existing content.

    What’s more, how about using your existing content to create new articles in different forms?

    For example, a list article with 10 ways to improve your content marketing could become a very appealing infographic to be shared on new platforms, linking back to your existing content. Furthermore, you could turn an old post into a video or a podcast, discussing the topic and adding new ideas to it. This still counts as newly found content inspiration.

    How to make content discovery easier

    Write down your content ideas

    Get ahead of your writer’s block and note down your ideas the moment you feel inspired. Even the simplest note-taking may be useful. Find the platform or app that works best for you, with Google Drive, Dropbox, Evernote, Trello and Notes being only a few of your choices.

    Save posts to read them later

    Yes, that’s what we covered above with Facebook’s saved posts, but there are many other platforms that help you return to content you previously discovered and read it at your own pace (or use it as a resource for your post).

    Pocket, Instapaper, Flipboard and many other apps facilitate this very useful habit and the easy mobile reading makes them even better.

    Automate content organisation

    If you still feel you’re not organising the content you’re reading, IFTTT can automate the process by combining your favourite sources into powerful recipes.

    Here are some examples of the combinations you could use to facilitate content discovery:

    • Automatically save the tweets you like to Pocket or Evernote
    • Add the articles you save on Facebook to a read-it-later app
    • Send new posts from your favourite RSS feeds straight to your notes
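    If you’d rather roll your own automation, the same “if this, then that” idea can be sketched in a few lines of Python (standard library only; the feed snippet and keyword list below are placeholder assumptions): scan a feed and keep only the items that match your chosen topics.

```python
# IFTTT-style recipe sketch: "if a new feed item mentions one of my
# keywords, then collect it". Standard library only; the sample feed
# and keywords are illustrative placeholders.
import xml.etree.ElementTree as ET


def matching_entries(rss_xml, keywords):
    """Return (title, link) pairs for RSS items mentioning any keyword."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):  # RSS 2.0 <item> elements
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        if any(k.lower() in title.lower() for k in keywords):
            hits.append((title, link))
    return hits


sample = """<rss version="2.0"><channel>
  <item><title>10 content marketing trends</title><link>https://example.com/a</link></item>
  <item><title>Quarterly earnings recap</title><link>https://example.com/b</link></item>
</channel></rss>"""

print(matching_entries(sample, ["content marketing"]))
# → [('10 content marketing trends', 'https://example.com/a')]
```

    In practice you would point this at a real feed and forward the hits to wherever you keep your reading list, which is exactly the kind of glue IFTTT provides without any code.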

    Listen to your audience

    Your readers, as well as your social followers, may be the right source of inspiration for your next post: their questions, comments, or even complaints help you understand their preferences, so you can provide better content in future posts.

    Finally, this is the process that usually works for me when looking for content inspiration (and I’d love to hear about yours):

    • Extensive reading and browsing (mostly through Twitter lists and Buzzsumo)
    • Saving the most interesting topics to Pocket
    • Writing down titles inspired by the posts I read on my content spreadsheet on Google Drive
    • Going back to the content spreadsheet to decide upon the titles that make the best posts
    • Searching Pocket for the relevant resources that initially inspired me and re-reading the articles
    • Closing all tabs and starting to write