As we reported at the end of March, Andrey Lipattsev, a Search Quality Senior Strategist at Google Ireland, revealed the three most important ranking signals used by Google.
These are content, links and RankBrain (although that last one is “hotly contested” and the list doesn’t have an order).
This information was uncovered during an excellent live Q&A from WebPromo, which also featured Rand Fishkin of Moz; Ammon Johns, Managing Director at Ammon Johns & Co.; and Eric Enge, CEO of Stone Temple Consulting. The whole thing was hosted by WebPromo’s Anton Shulke.
We’ve partnered with Anton to bring you a transcript of the entire one-hour long Q&A.
As you can imagine, it’s a very lengthy read. We have trimmed repetition and digressions, but there was so much brilliant chat between the group that we’ve decided to keep it in its 7,000+ word form.
The discussion covers:
- Removal of PageRank toolbar
- Click-through rate as a ranking signal
- Google’s top ranking signals
- Machine learning vs spam
- The state of SEO outside of the US and Western Europe
- The new mobile-friendly update
Now set aside half an hour, pour a cup of coffee and enjoy.
A massive thank you to our staff writer Rebecca Sentance, who spent a huge portion of her day transcribing this video.
Removal of the PageRank toolbar
Ammon Johns: Why remove the PageRank toolbar if PageRank is still a part of Google’s ranking algorithms – which of course, we all believe it is – even if it wasn’t used on its own?
Andrey Lipattsev: I get it. And it’s a really good question – I promise you I do have a couple of answers, which I hope are reasonable – but let me ask you a question back. Why do you think it was useful in the first place? Why was it a good thing to have?
Ammon: When it was still being updated, it gave me a rough idea of what the crawl priority of a site might be. And for that reason it was useful to me, because if a client comes to me and they’ve been following some SEO advice – they’ve had the blog, they’ve had the news feed, they’ve built up thousands and thousands of pages – but they’ve got a Toolbar PageRank of two, I’m already thinking…
‘With a PageRank of two, they’ll be lucky to get a thousand pages regularly spidered and indexed every month, given that there are so many news sites, so many things that Google has to pick up all of the time; this just isn’t going to be a high enough priority to have this number of pages. They may be giving Google more than it can digest, and therefore detracting from where the value is – the core pages.’
So it had a use there. It had a second use, which was more financial. Lots and lots of bad SEOs used to base their strategy on Toolbar PageRank. As long as they were doing a bad job, there was more work for people doing a good job, like myself.
Andrey: By all means. And I think the second reason alone would have been sufficient to get rid of it. But I mean, if that was the only reason, we probably would have tried to keep working on that.
You know, it wasn’t so much removed as it died a natural death, more than anything. Nobody was looking at it, nobody was developing it, because it wasn’t bringing very much value internally. Essentially it became so out of date that – going back to Ammon’s first point about its usefulness – that all went away, leaving only the second value there.
So it was no longer a valid benchmark for a site’s usefulness, for a site’s likelihood to be crawled more often, or ranked well, because a) it was out of date, and b) there were a lot of other things in place where – you’re saying Toolbar PageRank two would have made it less likely to be crawled? Not really.
There’s a lot of other stuff in place, and PageRank two could have been ranked above PageRank eight very easily depending on what else was going on.
Ammon: That’s ranking, though – I’m not talking about ranking, I’m just talking about indexing, and I found the correlation there, across thousands of sites, was very high.
Andrey: But that link between Toolbar PageRank and indexing is so tenuous to me that I’m not even sure it ever really existed.
It was supposed to be a reflection of the actual PageRank of a page, and that has no bearing on how often the page gets crawled.
Ammon: If you gave us the actual PageRank, I’d have used that, believe me, but you wouldn’t share that. I did ask!
Andrey: I can understand, you know, the second-degree links between the page’s PageRank and how often we’d come back to it, but there are a lot of other things in play there that also need to be taken into account. So no matter what you’re thinking about, whether crawling or ranking… it a) has gradually become just one thing out of very many, and therefore not reflective of the real picture; and b) has stopped reflecting what it was supposed to reflect in the first place. So it wasn’t very useful.
And as I said, c) it became something that it was never supposed to be: it became something of a currency for some SEOs. And I’m not saying everybody was doing it, but clearly, as you yourself acknowledged, a lot of people started using it like that, and a lot of SEO contracts would be ‘I will get your PageRank to this’, which is kind of meaningless, really.
And so, what I was going to tell you beforehand is: this reduced its meaningfulness. And also, we’re hoping that the improved stats we’re providing now – for example the improved search analytics report in Search Console – are the stats worth looking at. They are the bots you’re looking for.
You know, your clicks, and your impressions, and the queries and the pages; that’s what you should be looking at, and as an owner, as an SEO, comparing it to other people if you have the data, and so on and so forth. And build your strategies, and your analysis on this kind of data, not just on one number, which was kind of neither here nor there.
Not to mention the fact that, last but not least, I have not seen that famous toolbar, in which it was supposed to be a plugin, on anyone’s computer for a long time. Granted, I am a very biased sample, because most people around me have Macs and Linux machines anyway, but I haven’t seen anyone with that toolbar with the thing in it.
Click-through rate as a ranking signal
Eric: I want to talk a little bit about click-through rate, and I want to talk about it from a couple of perspectives. Rand, at SMX Munich, I believe, ran a fresh test that showed at least a temporary movement of the ranking of an item that they were trying to promote by sending a lot of clicks to the page, where it kind of jumped in the rankings, and then over time it came back down.
And then in addition, Paul Haahr gave a keynote at SMX West in which he walked through ‘How Google Works’, and he talked for a while about click-through rate, more in the context of controlled tests by Google: [Google] will roll out some algorithm change, and one of the things it might look at is the user click-through rate on the revised search results, in order to see whether that’s a better result.
And then he also explained that the reason why Google doesn’t use it as a general ranking factor is because it’s too gameable, but in this controlled test environment, it allows you to use user interaction with the result in the way of measuring search quality, to decide whether to roll out a new algorithm update.
And just to finish my rather complex question, the point of that is that it seems to me that if you’re using click-through rate as a main measurement of other ranking factors, to better measure search quality, it doesn’t really matter to me that much whether it’s a direct ranking factor or an indirect ranking factor. It still is used in evaluating search quality.
Andrey: To be honest, I think I’m kind of with you there, and what Paul said, I’m not going to disagree with Paul, but also what we’ve been saying before. I think, if you look through the majority of our past comments on this topic, you won’t find anything to the contrary.
We do, if you like, use that as a factor to assess our quality, and treat it as you like, fair enough, in that sense, if you’d like to… It’s just, it’s very important, because you know how headlines go. Tomorrow’s headlines from anybody who watches us today will be, ‘Google uses behavioural factors for ranking!’ And you know what people will interpret from that.
Because on the one hand, yes, in the sense that you described; on the other hand, no, in the sense that most people understand.
So it’s very important to come in with a slightly more complicated explanation. Somebody commented to me the other day about being able to answer yes or no to questions, and I told them that some questions are not yes-or-no questions; phrasing them in a yes-or-no fashion doesn’t make it possible to give a yes-or-no answer.
So anyway, coming back to what you said, I think you described it pretty accurately.
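The evaluation pattern Eric and Andrey are describing – click behaviour used to compare two rankings in a controlled test, rather than to rank any individual page – can be sketched roughly like this. This is purely a toy illustration with invented data and metric names; it is not Google’s actual methodology.

```python
# Hypothetical sketch: click-through behaviour evaluates a *ranking change*
# in a controlled test, rather than acting as a per-page ranking factor.

def ctr_at_k(click_logs, k=3):
    """Fraction of sessions with at least one click in the top k results.

    click_logs: list of sessions, each a list of 1-based clicked positions.
    """
    if not click_logs:
        return 0.0
    hits = sum(1 for clicks in click_logs if any(pos <= k for pos in clicks))
    return hits / len(click_logs)

# Toy click logs for the same queries under a control ranking and an
# experimental ranking.
control_clicks = [[1], [2], [], [1, 4], [3]]
experiment_clicks = [[1], [1], [2], [1], [1, 2]]

# If users click high-ranked results more often under the new ranking,
# that is (weak) evidence the change improved result quality.
improved = ctr_at_k(experiment_clicks) > ctr_at_k(control_clicks)
print(improved)  # True for this toy data
```

The real systems would involve far more than one metric (position bias, query mix, long versus short clicks), but the shape of the argument is the distinction Andrey draws: behaviour evaluates the algorithm change, not the page.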
Rand: Why is it the case that, seven or eight times in the last two years, I’ve done something just for a little fun: I’ll be standing on a stage in front of 500 to a couple of thousand people, and I’ll ask them, ‘Hey, can you all jump on your mobile phones, or on your laptops, and do a search? I want you to click the seventh, eighth, ninth or tenth result, and then over the next 24 hours let’s observe what happens to that query, and what happens to that page’s ranking for that query.’
I’ve had seven or eight of those that have been successful, and I’ve had four or five where the ranking did not change. And I’ve run a few of them over Twitter, again, a few where the ranking did change, and changed pretty quickly, and usually sticks around a day or two, and a few where nothing happened.
So in your opinion, what’s happening there that’s making the ranking for the page that gets clicked on change so rapidly, and then what’s happening when it falls back down, again relatively rapidly, over the next two to three days?
Andrey: It’s hard to judge immediately without actually looking at the data in front of me. My best guess would be that it’s the general interest you generate around that subject – by doing that, you generate exactly the sort of signals that we are looking out for: mentions, and links, and tweets and social mentions, which are basically more links to the page, more mentions of this context. I suppose it throws us off for a while, until we’re able to establish that none of that is relevant to the user intent.
Eric: So back to the other part of my question, I just want to acknowledge that, I agree that a lot of people might run off and say, ‘oh my god, click-through rate is a ranking factor in a more general sense!’
And personally, I understand why that would be bad, because it’s so gameable, but if you look at it more holistically, it seems to me that engagement signals and ways that users interact with content, using that as an indirect thing where you’re using it to qualify search quality, so you can pick other ranking signals that reflect that well…
As I internalise that, it gives me I think more ammunition to point people to make better websites and better webpages. And that’s the reason why I was asking the question, because to me I like to be able to show people why that’s so important and why they should think to themselves that this will help them, over time, with their SEO.
Not because you’re employing it directly in your algorithm, but because you’re using it to qualify your algorithm.
Andrey: Eric, I think you’re absolutely right, and your message to the people that you work with is absolutely spot on.
I think it’s already significant enough for people to take into account that we look at what users do on our search results pages, as we change them, to evaluate the usefulness of those changes. You know, when Google changes the algorithm next time and my page gets exposed, people like it or don’t like it, come to it or don’t come to it – I should probably pay attention to how people respond to what I’m trying to offer them, as it appears in Google search results.
The disadvantages that I’ve most often seen described for this approach on a clear, pure ranking factor basis is that we’d need to have broad enough and reliable enough data about bounce rates, click-through rates, depth of view for the vast majority of pages and the vast majority of websites everywhere, in order to be able to make meaningful comparisons all the time.
That is impossible, because we don’t have the technical means to do it. Even when you think about Google Analytics, far from everybody has a Google Analytics code, so we can’t use that.
If we don’t use that, what else are we going to use? We could start trying to come up with something, but it’s always going to be a struggle.
Going back to the original idea of links, you can kind of reasonably say, ‘I see a page, I see the links on that page, I see where they’re going’ – if we can see them, granted they can be no-followed, they can be hidden, that’s like little things, but by and large, they’re here, we can see them, we can use them. The words on the page.
This stuff, any reasonable person with a bit of experience could think of ways how you could measure user behaviour for particular pages and particular websites, but measuring it web-wide…
Rand: Andrey, correct me if I’m wrong, but you don’t need it particularly web-wide, you only need it on the search queries, right? So Google sees that, on average…
Andrey: Right, but is that enough? Imagine we only ever measured the links for pages that appear in our results. Or, like, the top 10 of our results. So you end up with a very small subset of everything that’s out there, and the worst thing about it is you end up reviewing only that, because it’s the only thing that ever gets clicked on, because it’s the only thing that ever gets shown.
You can find ways to get out of it, I agree with you. You can find ways around it, and solutions to it, but this is the reason we’ve been saying it’s a tough challenge. It’s gameable on the one hand, and it’s a tough challenge to actually make a very strong signal out of it.
If we solve it, good for us, but we’re not there yet.
Rand: I mean, I think that you have filed some patents, have written some papers about using pogo-sticking, and pogo-sticking certainly seems like a very reasonable way to measure the quality of search results. If something gets a lot of clicks, and then people click ‘back’, or they click on something else, clearly that result didn’t fully satisfy them. So that seems like a very reasonable user approach.
Andrey: That is a reasonable approach; it is one of them. As patents go, that’s an interesting idea, but take into account: what if the nature of the query is such that you won’t behave that way? You’re comparing things; you want to see this one, and you want to see another one, and then you want to make a decision.
So all these things come into account… We experiment and explore other ways, not just this. Can we look at what queries are? And can we just go back to the basics of what’s the content on the page? How can we understand that better? And how can we understand the entities on the page? And so on.
I think there can be more research done into user behaviour factors, and how we use them well. But there’s also like a million other avenues of research, and maybe some of them will be more promising.
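The pogo-sticking idea Rand raises can be sketched as a simple metric over click logs. Everything below – the field names, the dwell-time cutoff, the data – is an invented illustration of the concept, not how Google actually measures it.

```python
# Hypothetical sketch of "pogo-sticking": a click followed by a quick
# return to the results page suggests the result did not satisfy the query.

QUICK_RETURN_SECONDS = 10  # assumed cutoff for "bounced straight back"

def pogo_rate(events):
    """events: list of (url, dwell_seconds, returned_to_serp) tuples."""
    clicks = len(events)
    if clicks == 0:
        return 0.0
    pogos = sum(
        1 for _, dwell, returned in events
        if returned and dwell < QUICK_RETURN_SECONDS
    )
    return pogos / clicks

events = [
    ("example.com/a", 4, True),     # quick return: counts as a pogo-stick
    ("example.com/a", 120, False),  # long dwell, no return: satisfied
    ("example.com/a", 6, True),     # another pogo-stick
    ("example.com/a", 45, True),    # returned, but after a real read
]
print(pogo_rate(events))  # 0.5
```

Andrey’s caveat applies directly to a naive version of this: for comparison-style queries, returning to the results page is normal behaviour, so a raw quick-return rate would mislabel satisfied users.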
What are the top signals Google uses for ranking?
Ammon: RankBrain has become this new keyword that everyone’s latched onto; I’m seeing already companies that are selling ‘We’re the SEOs that have taken into account the latest RankBrain upgrades’ – despite the fact they can’t possibly, because we’re all still examining what it does, what its limitations are, and there’s no way of knowing that from the outside completely, especially since it seems to combine with several others.
Now, my understanding of Hummingbird is that it led to this – that Hummingbird brought in more context, more of the idea that the meaning of the query was more important than the words of the query. And I think the natural consequence of that is that there are times when it isn’t. There are times when the way we’ve worded it is very specific, and it seems that RankBrain is one method of being able to spot this – Gary Illyes’ example was the word ‘without’. That one word was the most important word in the query. ‘Can I complete this without such-and-such?’ ‘Without’ couldn’t be changed.
So… Is that kind of where we’re going with this? We’ve heard that this is the third most important signal contributing to results now. Would it be beneficial to us to know what the first two are? Could webmasters build better sites if they knew what the first two are?
Andrey: Yes; I can tell you what they are. It’s content, and links pointing to your site.
Ammon: In that order, or another order?
Andrey: There is no order. Third place is a hotly contested issue. I think… It’s a funny one. Take this with a grain of salt.
It was obviously up to Greg how he chose to phrase it, and I understand where he was coming from, being somebody who worked on it. The way I interpret his meaning is that if you look at a slew of search results, and open up the debugger to see what has come into play to bring about these results, certain things pop up more or less often. Certain elements of the algorithm come into play for fewer or more pages, in fewer or more cases.
And so I guess, if you do that, then you’ll see elements of RankBrain having been involved here – rewriting this query, applying it like this over there… And so you’d say, ‘I see this two times as often as this thing, and two times as often as that other thing.’ So it’s somewhere in number three.
It’s not like having three links is ‘X’ important, and having five keywords is ‘Y’ important, and RankBrain is some ‘Z’ factor that is also somehow important, and you multiply all of that… That’s not how this works.
The way we can look at it in a useful way is that we are trying to get better at understanding natural language, and applying machine learning and saying ‘What are the meanings behind the inputs?’
It’s still early days; we cannot claim that typed queries, whether mobile or desktop, have noticeably subsided or are going away. But more and more, people are interacting with their devices using voice, so we can expect the use of stop words – words like ‘without’ – more often.
People still tend to be a lot more mechanical and overthink their queries a bit – as I tend to, anyway – and think, ‘Okay, so what is a query that is completely not human in nature, that sounds like something the machine would understand?’ I don’t know if you guys catch yourselves doing that. You don’t generally type a question to Google as you would ask a real person.
Ammon: I do structured queries; what’s the most important concept? Right, I’ll put that first… What’s the modifier to that?
Andrey: You’ve got to admit that in that sense we are pretty advanced users; a little bit outside the norm. As a company, I guess we’re not so much looking to support people like us, because I think we’ll figure our way out, but to start supporting people who are just joining the net. People to whom, for example, the mobile experience is the first experience, and the Google search application is their first experience of interacting with the web and getting answers to their questions.
And they don’t know that you need to say robotic words and omit commas, dots and stop words; they just speak. And they’ll say, ‘How can I complete Mario Kart without cheating?’ They’re not going to think about the word ‘without’ in that sentence, so it’s up to us to figure out what’s behind all of that.
I think there was a bit of conversation on Twitter, as well, with Gary involved, about ‘So what does this affect?’ Does this affect indexing, does this affect ranking, and Gary was trying to say ‘Well no, it doesn’t affect ranking, it’s not a ranking factor’… It becomes a very complex conversation when you get to that.
Ammon: How about ‘everything affects ranking. Otherwise there’s no point having it.’
Andrey: Well ultimately, I guess so, yes; ultimately, even your webpage’s accessibility affects ranking, because if we can’t access them, we can’t rank them, even at that level. It doesn’t affect the ranking of an individual page, but what it does affect is our understanding of a query.
So once our understanding of a query changes, we’re more likely to throw something different as a result. That’s the effect that it has. But it’s not the same effect as knowing there’s so-and-so many links pointing to that page, or knowing there’s such-and-such words on that page. That’s something else.
And I think you mentioned Hummingbird – the very initial roots of that are in the synonymisation attempts, understanding synonyms better and replacing them. The old way, back then, was just libraries – to some extent static libraries. This is much more interesting. Also, as Greg has said, and some of the other guys have said: we don’t know what it’s doing. We’re not supposed to know, I guess, because it’s machine learning; that’s the whole point. You throw it out there and it does its stuff.
But the ideas behind it are the same as when we started talking about the Knowledge Graph and introduced it, and that became very much part and parcel of our search results; people expect it to be there, people expect it for entities.
Three years ago – I remember quite vividly; I think it must have been 2013 – I was at a conference in Russia and we introduced this for the first time. It was a bit of a struggle to explain: ‘What is it that we’re doing, and what are we doing with all these new things? What are “entities”, and how are they different from words?’
Now people expect it. You’re looking for an actor, you’re looking for a city, you’re looking for an event; yeah, you want to see that card, or have a thing from a source, follow links, yeah – that’s the experience.
So this is hopefully enhancing that experience, allowing us to understand what else has that potential, what else can be explained in such a way, what else can be understood better. Negative queries, as Gary called them, that’s definitely one thing that hopefully is going to work better from now on.
Eric: There’s been so much confusion that’s spun out of this, and there’s some things that I think we can perhaps help people understand a little better here. So, as you described it, Andrey, and there’s conversations I’ve had with Gary about it and other Googlers.
‘Better understand language, to help us better match queries with appropriate webpages, from a relevance perspective.’ Put a period, end of sentence. ‘Doesn’t take over other ranking factors’ – is that a fair top-level assessment?
Andrey: I suppose, yes? Probably have to see that in writing to make sure I fully understand every single word of it and how they fit together, but yes, generally speaking, I’m there with you.
Rand: Can I slightly disagree with that last sentence? This is just based on what Gary and I tweeted back and forth the other day, which is – I think the idea of RankBrain is that it can re-order or re-weight ranking elements, in order to produce more relevant results.
Andrey: But I mean, at the very simple level, you know, in the example that Gary gave – Mario Kart without cheats – it’s not the weight you assign to the word ‘without’ in that query, and its presence on the pages you’re going to show for this, or not going to show for this.
It’s hard to kind of structure this answer clearly… When you look at it like that, you’re almost prone to saying ‘Yeah, it is a ranking factor because now we’re going to grab on to those pages that do have this word’ but it’s not just about the word ‘without’, it’s about the context of the whole discussion around this page.
It’s looking at what’s on the page, but also how well it met the expectations of people who then interacted with it. Our guess that this particular word indicated a particular intent – was that the right kind of guess? And gathering that data continuously, and then saying, ‘You know what, fair enough. Now we’re confident that these words, or these combinations of words, indicate a particular intent, so we need to pay attention to such-and-such content.’
And it doesn’t necessarily mean that all of a sudden, links are more or less important for this page or that page. They’re still as important as they were, but now there’s also this other thing, on top of everything we know, we also know that for this query, this is particularly important.
Machine learning vs spam
Rand: Obviously RankBrain is one of the places where you’re most public about it, I think another place where Google is very public about using machine learning is in image search, and it’s really cool what you’ve been able to do, identifying locations of images and features of images.
A few years ago there was some talk about whether to use, or would Google use machine learning in webspam, and would they use it in other ranking factors, like around links or content? Are there any of those elements that you can answer and say, ‘Yes, we have been using machine learning in other factors like spam, or links, or content, or something else, and that’s been useful to us because XYZ’?
Andrey: I can say that we’ve definitely been looking at using machine learning across the board. Including webspam and other areas.
In webspam, I can’t point towards any particular huge success we’ve had, when using this immediately, but I’m quite confident that we’ll continue exploring. I’m quite confident that even the effects of trying to apply this have been very beneficial, just learning from that.
SEO outside of the Western market
Rand: One of the things that frustrates a lot of SEOs and marketers – I know a lot of folks who, for example, speak about SEO in the US, or in the UK, will travel – I’ll travel, for example, to Eastern Europe, and I’ll speak at an event there, and people will come up to me, and they’ll say, ‘Oh yeah, all that stuff about focusing on quality and content, that sounds really nice, but in fact buying links here is still really effective. And that’s what works for me, and for my clients, so… this whole “quality content” thing you’re talking about hasn’t made its way to our country, our region yet.’
Andrey: I think that Anton knows very well what you’re talking about, and I do too. I’m going to Ukraine in like a week and a half, I’m going to hear a lot of it. And then I’m going to be in Belarus afterwards, and I’m going to have the full kind of… I get it every time. We try to be as connected as we possibly can to the majority of our markets; we’re definitely connected to the Eastern European ones.
So thank you for this question, because I’ve actually got quite a bit to say about this. Let me try to get through as much as I can, and actually maybe hear from you guys as well.
First of all, it’s what people say works and doesn’t work. What is spam, per se, and what isn’t? Sometimes there are techniques which are borderline, which we haven’t begun to understand fully yet, and whose application or use we are maybe not fully aware of globally.
And therefore they might be applied locally somewhere, and be working there, and we just haven’t gotten around to saying, ‘Okay, you know what, we should probably take a look at that and stop it from working, because it doesn’t make any sense.’
So that’s number one. Sometimes things have just not yet been defined as spam. And sometimes you look at stuff where, for example, speech comes into play, and our being a reflection of the internet comes into play, and you go, ‘You know what, we do have to draw a line somewhere.’ At a certain point, yes, it is our responsibility to say, ‘This is beyond low quality, this is harming our users, we’ll take active measures.’ Whereas other things are kind of weird, kind of not nice, but not really harming anybody, so you know, let them be.
A great example – you were talking about buying links? I’m kind of hoping that in Eastern Europe, and particularly the further east you go, where we have really strong competition, the fact that our competitors are also our colleagues as far as webspam fighting is concerned, and that they’re taking active measures against a lot of this stuff as well, will help reinforce the message that this really doesn’t work – and definitely doesn’t work long term. Investing in short-term gains just doesn’t pay off; even if you win for a couple of days, or even a month, it doesn’t make any sense for anybody serious about their business.
This is the second thing I was going to say: as much as we’d like to – and when I say ‘we’, I mean both us and maybe other search engines as well – we can’t control the macro-economics of some of the markets where we operate. And especially in the more volatile ones, people will still go in for the short-term gain, because they’re not able to plan beyond a couple of months; they have no idea what’s going to happen – not to the internet or to Google, but to the economy – in a couple of months. The mindset becomes ‘rank now, make money now, who cares afterwards’.
And it’s really tough to operate in that environment. That’s not to say it’s less tough in more developed markets – there are other challenges there, and the financial crisis happened too – but still, it’s easier to work with content that’s there for the long run, with creators who are there for the long run, with whom you can have a meaningful dialogue and try things and see what works, where everybody understands you’re in this for the long haul and more or less tries to make it work for everybody.
It’s more difficult when people are forced into making short-term decisions. And then you have to kind of react in a measured way as well, because you need to realise that there are businesses, and livelihoods, and organisations behind the websites, so you need to measure ‘Okay, so what do we do in this situation?’
The other one is a famous one – I don’t know if that quote ever made it to the English part of the internet, but some of you might know him; he was involved most recently in our Twitter integration efforts. Speaking at, I think, an interview with one of the Russian publishers years ago, he gave a very funny example of a query with no results – something like buying a bucket in a small rural city somewhere in Russia; something you wouldn’t expect anybody to sell there. And there were no results for it. I think it became a bit of a meme, the whole thing.
But the point behind this is that in a lot of Eastern European markets – and not just Eastern European; I think it’s an issue for the majority of the BRIC countries, and for the Arabic-speaking world – there just isn’t enough content compared to the percentage of the internet population that those regions represent.
I don’t have up-to-date data; I know that a couple of years ago we looked at Arabic, for example, and there the disparity was enormous. If I’m not mistaken, the Arabic-speaking population of the world is something like 5-6%, maybe more – correct me if I’m wrong – but the amount of Arabic content in our index is very definitely several orders of magnitude below that. So we just don’t have enough Arabic content to give to our Arabic users, even if we wanted to.
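The disparity Andrey describes is easy to make concrete. Both figures below are invented for illustration: his 5-6% user share is itself a hedged guess, and the index share here is purely hypothetical.

```python
# A toy calculation of the under-representation Andrey describes.

arabic_users_per_10000 = 550  # "something like 5-6%" of internet users
arabic_pages_per_10000 = 5    # assumed index share, well under 1%

# Ratio of user share to content share: how many times more Arabic
# content would be needed to match the user base.
underrepresentation = arabic_users_per_10000 / arabic_pages_per_10000
print(underrepresentation)  # 110.0
```

Even with made-up numbers, the shape of the problem is clear: any page in an under-served language faces almost no competition, which is exactly the exploit Andrey goes on to describe.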
And you can exploit that amazingly easily; if you create a bit of content in Arabic, whatever it looks like, we’re going to go, ‘Well, we don’t have anything else; we’ll serve this.’ And it ends up being horrible. And people will say, ‘You know, this works! I curate the hell out of this page, buy some links, and there it is: number one.’ But there’s just nothing else to show. So yes, you’re number one – and the moment someone actually goes out and creates high-quality content that’s there for the long haul, you’ll be out, and that thing will be in.
One of the things we try to do – and in particular this is why I’m travelling there – is really work with those markets and provide the means to those webmasters and SEOs and marketing agencies: the tools, the guidelines, the best practices; here’s what you can do, here are the tools that Google gives, and so on and so forth.
Rand: So as those of us from this Hangout and many others are going over and chatting with folks in regions outside of our home countries, where maybe this sentiment exists, would it be your assertion that generally speaking, the algorithms that reward content types here in the US, or in Western Europe, will make their way to other countries as well, other language groups, over time?
Andrey: I think it’s fair to say that the algorithms are already there, it’s just that the content we’re seeing is… I don’t want to offend anybody, I don’t want to say that the kind of content that these algorithms now exist for, that kind of content doesn’t exist.
The technologies used, and the approaches used, they’ve moved on so much, as you guys know, in some parts of the world and – again, ‘English-speaking’ is a very broad name, but in the Western English-speaking world they’ve moved on so much, and we’ve adapted to them, and that opens up what you might call holes, or exploits, for other types of content and other types of behaviour. And we’ll patch them, and we’ll plug them, it’s just that obviously they’re less of a priority, because we want to keep pace with the new developments, and the cool stuff.
Everyone was very excited a year ago when we started thinking about introducing mobile friendliness, and of course we were obsessed with how we’re going to present this, and the reaction that people will have, and thank you guys, some of you, for creating memes about that as well.
By and large, it kind of had a positive reception; by and large, it worked out the way we wanted it to work out, and we try to make sure that it does. But it was funny how for some of us – me included – the first thought was, ‘This is awesome’, and the second thought was, ‘Okay, so how are they gonna game this? What is going to be the first page that cloaks mobile-friendliness instead of being mobile-friendly?’ And the thing is, other people will go, ‘Yeah, but that’s, like, more work.’
Andrey: It depends, because it depends what you’re going for. I can think of scenarios where it makes sense to invest in some sort of crazy self-learning algorithm that generates not-really-mobile-friendly pages and then shows them somewhere, rather than just doing this normally, for normal clients, and so on.
It’s like this with everything – now we are looking with a lot of interest into what Accelerated Mobile Pages will be like, and what the uptake on that is going to be – are they going to be successful or not? We’re banking on them, we wish them success and we want to make them successful, but it’s going to be like that again. A lot of big publishers globally are porting over, trying to do something interesting here, seeing how it works, and then part of my brain is going, ‘Okay, so does this mean that AMP-ified spam is going to rank better now? And what are we going to do about this?’, you know?
It’s like that with everything, and as I said, we’re facing the fundamental difficulties that we’re facing, the volatile economic system which forces people into short-term decisions, and generally speaking – and you guys are posting in the chat that ‘money speaks English in the Arabic world’ – to a certain extent I guess it does, and you can say the same for China, or Russia… Yeah, clearly it speaks English, but if it only spoke English then all of us would also only speak English. The other languages still exist, and clearly people are interested in creating and maintaining that content, and I wish that this balance that exists right now corrected itself a little bit. And that would definitely solve some of the spam issues as well.
Ammon: It’s been a personal bugbear of mine for years – this isn’t the technical code side of SEO, but I really wish more people would realise that, y’know, they’re creating a self-fulfilling prophecy in doing this. The number of emails I get in very poorly written English from people who are determined to write content in English for an English market, because it gets more market share, or it’s… Look, if you’re writing in your own language, you’ve got a much less competitive environment, and you are lifting the local economy. How will that situation ever change if you keep playing to it?
It’s a bit like India – fantastic programmers, all working for other countries, making other countries rich. India isn’t really in a much better position than it was 15 years ago.
Andrey: We’ve never lost focus on the developing world, but this year we are really gunning for the BRIC countries, and countries like them that are in the process of development, across all the departments of Google. As far as search is concerned, hopefully throughout this year you’ll see some really interesting developments there: initiatives and projects that we launch that will hopefully enable creators to create content in their language, for their local users. And you just mentioned India; they’ve got more languages there than I can even count, really – probably on a par with Europe.
Nobody thinks of it that way – ‘India, yeah, they just speak English’. They don’t. Like a fifth of the population speaks English. And the rest have a lot of stuff that they could share with each other, but they don’t. Or at least they don’t to the extent that they could. And we’re gonna try to do as much as we can to help change that.
Eric: Right; it’s worth remembering, too, that in search – where you can basically enter an arbitrary query of any kind – how difficult it is to have a ready answer for every possible query a human might put in, in their native language, when there aren’t that many people creating content in that language. It’s a fascinating way of thinking about the whole problem.
Rand: Yeah, for sure. Real quick on this front – You basically said, Andrey, that there’s sort of a global algorithm, that it considers inputs the same, universally – Is it the case that there are, especially when it comes to webspam, specialised things that fight particular kinds of spam in different regions or languages or countries? Or is that a myth, that when something’s applied, it’s applied universally?
Andrey: As far as algorithmic solutions are concerned, I don’t think so. Nothing springs to mind; as far as our algorithmic webspam solutions are concerned, all of them that I can think of are global. Which is sometimes part of the problem, because that’s the thing – some of the problems are very local. In fact, it could sometimes be a bit of a vulnerability, that the solutions need to be global, and you just have to algorithmically ignore some of the problems.
It doesn’t mean we ignore them completely, which is why the many webspam-fighting teams exist, and why we have them covering the vast majority of the big markets that we work in; that is the solution there. The things that we do, for example, in Russia or Poland, given the environment that exists there, are not the same as the things we do in France or Germany. And so on.
New mobile-friendly update
Eric: Any comment on the plans for this upcoming mobile update? Anything that you might be able to share; I’m not going to pick on anything particular.
Andrey: I don’t have a number to put on that – it’s kind of happening already, we have said we are now going to treat mobile-friendly pages even better; we already did. Some of you have commented, ‘I didn’t see the results!’ Well, hopefully you’ll see them now. Again, very often, and especially in non-English markets, people comment, ‘Hey, I still see a lot of non-mobile-friendly pages in results’. Part of that is because there’s not enough content that has become mobile-friendly, so you’d need to kind of show whatever else is available.
But with the growth in that content, we’re kind of saying, ‘Okay, we can bank on mobile-friendliness a bit more now, because we’ve seen a lot more pages come online, and become available for mobile devices, let’s do that.’ The other exciting thing is that, you know, we have a new smartphone user agent that will start running around the web from the 18th. I personally like the fact that we’re kind of keeping up with that and updating that as well.
Eric: Yup. And that’s supposed to start rolling out in May, is that right?
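For site owners who want to verify that crawler in their server logs: Google’s announcement described a new smartphone Googlebot that identifies itself as a Nexus 5X device. As a rough sketch, here is how you might flag such requests in Python – note that the exact user-agent string below reflects the 2016 announcement and may change over time, and the helper function name is our own, not anything Google provides.

```python
# Sketch: spotting Google's smartphone crawler in access logs.
# The UA string is the one Google announced for the April 18, 2016
# switch (a Nexus 5X running Chrome 41); check Google's current
# documentation before relying on it.
GOOGLEBOT_SMARTPHONE_UA = (
    "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 "
    "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

def is_smartphone_googlebot(user_agent: str) -> bool:
    """True if the UA claims to be Googlebot *and* a mobile browser."""
    return "Googlebot" in user_agent and "Mobile" in user_agent

# Desktop Googlebot lacks the "Mobile" token, so it is not flagged.
print(is_smartphone_googlebot(GOOGLEBOT_SMARTPHONE_UA))  # True
```

A user-agent check like this only tells you what the request claims to be; anyone can spoof the string, so for anything security-sensitive Google recommends a reverse-DNS lookup as well.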
Source: Search Engine Watch RSS