cheka.
25th April 2017, 11:17 AM
holycost gets multiple mentions. must.stop.free.speech/thought
http://searchengineland.com/googles-project-owl-attack-fake-news-273700
Project Owl & problematic content
Project Owl is Google’s internal name for its endeavor to fight back against problematic searches. The owl name was picked for no specific reason, Google said. However, the idea of an owl as a symbol for wisdom is appropriate. Google’s effort seeks to bring some wisdom back into areas where it is sorely needed.
“Problematic searches” is a term I’ve been using for situations where Google is coping with the consequences of the “post-truth” world. People are increasingly producing content that reaffirms a particular world view or opinion regardless of actual facts. In addition, people are searching in enough volume for rumors, urban myths, slurs or derogatory topics that they’re influencing the search suggestions that Google offers in offensive and possibly dangerous ways.
These are problematic searches, because they don’t fall in the clear-cut areas where Google has typically taken action. Google has long dealt with search spam, where people try to manipulate its results outside acceptable practices for monetary gain. It has had to deal with piracy. It’s had to deal with poor-quality content showing up for popular searches.
Problematic searches aren’t any of those issues. Instead, they involve fake news, where people completely make things up. They involve heavily-biased content. They involve rumors, conspiracies and myths. They can include shocking or offensive information. They pose an entirely new quality problem for Google, hence my dubbing them “problematic searches.”
Problematic searches aren’t new but typically haven’t been a big issue because of how relatively infrequent they are. In an interview last week, Pandu Nayak — a Google Fellow who works on search quality — spoke to this:
“This turns out to be a very small problem, a fraction of our query stream. So it doesn’t actually show up very often or almost ever in our regular evals and so forth. And we see these problems. It feels like a small problem,” Nayak said.
But over the past few months, they’ve grown into a major public relations nightmare for the company. My story from earlier this month, A deep look at Google’s biggest-ever search quality crisis, provides more background about this. All the attention has registered with Google.
“People [at Google] were really shellshocked, by the whole thing. That, even though it was a small problem [in terms of number of searches], it became clear to us that we really needed to solve it. It was a significant problem, and it’s one that we had I guess not appreciated before,” Nayak said.
Suffice it to say, Google appreciates the problem now. Hence today’s news, stressing that it’s taking real action that it hopes will make a significant difference.
Improving Autocomplete search suggestions
The first of these changes involves “Autocomplete.” This is when Google suggests topics to search on as someone begins to type in a search box. It was designed to be a way to speed up searching. Someone typing “wea” probably means to search for “weather.” Autocomplete, by suggesting that full word, can save the searcher a little time.
Google’s suggestions come from the most popular things people search on that are related to the first few letters or words that someone enters. So while “wea” brings up “weather” as a top suggestion, it also brings back “weather today,” or “weather tomorrow,” because those are other popular searches beginning with those letters that people actually conduct.
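The mechanism described above can be sketched as a popularity-ranked prefix lookup. This is only an illustration of the general idea: the `query_log` data and the `autocomplete` helper below are invented for the example, and Google's real system operates at vastly larger scale with many additional signals.

```python
from collections import Counter

# Hypothetical query log; in reality this is aggregated from billions of searches.
query_log = [
    "weather", "weather", "weather today", "weather tomorrow",
    "weather", "weekend events", "weather today",
]

counts = Counter(query_log)

def autocomplete(prefix, k=3):
    """Return the k most popular logged queries starting with `prefix`."""
    matches = {q: n for q, n in counts.items() if q.startswith(prefix)}
    return [q for q, _ in Counter(matches).most_common(k)]

print(autocomplete("wea"))  # ['weather', 'weather today', 'weather tomorrow']
```

Because suggestions are ranked purely by how often real people typed them, any query that enough people search for, including an offensive one, can surface as a suggestion, which is exactly the problem described here.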
Since suggestions come from real things people search on, they can unfortunately reflect unsavory beliefs that people may have or problematic topics they are researching. Suggestions can also potentially “detour” people into areas far afield of what they were originally interested in, sometimes in shocking ways.
This was illustrated last December, when the Guardian published a pair of widely-discussed articles looking at disturbing search suggestions, such as “did the holocaust happen.”
More emphasis on authoritative content
The other and more impactful way that Google hopes to attack problematic Featured Snippets is by improving its search quality generally to show more authoritative content for obscure and infrequent queries. It’s a change that means all results, not just the snippets, may get better.
Google started doing some of this last December, when it made a change to how its search algorithm works. That was intended to boost authoritative content. Last month, it added to that effort by instructing its search quality raters to begin flagging content that’s upsetting or offensive.
Today’s announcement is about republicizing those changes, to give them fresh public attention. But will they actually work to solve Google’s search quality issues in this area? That remains to be seen.
Is the authority boost working?
A search for “did the Holocaust happen” today sees no denial sites at all in the first page of Google’s results. The results had been dominated by them last December, when the issue was first raised. In contrast, at the time of this writing, half of Google rival Bing’s top 10 results are denial listings.
Success for Google’s changes! Well, we don’t really know conclusively. Part of the reason that particular search improved on Google is that there was so much written about the issue in news articles and anti-denial sites that sprang up. Even if Google had done nothing, some of that new content would have improved the results. However, given that Bing’s results are still so bad, some of Google’s algorithm changes do appear to have helped it.
For a similar search of “was the holocaust fake,” Google’s results still have issues, with three of the top 10 listings being denial content. That is better than Bing, where six of the top 10 listings contain denial content, or eight if you count the videos listed individually. At least with both, no denial listing has the top spot.
Why an authority boost can help
Google’s ranking algorithm generally rewards pages that seem to be the best contextual match for the exact words searched on. For example, if you were looking for something very specific, such as a solution to a weird computer error, an obscure forum discussion about that error might be a better match than a page from a popular computer site that’s talking about errors generally.
Unfortunately, that same approach might be bad when it comes to problematic searches. It might be why pages trying to argue that the Holocaust was faked or a hoax would come up over more general pages about the Holocaust — because those denial pages were more contextually related to the exact search.
With the change, my guess — and it remains only my guess — is that Google is boosting the ability for authoritative content to rank better against contextually explicit content. That means a page from Wikipedia about Holocaust denial, as well as other authoritative pages about the Holocaust generally, might perform better.
How is Google learning from the data to figure out what’s authoritative? How is that actually being put into practice? Google wouldn’t comment on these specifics.
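The trade-off guessed at above, contextual relevance versus source authority, can be sketched as a simple weighted blend. This is purely illustrative: Google has not disclosed its actual formula, and the `combined_score` function, its weights and the example scores are all invented for this sketch.

```python
def combined_score(relevance, authority, authority_weight=0.5):
    """Blend a page's contextual relevance to the exact query with its authority.

    Raising authority_weight lets an authoritative general page outrank
    a contextually exact but low-authority one.
    """
    return (1 - authority_weight) * relevance + authority_weight * authority

# A denial page: very close contextual match for the query, low authority.
denial = combined_score(relevance=0.9, authority=0.1)
# A Wikipedia page on Holocaust denial: weaker exact match, high authority.
wiki = combined_score(relevance=0.6, authority=0.9)

assert wiki > denial  # with authority weighted in, the authoritative page wins
```

With `authority_weight=0` the same two pages rank the other way around, which mirrors the before-and-after behavior described in the article.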