Information, Privacy, and the Internet: An Economic Perspective

Susan Athey, Stanford Graduate School of Business
Although it is widely known throughout the Internet ecosystem that the prominence of results matters (anyone who designs a Web page and compares alternatives will quickly discover it), it is still worthwhile to quantify just how important prominence is. Thus, I conducted an experiment (Athey, 2013) to evaluate the impact of ranking. Prior to my experiment, most of the evidence available in the public domain was nonexperimental, and therefore could not identify the causal effect of position. Did the top link get clicked because it was the best link for the search query, or because it was in the top position?
What would have happened if the links in the top position and a lower position were reversed?
With a randomized experiment, where different users see different rankings of links or different layouts on the screen, it is possible to address this question more definitively, and to avoid trying to generalize from specific examples that may not be representative.
Search engines regularly run experiments to test out the performance of new algorithms. In these experiments, user searches are randomly assigned to either receive the “control” treatment—the baseline search experience—or one of a number of experimental “treatments,” where results are ranked or presented differently.
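The mechanics of such an experiment can be sketched as follows. This is a minimal illustration only: the arm names, the hashing scheme, and the reranking rule are hypothetical stand-ins, not the actual implementation used by any search engine. The key ideas are that each user is assigned deterministically to one arm (so the experience is consistent across searches) and that each treatment arm applies a different reranking of the same underlying results.

```python
import random

# Hypothetical experiment arms: a control plus treatments that demote
# the algorithmically best result to a given position.
TREATMENTS = ["control", "demote_to_3", "demote_to_5", "demote_to_10"]

def assign(user_id, salt="exp42"):
    """Deterministically hash each user into one arm, so a given user
    sees a consistent experience across all of their searches."""
    rng = random.Random(f"{salt}:{user_id}")
    return rng.choice(TREATMENTS)

def rerank(results, arm):
    """Apply the arm's treatment: move the top-ranked result down to
    the position encoded in the arm name (1-indexed)."""
    ranked = list(results)
    if arm != "control" and ranked:
        target = int(arm.rsplit("_", 1)[1]) - 1
        best = ranked.pop(0)
        ranked.insert(min(target, len(ranked)), best)
    return ranked

# The best result "A" is pushed from first to third position.
print(rerank(["A", "B", "C", "D"], "demote_to_3"))  # ['B', 'C', 'A', 'D']
```

Comparing click-through rates on the demoted result across arms then isolates the causal effect of position, since assignment to arms is independent of the query and the result's quality.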
In order to answer the question about the potential effects of manipulation, I worked with the Bing team at Microsoft to design a special experiment, analyzing the impact of several “treatments” in which we moved the best search result—the one that our algorithms would otherwise place first—to various lower positions on the search results page. The test ran for a few weeks, in the United States and overseas.
The data spoke very clearly about the impact of the treatments: A search engine can divert traffic from one website to another by manipulating the order of search results. In particular, moving the best result down just two positions (from first to third) reduced traffic to that site by half. The diversion effect becomes much more pronounced as a site is moved further down the page. A site that is moved from the first position to the tenth position typically will lose about 85 percent of its traffic. A site that is moved from the second position to the ninth loses about 75 percent of its traffic.
And the results were similar for all users, regardless of the amount of time they spent searching on the site.
If you look at the same results from the perspective of a site that gets promoted from a lower position to first in the rankings, the effects are even more pronounced. (This is because the site appearing further down the page has so few clicks to start with.)
A site promoted from fifth to first gets a 340 percent increase in visitors from search, and the results are similar when you focus only on users who go to the site and stick around for a period of time. Imagine telling a business that they can more than quadruple their customer base overnight! That is a very tempting thing to do for a search engine if the site it is promoting is its own affiliated website.
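The asymmetry between demotion and promotion effects is a matter of arithmetic: if demoting a site from first place to a lower position costs it fraction x of its traffic, the symmetric promotion multiplies traffic by 1/(1 − x). The following sketch illustrates this with a hypothetical loss fraction of 77% (an assumed value chosen for illustration because it implies a gain close to the roughly 340% reported above; it is not a figure from the experiment).

```python
def promotion_gain(demotion_loss):
    """Percent traffic gain from a promotion, given the fractional
    traffic loss from the symmetric demotion (pure arithmetic)."""
    return (1.0 / (1.0 - demotion_loss) - 1.0) * 100.0

# A hypothetical ~77% loss when demoted from first to fifth implies
# roughly a 335% gain when promoted from fifth to first.
print(round(promotion_gain(0.77)))  # 335
```

The same arithmetic explains why a 50% loss (first to third) corresponds to a doubling of traffic for the site promoted from third to first.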
Joaquin Almunia, the European Commission vice president responsible for competition policy, stated that this is not just an academic concern: “Google displays links to its own vertical search services differently than it does for links to competitors.” Vice President Almunia explained that “[w]e are concerned that this may result in preferential treatment compared to those of competing services, which may be hurt as a consequence.” The impartiality of search results will become all the more important in the years to come given that screen sizes on smartphones and tablets are smaller than on traditional PCs.
Smaller screens mean there is even less room for competing services to appear in mobile search results.
Search Engine Manipulation

Manipulation is an important issue for competition policy. Imagine that you have created an amazing new travel website. The website needs to attract consumers, and it is based in a country where a single search engine has 90% or more market share. How will you attract users who are interested in travel? At the moment a user enters a travel-related search query, that user is ready to do travel research. If the website is terrific, perhaps the search engine will send the user to the site! Or perhaps the user will click on the search advertisement you have purchased. Now imagine your dismay if you wake up one morning to find that the search engine has taken the most prominent part of the screen and embedded its own travel content: your link falls off the page, and your traffic plummets. The years of hard work and investment to make an innovative and brilliant product do not matter, nor does it matter whether your website is better than the content that replaced your link.
Simultaneously, the prices of your search advertisements skyrocket, rising above the profit you can earn from a click. Scenarios like this have been described by a variety of websites in different industries throughout Europe, and the European Commission opened an antitrust investigation into Google over this type of behavior.
In response to the EC’s concerns about manipulation, Google proposed a set of commitments in October 2013, which included the addition of a Rival Links box that would purportedly restore traffic to rivals. The EC rejected these commitments for their failure to end the preferential treatment of, and traffic diversion to, Google’s own specialized results. In January 2014, Google proposed a revised set of commitments. The EC is expected to make a final decision as to the merit of these commitments by the end of summer 2014.
In May 2014, the Open Internet Project (representing 400 companies, including major German and French publishers) announced they were suing Google for anti-trust violations. The group demanded a “ban of Google’s manipulative favouring of its own services and content.” In addition, issues of search manipulation are often brought up by regulatory bodies who investigate monopolistic behaviors and the appropriateness of vertical and horizontal mergers.3

There are not very many academic studies of the relationship between internet search and consumer welfare. Much of the existing theoretical work focuses on the interaction between search technology and the prices charged to consumers by advertisers.
Examples of such papers include Chen and He (2006), Armstrong, Vickers and Zhou (2009), and White (2008). White (2008) analyzes the tradeoff between high quality search results and paid search profit for the firm, where paid search profit depends on the profits that advertisers generate from consumers.
3. E.g., the UK Office of Fair Trading’s investigation into Google’s acquisition of BeatThatQuote, a provider of consumer finance comparison services.

Taylor (2010) develops a model where high quality algorithmic results divert clicks away from advertisements, creating an incentive for the search engine to degrade algorithmic search quality. He additionally finds that when consumers exhibit search engine loyalty, the incentive to manipulate leads to a ceiling on equilibrium search engine quality.
Athey and Ellison (2011) build a model of consumer search that analyzes a consumer’s decision to click on each successive link, formalizing the feedback effect between the quality of ads and the propensity of consumers to click on ads. Consumers are rational, understanding that higher quality firms bid more aggressively to attain the top positions. They use this model to analyze how market design decisions such as the level of reserve prices affect welfare for advertisers, consumers and the search engine, showing that a profit-maximizing search engine selects reserve prices in a way that sacrifices welfare in favor of extracting revenue from advertisers.
Building on Athey and Ellison’s (2011) model, Athey, Kuribko, and Richards (2014) analyze the incentives of a search engine to manipulate the rankings of algorithmic links. The model features rational consumers who recognize the possibility of manipulation and respond optimally in their search patterns. Despite the rational response of consumers, the paper shows it is still profitable for a search engine to manipulate. Consumers respond to manipulation by clicking on more links, which lowers their welfare but avoids harming the search engine, so long as not too many consumers give up. The incentive to manipulate is enhanced if the firm enjoys a quality advantage over other options, and thus can manipulate without losing too many consumers.
News Aggregators and News Consumption

Another context in which the way information is displayed matters a great deal is news. Internet search is one of the most important sources of referrals to news. Imagine for a moment that there was only one search engine, and it had a political bias. As
Timberg (2013) reported:
Google’s motto is “Don’t be evil.” But what would it mean for democracy if it was?
That’s the question psychologist Robert Epstein has been asking in a series of experiments testing the impact of a fictitious search engine — he called it “Kadoodle” — that manipulated search rankings, giving an edge to a favored political candidate by pushing up flattering links and pushing down unflattering ones. Not only could Kadoodle sway the outcome of close elections, he says, it could do so in a way most voters would never notice.
“Elections are won among low-information voters,” said Eli Pariser, former president of MoveOn.org and the author of “The Filter Bubble: What the Internet Is Hiding From You.” “The ability to raise a negative story about a candidate to a voter. . . could be quite powerful.”

With my coauthor Markus Mobius, I conducted another study (Athey and Mobius, 2012) to analyze the impact of news aggregators on the content users read and the diversity of that content.
The research was inspired by the popular debate about the role of news aggregators, which we will define here to include sites that do not produce much original content, but rather curate content created by others using a combination of human editorial judgement and computer algorithms. The results are presented with a few sentences and perhaps photos from the original article;
to read the full article, users can click through and go to the web site of the original content creator. Pure “aggregators,” such as Google News, generally do not make any payments or have any formal relationship with the original authors of the news content;
rather, they create their page by “crawling” the web and then using statistical algorithms together with editorial judgements to organize and rank the content. (Observe, then, that this is another context where having more data leads to better rankings, all else equal.) Only in a few cases does Google News have a direct relationship with the outlets (e.g. Google News had a relationship with the Associated Press, as analyzed by Goldfarb and Tucker (2011)). In contrast, sites like Yahoo! News and MSN primarily show content from contractual partners. Sites like the Huffington Post may use a hybrid strategy of curating blogs and aggregating news from other sources.
Why are aggregators so controversial? Only about half of page views on the Google News home page result in visits to any online newspapers; thus, users may read their news from Google News without ever generating any page views or revenues for any of the content creators. Clearly, this undermines the incentive of newspapers to invest in journalism. In addition, news aggregators can substitute for the home page of an online news outlet like the New York Times. The aggregator can index not just the content of the New York Times but all other news outlets, giving it an advantage in coverage. It may then replace the “curation” function that gives the New York Times its reputation.
The question of how aggregators impact consumption was first studied by Chiou and Tucker (2011). They study a “natural experiment” in which Google News had a dispute with the Associated Press and, as a result, did not show Associated Press content for about seven weeks. The paper has aggregate data on page views to Google News as well as the sites visited immediately after Google News, using views to Yahoo! News as a control. The paper finds that Google News is a complement to news outlets: taking the Associated Press content away from Google News led to fewer visits to news outlets (where Associated Press articles are featured).
Athey and Mobius (2012) consider a different natural experiment: Google News added local content in France in late 2009. The paper uses internet browsing data from a subset of internet users to analyze how the content of user consumption changes. They find, similar to Goldfarb and Tucker (2011), that introducing local content increases the consumption of news. The interpretation is that by exposing users to local news they might not otherwise have read, the aggregator increases users’ interest in news and thus their news consumption. However, the paper also finds that users become much less loyal in their local news consumption, and that the role of the newspapers’ home pages (and thus their editorial contribution to news curation) decreases.
Taken together, these studies reinforce the finding that the way news is presented, and whether it is presented at all, matters a great deal. The information intermediaries, such as search engines and aggregators, are enormously powerful in picking winners and losers on the internet, and in determining the informedness of the population, broadly defined.