Facebook & the Way Forward
--
“One of the biggest challenges to our democracy is the degree to which we don’t share a common baseline of facts. We are operating in completely different universes. Your biases become reinforced over time. That’s what’s happening with Facebook. At a certain point you just live in a bubble, and that’s part of why our politics is so polarized right now.”
- Barack Obama [1]
I. Introduction
Democracy functions on the precept that truth is maintained through freedom of expression and thought.[2] Democratic nations have enacted laws that protect freedom of expression by keeping public discourse — a society’s means of communication — from state interference. These laws were intended to protect the free-flowing exchange of ideas in the hope of arriving at a sense of truth that strengthened the exercise of democracy. Because truth is essential to democracy, we have historically entrusted the power of speech — as it scales, free from government control — to bring truth to bear. That power is called into question, however, by the prevalence of beliefs based on objectively incorrect information, or “misinformation.” Research confirms that those “who possess inaccurate information may form opinions that differ substantially from the opinions they would have formed were they correctly informed…[and] prevent them from seeking new information.”[3] The democratic value of our freedom to speak, as it reflects the autonomous will, requires that we not be blinded to a meaningful presentation of what is.
As a democratic forum, the public sphere transforms individual opinions into public opinion and, in so doing, links politics to society.[4] The deliberation of the public sphere — which provides arguments and proposes solutions — now takes place in significant part online, through platforms like Facebook. As such, Facebook’s architecture sets the conditions under which individual wills combine to create public opinion. Since its inception, Facebook’s architecture has prioritized a strict construction of freedom of speech, a principle upheld for its democratizing potential. Yet insofar as it maintains an unrestrained license to promote anything online, this construction of speech has created a vehicle for illiberal actors to pursue anti-democratic outcomes. Bad actors have systematically manipulated the platform’s social position to amplify hate, exacerbate polarization and promote the decline of meaningful media consumption under cover of existing theories of freedom of speech.
II. Evolving Our Understanding of Free Speech
How is it that in the context of free speech — the value we have prioritized as the ultimate precept for a healthy democracy — false narratives are nonetheless able to so brutally persist? Charlatans and megalomaniacs have sought since time immemorial to build their fortunes on promises of inclusivity to vulnerable people seeking a place in the world. The difference is that today, these anti-democratic forces benefit from widespread levels of vulnerability, access to powerful channels of distribution and, perhaps most critically, the declining impact of a free and informed press.
Prior to the rise of news consumption through social media, a free and vibrant press played a critical role in facilitating the exercise of speech to reveal truth. Critically, news media outlets competed with one another for distribution based on credible reporting. False or even questionable stories would be challenged through competitive reports, and a distrusted publication would lose its readership over time. Although law functioned as a floor to prevent reporting on certain types of unverified content, the scope of liability was kept narrow in favor of promoting free speech.
As people began to consume news as a part of participating in the information economy online, traditional news media lost the power of its distribution. In keeping with the pace of digital distribution, people also developed new patterns for immediately processing information shared — and thus psychologically validated — by their networks, which increased resistance to counter-narratives. In addition, news was now being disseminated through algorithms that were engineered for a very different purpose: to show people content that they wanted to see. This process, which can be thought of as “information matching,” stands in direct contrast with the role of a free press. By presenting facts irrespective of ideological affiliation, the press forces citizens to negotiate with an understanding of events that might not map onto their existing worldview. This process of exchange, or the “marketplace” in which ideas are forced to barter with new configurations of facts, is essential to preserving the value of free speech as an engine for truth.[5] If, under the guise of free speech, we are instead allowed to become both buyer and seller of our own recycled beliefs, this construction of free speech no longer holds its value in enabling the democratic principle.
Freedom of speech, which was once a check on the exploitative proclivities of the ego-driven man, has since been co-opted to sow chaos. Facebook can no longer hide in the world of false equivalency, where facts are negotiable and all opinions present equal claim to truth. The company has not arrived at this juncture in a vacuum, but rather as a product of our larger political environment and technological evolution. For example, it has served political interests to maintain a world in which politicians are free to augment their positions with money and persuasive rhetoric — which can be manipulated — and unencumbered by facts, which cannot. Similarly, the demands of mobile connectivity in the information-heavy pace of modern life position us to seek new heuristics to make meaning of the world. This creates a vulnerability to short-form content, which in turn reduces our capacity to consider counter-narratives and poses an ongoing threat to truth.
A strict, myopic construction of speech that allows content upon content to battle it out in the quest for truth is outdated and ineffective in the digital world, and has resulted in carnage without victory. Anti-democratic forces have become too strong, and too willing to play dirty, for truth to be entrusted to today’s treacherous marketplace of ideas. Rehabilitating the forum for truth requires a structural intermediary to level the playing field, and Facebook is one of the entities best positioned to lay this groundwork by restructuring how its platform will be used.
The task for Facebook going forward is to assume the responsibility of its position and design its platform to introduce a baseline of fact as a non-negotiable part of this new media environment. The company’s leaders have expressed reluctance to act as “arbiters of truth” in the interest of speech, yet while truth is an ongoing process, facts can be known. As with matters of agency in general, it is not a question of fault but rather of responsibility; contrary to media narratives, Facebook does not shoulder the blame for where we are, which is a thornier issue and hardly productive to pursue in retrospect. However, it continues to profit from platform misappropriation that harms users while operating as part of a structure that limits the fact-driving power of the press, and it is thus responsible for taking steps to mitigate the harm.
III. Facebook and Choice Architecture
Facebook can no longer ride out the claim that it is a “neutral” platform for all ideas, when this alleged demonstration of neutrality is ultimately a choice that skews outcomes. Adopting a theory first set forth by scholars in the field of behavioral law and economics, this paper treats Facebook as a choice architect: an entity responsible for organizing the context in which people make decisions.
A critical implication of choice architecture is that there is no such thing as “neutral” design.[6] Choice architecture determines structure, which impacts outcomes one way or another, for better or for worse.[7] Recognizing its role in determining outcomes, Facebook must strive to critically engineer its platform for the better.
This concept finds further application in the context of the Internet through Lawrence Lessig’s seminal argument that “code is law.”[8] In addition to being a private business, Facebook is ultimately a core communication infrastructure of the digital economy. As such, its design can either embed or displace our democratic values, as it zones access to information and determines who sees what. Given its position in public life, Facebook’s coding decisions regulate, either implementing our values or working against them. These decisions depend on its incentives. This paper attempts to frame Facebook’s incentives in terms of its mission statement, to give people the power to build community and bring the world closer together, based on the underlying position that democracy is the fundamental condition that enables this mission.[9]
Despite an earnest albeit haphazard attempt to maximize the democratic ideals of liberty, expression and choice, Facebook’s architecture has served to undercut these values and destabilize our democratic core. If Facebook is incentivized to enhance access to democracy, it will require coding for an environment that best enables people to exercise their human rights — most notably, the right to free expression, grounded in a careful understanding of the true purpose of freedom of speech. An optimal choice architecture for News Feed would improve the ability of users to consume truthful information, based on the normative determination that truth is essential to put democracy into practice.
IV. Data Monetization: Advertising
Facebook operates as a multi-sided exchange. On one side, it provides a service for its users, and on the other side, it sells attention supplied by its users (collected and organized as various forms of data) to advertisers. Its relationships on either side are separate, which enables it to offer a free service to its users and maintain a dominant market position by monetizing access to the market for user attention.[10] One of the reasons platforms dominate markets is their ability to match advertisers — those seeking to pay for data-driven insights — with users — those providing the data — based on the economics of large data sets: large data sets yield better insights for those in the market for attention.[11] As an intermediary, Facebook’s algorithm processes the data it collects to target ads towards users who have demonstrated an interest in the products and services being sold. There are several points of vulnerability in this process that Facebook can take steps to address.
First, sham operations can take advantage of the algorithm to market fraudulent or non-existent products. Facebook’s algorithm will run trial and error as it figures out how to target the ad to similar users based on those who buy the product, after which sales shoot up. Fabricated celebrity endorsements and news reports are two of the most effective methods that scammers use to make false ads appear credible; these tactics can be so effective, for example, that even family members of the celebrity in question are liable to fall for the ads.[12] To address this problem, Facebook should introduce a system that requires further verification for any ads that promote a celebrity endorsement. After the algorithm flags one of any number of pre-coded celebrity and influencer names, it should then require the advertiser to submit an additional document of verification before targeting the ad. As a practical matter, public figures rarely offer unsolicited endorsements, in order to protect the integrity of their brand and paid sponsor relationships. As such, real endorsements are always verifiable through contract or verified content. Celebrities most often take to Instagram — owned by Facebook — to promote both endorsed and non-endorsed products and services, the latter of which they share organically to add value for their followers. In addition, Instagram has a strict process that verifies high-profile accounts. Facebook’s algorithm could cross-check Instagram posts made by verified celebrity/influencer accounts to confirm celebrity endorsements before allowing them to appear in ads on Facebook.
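As a rough sketch of how this verification gate might work, consider the following Python outline. Every name in it (the watchlist, the `fetch_verified_instagram_posts` lookup, the review outcomes) is a hypothetical placeholder rather than an actual Facebook or Instagram API; the point is the control flow: flag the endorsement claim, demand documentation, then cross-check the claim against verified Instagram content before the ad enters targeting.

```python
# Hypothetical sketch of the endorsement-verification gate described above.
# None of these names correspond to real Facebook or Instagram APIs.
from dataclasses import dataclass, field

# Pre-coded watchlist of celebrity and influencer names (illustrative).
KNOWN_PUBLIC_FIGURES = {"jane celebrity", "john influencer"}

@dataclass
class InstagramPost:
    caption: str
    tagged_partners: set = field(default_factory=set)

@dataclass
class Ad:
    text: str
    advertiser_id: str
    has_verification_document: bool = False

def fetch_verified_instagram_posts(figure: str) -> list:
    """Stub for a cross-platform lookup against the figure's verified
    Instagram account; a real system would query verified-account data."""
    return []

def mentions_public_figure(ad_text: str) -> set:
    """Return any watch-listed names that appear in the ad copy."""
    text = ad_text.lower()
    return {name for name in KNOWN_PUBLIC_FIGURES if name in text}

def review_ad(ad: Ad) -> str:
    """Flag endorsement claims, demand documentation, then cross-check the
    claim against the figure's verified Instagram content."""
    flagged = mentions_public_figure(ad.text)
    if not flagged:
        return "approve"  # no endorsement claim to verify
    if not ad.has_verification_document:
        return "request_documentation"  # e.g., a contract with the figure
    for figure in flagged:
        posts = fetch_verified_instagram_posts(figure)
        confirmed = any(ad.advertiser_id in post.tagged_partners
                        or ad.advertiser_id in post.caption.lower()
                        for post in posts)
        if not confirmed:
            return "reject"  # claimed endorsement not independently verified
    return "approve"
```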
A separate vulnerability in the ad targeting process is less direct. In an advertising exchange, users see ads based on the highest bidders for categories that they are associated with (i.e., commuter, millennial, sports) relative to the number of spots that are available for ads. Yet if users are only shown ads based on the highest bid for their pre-existing preferences, this process risks entrenching myopic viewpoints and hardening biased ways of thinking in ways that are likely to have a tangible impact over time. It also increases the potential for the system to be used by special interest groups to outbid competitors and target people who are vulnerable to their messages. These externalities both tax information diversity. In order to promote diversity, and the value it portends for truth and counteracting bias, Facebook might introduce a “diversity floor” into its advertising algorithm. It could do this by creating a mechanism that, every few days, places a lower-priced set of bids in a category in place of the top bids. In so doing, Facebook could operate to systematically expose users to a wider range of content that extends beyond their most easily monetized proclivities, which in turn has a positive impact on the environment for truth.
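A minimal sketch of how such a floor could slot into ad selection follows, assuming a simple price-ranked auction; the rotation period and floor share are illustrative assumptions, not disclosed Facebook parameters.

```python
# Hypothetical sketch of a "diversity floor" in ad-slot selection. The
# rotation period and floor share are illustrative assumptions.
import random
from datetime import date

ROTATION_PERIOD_DAYS = 3   # how often the floor kicks in (assumed)
FLOOR_SHARE = 0.2          # share of slots given to non-top bids (assumed)

def select_ads(bids, n_slots, today=None):
    """bids: list of (bid_price, ad) tuples for one user category.
    Returns ads ordered by price, with a periodic substitution of
    lower-priced bids into a share of the winning slots."""
    today = today or date.today()
    ranked = sorted(bids, key=lambda b: b[0], reverse=True)
    winners = [ad for _, ad in ranked[:n_slots]]

    # On rotation days, replace a share of the winning slots with ads
    # drawn from below the cutoff, widening the range of content shown.
    if today.toordinal() % ROTATION_PERIOD_DAYS == 0:
        pool = [ad for _, ad in ranked[n_slots:]]
        n_swap = min(int(n_slots * FLOOR_SHARE), len(pool))
        if n_swap:
            winners[-n_swap:] = random.sample(pool, n_swap)
    return winners
```

Because the substituted ads are sampled from below the price cutoff rather than chosen by bid, the floor widens exposure without simply rewarding the next-highest bidder.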
Although the primary impact is in the market for goods and services rather than ideas, a diversity floor also provides a back-end check on the impact of false and misleading political ads. By ensuring that users are exposed to a wider range of content, this algorithmic device could assist in creating better incentives for ad markets and creating more favorable conditions within the information ecosystem for truth.
V. News Feed and A Democracy in Peril
News Feed uses an algorithm that prioritizes content in people’s feeds based on Facebook’s data-driven assessment of what each user most wants to see and is likely to engage with. Algorithmic selection influences not only what we think about, but how we think about it and consequently how we act.[13] The core issue with News Feed is that by prioritizing content for user engagement, it places a premium on the emotional response that is generated by false and misleading content. Researchers have found that because false information is more sensational and novel than the truth, it energizes people to share more frequently and faster, including to gain attention for being perceived as “in the know” on a hot topic.[14] As more people experience outrage and respond to false content, News Feed’s algorithm systematically works to more prominently display this content to affiliated users, a process through which false narratives are consumed as fact.
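The dynamic is easy to see in miniature. In the toy Python example below, with invented engagement numbers, a ranker that sorts purely on predicted engagement reliably surfaces the sensational false item first for every affiliated user.

```python
# Toy illustration of the dynamic described above: a feed ranked purely by
# predicted engagement systematically favors sensational false content,
# because novelty and outrage drive more shares. All numbers are invented.

posts = [
    {"headline": "Measured report on budget talks", "predicted_engagement": 40},
    {"headline": "Outrageous (and false) rumor", "predicted_engagement": 90},
]

# An engagement-only ranker puts the rumor first for every affiliated user,
# and each new share feeds back into a higher engagement prediction.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
print([p["headline"] for p in feed])
```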
In the United States, News Feed vulnerabilities are enhanced in the context of democratic elections. To address this problem, Facebook has announced several changes. One is a partnership with the Associated Press (AP) during the upcoming 2018 midterm elections, to assist in identifying and debunking false and misleading stories.[15] Although the details of how this partnership will integrate into the platform have yet to be revealed, some suggestions are as follows. First, Facebook could update its Terms of Service to provide that, as a part of being able to share and interact with a wide range of politically affiliated content in the interest of promoting freedom of speech, users agree to see verified AP content on related topics integrated with their feed throughout the platform. Although this is not necessary to protect the company against any legal liability, it lends the partnership an air of legitimacy with both users and the media.
Problematic content that contributes to false narratives is often presented in insidious and ambiguous terms. It is difficult to remove this type of content outright, as it would place too heavy a burden on Facebook to make determinations absent hard facts. However, News Feed can take steps to systematically inhibit the cascade of this type of information. For example, content shared by users who have been flagged for posting false stories could enter the algorithm with a demerit, thereby requiring more in terms of traditional patterns of engagement for the content to rise. It could combine this design with a feature that places a premium on content shared by credible news organizations, or by users within networks that are consistently identified as sharing content from these sources. This interaction between demerits and premiums could, especially in the case of the most low-information and vulnerable users, work to display more credible content from a few outliers in their networks, even while many of their friends and family may be pushing disreputable sources to the contrary. Although this does not directly ensure that users will believe the more fact-based content, it does structure the environment to promote truth, which, as demonstrated by the theory of choice architecture, has an impact on the content and character of beliefs held over time.
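One way to express this interaction between demerits and premiums is sketched below in Python; the multipliers and signal names are illustrative assumptions, not Facebook’s actual ranking terms.

```python
# Hypothetical sketch of the demerit/premium interaction. The multipliers
# and signal names are illustrative, not Facebook's actual ranking terms.

FLAGGED_SHARER_DEMERIT = 0.5   # content from repeat misinformation sharers
CREDIBLE_SOURCE_PREMIUM = 1.5  # content from vetted news organizations

def rank_score(engagement_score: float, sharer_flagged: bool,
               from_credible_source: bool) -> float:
    """Adjust a baseline engagement score so that flagged sharers need
    disproportionately more engagement to rise, while content from
    credible sources rises on less."""
    multiplier = 1.0
    if sharer_flagged:
        multiplier *= FLAGGED_SHARER_DEMERIT
    if from_credible_source:
        multiplier *= CREDIBLE_SOURCE_PREMIUM
    return engagement_score * multiplier

# With these weights, a post from a flagged sharer needs three times the
# engagement of a credible-source post to reach the same feed position:
# rank_score(300, True, False) == rank_score(100, False, True) == 150.0
```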
Zuckerberg has indicated that, out of a desire to make people feel more connected through using Facebook, the company is making changes to News Feed to show users more content from their inner circles and less public content from outside sources.[16] This goal fails to account for the fact that most false information spreads directly as a result of being shared between close community members — family and friends — who live and work in the same circles and often tend to be predisposed towards the same ideas and beliefs. Designing News Feed for “more meaningful social interactions” would enhance the process by which false and biased information spreads and becomes taken as fact. It is this process of amplifying the in-group to the exclusion of all else that Facebook needs to deliberately counteract, rather than inadvertently enhance. This requires finding a way to prevent echo chambers by reaching people with an understanding of events that goes beyond the mechanics of tribal persuasion and is not maximized for what is most savory, easy or validating to see. Facebook is not just a platform for connection and must act from an awareness of this greater role.
Under this framework, content that is initially allowed — as questionable but not demonstrably false, in keeping with a wide latitude for freedom of speech — could later be subject to takedown if hard facts emerge that disprove the rumor or theory originally circulated. All users who shared, liked or commented on this content could receive a notification directing them to a “Takedown Reflection” (TR). TR content would need to be directed through a non-profit organization focused on enhancing truth in media. This organization could be funded by a consortium of Internet companies, either directly or through a multi-stakeholder intermediary, and could offer similar services to multiple platforms that deal in news.
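A minimal sketch of the TR notification fan-out follows, assuming hypothetical data structures; the stubbed notification function stands in for the platform’s real notification service, and the correction URL would point to content hosted by the non-profit.

```python
# Hypothetical sketch of the Takedown Reflection (TR) fan-out. The data
# structures and notification stub are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Post:
    sharers: list = field(default_factory=list)
    likers: list = field(default_factory=list)
    commenters: list = field(default_factory=list)

def send_notification(user_id, message, link):
    """Stub standing in for the platform's notification service."""
    print(f"notify {user_id}: {message} {link}")

def issue_takedown_reflection(post: Post, correction_url: str):
    """Notify every user who shared, liked or commented on the removed
    post, directing them to TR content hosted by the non-profit."""
    interactors = set(post.sharers) | set(post.likers) | set(post.commenters)
    for user_id in interactors:
        send_notification(
            user_id,
            "A post you interacted with was removed after new facts emerged.",
            correction_url,
        )
```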
VI. Accounting for AI: Looking Ahead
The AP sidebar proposed as a short-term measure to preserve election integrity in the months leading up to the 2018 midterm elections could serve as a test ground for developing a permanent overlay feature that appears atop News Feed as users scroll their feeds. So as not to be intrusive, the overlay could pop up as users hover over an icon that provides access to the screen. The overlay could ultimately be developed through AI to outline and link to rich trajectories of fact. Users could come to see this as a valuable feature that adds to their experience and better positions them to contribute to the conversation on Facebook. As Facebook has indicated a desire to promote meaningful social interactions on its platform, elevating the conversation by informing its users arguably works to enhance the quality of their interactions and the basis upon which they are connecting with one another. Relationships that withstand and enrich the quality of life beyond surface-level interactions require communication. Ensuring richer prospects for communication therefore promotes the underlying quality of relationships and helps Facebook to achieve this goal.
Amazon has implemented a structurally similar overlay called “X-Ray” for movies and shows on its Amazon Video service. With X-Ray, users can hover with the mouse on a browser, or tap the screen on a smartphone or tablet, to prompt extra information about the actors, location, music, or other details that provide a richer context for the content they are watching. The transparent design of the overlay allows the content to remain visible underneath. X-Ray draws its information from IMDB and adapts as the scenes progress to remain relevant to the content that is on-screen. IMDB is an extensive online database of information related to media content and personalities, with more than 250 million monthly active users, that is now owned by but operates independently from Amazon.[17]
Facebook could draw from the Amazon/IMDB model for X-Ray to develop a similar product that seeks to enhance how people consume news through its platform. As users read posts from their networks that reference world events, they could simultaneously have access to a fact-driven overlay that draws from a database to provide context for the story. Although it might be initially jarring for a user to tap or scroll and see an overlay appear, in time and with the advancing capabilities of AI it has the potential to become a value-add feature that users can rely on to learn more and engage on a deeper level with the content on their screens.
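On the back end, the overlay’s core lookup could be as simple as matching event references in a post against a partner fact database, as in the hypothetical Python sketch below; the database, the keyword matching and the entry format are all assumptions standing in for a far richer AI-driven system.

```python
# Hypothetical sketch of the overlay's backend: given the text of a post,
# match event references against a partner fact database (analogous to
# IMDB for X-Ray). The database and its entries are invented placeholders.

FACT_DATABASE = {
    # event keyword -> short, sourced context rendered in the overlay
    "2018 midterms": "U.S. congressional elections held on November 6, 2018.",
    "net neutrality": "FCC rules governing equal treatment of web traffic.",
}

def overlay_context(post_text: str) -> list:
    """Return the context entries relevant to a post, to be displayed in a
    transparent overlay when the user taps or hovers."""
    text = post_text.lower()
    return [context for keyword, context in FACT_DATABASE.items()
            if keyword in text]

# Example: overlay_context("Who else is voting in the 2018 midterms?")
# -> ["U.S. congressional elections held on November 6, 2018."]
```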
Facebook could develop this product in partnership with a non-profit or another company that — similar to IMDB — maintains an extensive database chronicling events. A dual benefit of this feature is that it is proactive in correcting misinformation — which is notoriously difficult to correct ex post — while also adding functionality beyond linear content that users might welcome and enjoy. Facebook has received positive feedback on tests of its “Related Articles” feature for making it easier for users to get context on the information that they see.[18] Building out an overlay that strives to help people make informed decisions about what to believe and share could increase Facebook’s value proposition and go far in preventing misinformation from being adopted into beliefs. The sooner misinformation is corrected in the modern media environment, before an attitude becomes ingrained, the more likely the correction is to be successful.
VII. Leading to Define a New Media Paradigm
Publishers make critical determinations about news stories based on their reporting efforts and are thus liable to demonstrate a level of truth in their reporting, depending on the nature of the content. This is not what Facebook does. Facebook is a platform for people to share their ideas and thoughts with one another. That the company should take steps to ensure a wider latitude for the impact of facts within this communicative process should not expose it to liability for content-based determinations. Although Facebook now finds itself responsible for a certain watchdog role that news media no longer has the distribution to carry out, its responsibility is to provide greater practical access to the non-disputed elements of physical reality that drive persuasive narratives in the news media, rather than moderating access to those narratives themselves.
While this may seem like a fine distinction under the existing regulatory framework, Facebook is in a position to crystallize this distinction into a new custom, which can in turn drive future applications of and amendments to the law. It is unbefitting of the company’s position as an innovator, or of its goals for connectivity, to continue catastrophizing the prospect of the billions in publisher liability that could result if it is seen as making content-based determinations. As a qualifying matter, First Amendment jurisprudence in the United States is very generous towards publishers and would fall even less stringently on an entity that rises, at most, to a publisher-intermediating role. This would be true even if the company were unable to avail itself of the safe harbor under Section 230 of the Communications Decency Act for providers of an “interactive computer service.”[19] Courts have so far accepted that algorithmic intermediation does not make a platform a publisher, and that these same processes of mediation deserve speech-like protections in their own right.[20] Under this framework, Facebook is in a strong position to reconfigure its platform for fact-based consumption while remaining outside the scope of publisher liability. Ultimately, Facebook has the leverage, capacity and incentive to drive the legal distinction between a media publisher that creates and distributes content and a platform that seeks to facilitate access to the world of content based on fact. Given the reality of our new media landscape, our institutions will be required to negotiate new expectations for the relationship between platforms and truth, and Facebook is in the best position to “move fast and break things” in leading the charge.
VIII. Conclusion
In its effort to operate as a “neutral” conduit in facilitating free expression, Facebook has overlooked the consequences of allowing unfiltered channels of speech based on misinformation and lies. It has mistakenly assumed that neutrality is possible when it comes to designing the content-based infrastructure for communication and news in a digital age. This position is no longer tenable if Facebook is to create the platform it indicates it wishes to build. Facebook cannot wait for the law to implement the changes that are required to enhance prospects for democracy. Instead, it needs to act to solidify customs that can serve as the basis for changes in the law. Its focus should be on designing its platform — how user-generated content and advertising is organized, shared and appears in front of users — to establish a much-needed baseline of fact in the sphere of information consumption online. What people make of facts is, as always, a matter of opinion, and the latitude to express those opinions in the public sphere should remain wide. Drawing a line at facts will always draw criticism from those who benefit from maintaining a false equivalency between all presentations of truth under the guise of free speech. It is not the least controversial or complex position, but it is the most courageous and forward-looking.
When a society’s operative construction of freedom of speech, as integrated with communications infrastructure, fails to uphold truth, it becomes necessary to reevaluate how to infuse democratic meaning back into the speech principle. Although this is not a role that Facebook asked for, and the consequences of the past few years are far from what it intended, it has found itself in this position and is responsible for implementing changes that will both uphold democracy and revitalize its prospects to do business in the world that it aspires to set forth.
[1] Obama, Barack. Interview with David Letterman. My Next Guest Needs No Introduction. Netflix, 2018.
[2] See Carmi, G.E. (2007) “Dignity — The Enemy From Within: A Theoretical and Comparative Analysis of Human Dignity as a Free Speech Justification.” Journal of Constitutional Law, Vol. 9:4, 970.
[3] Bode, L. & Vraga, E.K. (2015) “In Related News That Was Wrong: The Correction of Misinformation Through Related Stories Functionality in Social Media.” Journal of Communication, 65, 622.
[4] Rasmussen, T. (2008). “The internet and differentiation in the political public sphere.” Nordicom Review, 2008, 80.
[5] See generally Cohen-Almagor, R. (2017). “J.S. Mill’s Boundaries of Freedom of Expression: A Critique.” The Royal Institute of Philosophy, 92, 565–595 — for an overview of John Stuart Mill’s philosophy on the relationship between the “marketplace of ideas” and freedom of speech.
[6] Thaler, R. & Sunstein, C. (2009). Nudge. New Haven, CT: Yale University Press, 3.
[7] Id. at 96.
[8] Lessig, L. (2000). “Code is Law: On Liberty in Cyberspace.” Harvard Magazine. Retrieved from https://harvardmagazine.com/2000/01/code-is-law-html
[9] Zuckerberg, M. (2017, June 22). “Bringing the World Closer Together” [Blog post]. Retrieved from https://www.facebook.com/zuck/posts/10154944663901634
[10] Cohen, J.E. (2017). “Law for the Platform Economy.” UC Davis Law Review, Vol. 51, 146.
[11] Id.
[12] Faux, Zeke. (2018, March 27). “How Facebook Helps Shady Advertisers Pollute the Internet.” Bloomberg. Retrieved from https://www.bloomberg.com/news/features/2018-03-27/ad-scammers-need-suckers-and-facebook-helps-find-them
[13] Just, N. & Latzer, M. (2017). “Governance by Algorithms: Reality Construction by Algorithmic Selection on the Internet.” Media, Culture & Society, 39, 245.
[14] Smith, J., Jackson, G. & Raj, S. (2017, Dec. 20). “Designing Against Misinformation” [Blog post]. Retrieved from https://medium.com/facebook-design/designing-against-misinformation-e5846b3aa1e2
[15] Bach, N. (2018, March 8). “Facebook Has Enlisted the Help of This News Agency to Debunk Fake News During Midterm Elections.” Fortune. Retrieved from http://fortune.com/2018/03/08/ap-associated-press-fact-checkers-facebook-fake-news-midterm-elections/
[16] Mosseri, A. (2018, Jan. 11). “News Feed FYI: Bringing People Closer Together” [Blog post]. Retrieved from https://www.facebook.com/facebookmedia/blog/news-feed-fyi-bringing-people-closer-together/
[17] Perez, S. (2017, Feb. 3). “One of the worst comments sections on the internet is shutting down.” TechCrunch. Retrieved from https://techcrunch.com/2017/02/03/one-of-the-worst-comments-sections-on-the-internet-is-shutting-down/
[18] See Smith et al., supra Note 14.
[19] Section 230 of the Communications Decency Act provides that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” which has allowed platform companies to avoid exposure to civil liability for content disseminated on their platforms as they have grown over the last twenty years. See Communications Decency Act (1996), 47 U.S.C. §230.
[20] Cohen, supra Note 10 at 164.