Fake News and UK Regulation

With the rise of fake news, is the current UK media regulatory framework suitable for the new media landscape with regard to misinformation?

Introduction

This essay will discuss the current state of UK media regulation, the implications that new media and “fake news” pose for the area, and how these might be tackled. The first part will provide some background on the current landscape of traditional media regulation and a base-level academic critique of its functioning. The second part will set out what is meant by new media and provide a detailed assessment of what “fake news” is, from both a developmental and an academic perspective. Following this, the challenges posed by the different types of new media, and how these culminate in fake news, will be analysed in depth, considering arguments as to how fake news on the internet and on social media ought to be treated in relation to media and news publication. Finally, different proposals for regulating the new media space to tackle fake news in the formats identified will be critically evaluated on their merits, taking into consideration academic views on the direction the UK is heading. A conclusion will summarise what has been said throughout this essay and what the UK’s next step should be.

What is new media?

The term new media usually refers to forms of digital media that have arisen over the past decade, including many news websites operated solely on the internet, with content circulated to millions of people through social media. There are also organisations and individuals whose entire operation exists solely on social media, such as individual YouTube content creators, or teams of creators who publish their own news videos on YouTube. YouTube has over 1.8 billion regular users, which gives these creators a platform with access to billions of people globally. These videos are not fact-checked, and anyone can create a YouTube account with no vetting or identification required. The same goes for other social media and blog sites like Facebook, Twitter, Reddit, etc.[1] The law treats the users of these sites as individuals exercising their own speech, not as journalists. They are not regulated, and neither, in general, are the social media companies.

The term fake news seemed to enter the public consciousness in 2016, during the US presidential election, when Donald Trump commonly used the phrase to denounce news sources critical of him[2]. Different definitions exist, but generally it refers to fabricated information presented as news, with the effect of deceiving and misleading others into believing falsehoods. Media scholar Nolan Higdon defined the term as “misleading content presented as news and communicated in formats spanning spoken, written, printed, electronic, and digital communication”. In 2017, Wardle denounced the phrase, arguing that it was “woefully inadequate” at identifying the issue correctly, and used “information pollution” instead. She categorises three forms of this: mis-information, being false information distributed without harmful intent; dis-information, created and shared by those with harmful intent; and mal-information, the sharing of genuine information with harmful intent[3]. For the purposes of this essay, “fake news” will refer to all three. Since its popularisation, this type of fake news has been shared primarily over social media, specifically Facebook, and to a noticeable extent Twitter and YouTube.

As this is a new area, developing primarily on a novel and unregulated medium as described above, the UK does not have any targeted legislation or enforceable regulation of any kind policing fake news on the internet.

What challenges does new media present?

Facebook, YouTube, Twitter and Reddit are companies as new as they are big, with a combined market capitalisation of over 3 trillion dollars (most of which comes from the former two). These companies operate as websites on the internet, creating platforms for users to create accounts and interact with other users around the world. The main argument for holding social media companies to account as publishers is their massive influence over people’s media and news consumption. There are currently 2.85 billion monthly active users on Facebook, and on YouTube over a billion hours of content are watched daily. 45% of US adults and 33% of UK adults get their news almost exclusively from Facebook, placing Facebook third behind only the BBC and ITV[4]. Therefore, logic would suggest that if they do in fact provide people with news, and do so as one of, if not the, most used providers of news (in the US), they must be held to account as a news media source and, by extension, a publisher[5].

The clear argument against this, however, is that the companies themselves do not “publish” any news information, in the sense that the video or written content, the opinions espoused, and the information circulated have not been created by the company[6]. The companies claim to be mediums for people to voice their own opinions and views, personal or political. The question then becomes whether they have a say in the news their sites provide, and whether they take any kind of active role in it. Notably, Facebook has on multiple occasions neglected to take down false information, which would suggest that it does not purport to publish fact-checked news.

However, Facebook has in more recent times attempted to take down false information from its site, albeit with little success. Detractors and those advocating stricter regulation have argued that this attempt, notwithstanding its inadequacy, signals the company taking responsibility, getting involved and thus assuming a duty of care over the content on its website. It would follow that the company is implicated in the fallout of any “fake news” circulating on its site. Furthermore, many sites, including Facebook, Twitter, and YouTube, now have their own criteria or community guidelines upon which to ban users or take down certain posts[7]. The companies maintain that these guidelines are put in place mainly to regulate the spread of misinformation and to monitor and prohibit hateful or offensive content or speech, and that they are politically neutral. However, many political voices right of centre have recently accused these guidelines of being highly politically biased, primarily on the basis that many right-wing speakers have been banned from platforms like Facebook, Twitter and YouTube, often all at once in seemingly “targeted attacks”[8]. This contrasts with many opposing political commentators having published content contrary to the guidelines without consequence. If this were the case, and these companies did have politically based ethoses and guidelines, it would demonstrate that they are controlling the direction and ideological leaning of the content on their sites and deciding what and what not to publish. This would be contrary to the idea of free speech unimpeded by interference.

In the current landscape, there exist multiple up-and-coming grassroots collaborations and start-up news businesses that use social media sites to advertise their content and, in some cases, as their only medium for publishing content (for example YouTubers). In the midst of accusations of biased censorship, YouTube has decided to hire managers to work with both progressive and right-wing political publishers across all political content, hoping to crack down on supremacist content and conspiracy theories while allowing rounded political discussion and reliable information to flourish[9]. These managers will focus on “advising partners on YouTube channel development strategies and representing the political publisher landscape within the organization”. They would also work on “bringing issues to resolution” and “organiz[ing] programs and events to help political publishers best utilize YouTube”.

If they were to be regulated in this way, this would “damage people’s right to free speech on the basis that big social media is now the standard and medium through which all people voice their opinions”[10][11]. This raises the key question of whether there is a public interest requirement in treating these companies as publishers. For the majority of people, the ability to get their opinions to others in society, for which social media is now the accustomed medium, would be diminished if these sites were treated as publishers[12]. The other alternative would be to treat the individuals or organisations posting on the sites as publishers, although these are regular people without training, qualifications, or experience in publishing. It may not be conducive to free speech and a peaceful society to hold everyone’s every thought and statement up to journalistic standards.

Proposed reforms to media regulation

Social media companies are, as of now, self-regulated through their terms and conditions of use and their policies, as stated. The larger companies like Facebook are already signatories to a variety of self-imposed codes and industry collaborations. The UK government published a Code of Practice for providers of social media platforms under section 103 of the Digital Economy Act 2017. This, however, is merely guidance that asks these companies to put in place internal processes for the reporting of harmful conduct and to be transparent about their response actions. It should be noted that the principles set out in the code mostly just recommended practices that the larger platforms already had in place.

A possible answer to the issue of regulating social media, and to the fake news problem, may be an approach similar to IPSO, “whereby self-regulation and the empowerment of users is implemented to manage the issue of fake news”[13]. This would enable societal standards to create boundaries for the content of these sites while mitigating the risk to individuals’ freedom of speech posed by interference from strict regulatory bodies[14]. However, this is flawed, as the general public cannot fact-check, nor can they be expected to, and the issue would therefore have to be handled by an association of the companies themselves. Self-governance would potentially give the companies more freedom to allow discussion to take place (the freedom of speech interest) while ensuring they may take measures to combat false information, checking each other’s work so to speak. The companies would have to hold each other to a standard. However, given the current track record of companies like Facebook in policing false information on their sites, it is easy to see why this option would not be sufficient.

Furthermore, self-regulation can be seen as too soft, in the sense that there would be no third-party body holding the companies to account without sympathy for their interests. If self-governed, the companies can set their own standards in ways very favourable to themselves, ultimately giving themselves free passes[15]. It also removes a third party who can impose sanctions for breach, which would mean diluted accountability and no deterrent to ensure compliance. As such, a system similar to IPSO, or an association-based model of self-regulation, may not be the answer.

Another possible proposal for regulation suggests that there should be “structural regulation that focuses on systematic changes dealing with the”[16] “mechanics of algorithms, verification and bots”[17]. Social media companies already use algorithms to sort content and posts in users’ timelines based on their activity patterns and interests. These algorithms have also been used to recognise content contrary to the platforms’ guidelines. However, this is still largely a self-regulatory approach unless the algorithms were created by a third-party body[18].
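To illustrate, in the most general terms, the kind of mechanics referred to above, the following is a minimal sketch in Python of a toy feed-ranking routine combined with crude guideline-based flagging. It is a hypothetical illustration only: the post data, banned terms, interest weights and function names are all invented for the example, and it does not represent any platform’s actual algorithm.

```python
# Hypothetical toy example: engagement-based ranking plus crude keyword
# "guideline" flagging. Not any real platform's system.
from dataclasses import dataclass, field


@dataclass
class Post:
    author: str
    text: str
    topics: set = field(default_factory=set)


# Invented stand-in for terms a platform's guidelines might prohibit.
BANNED_TERMS = {"miracle cure hoax", "the election was faked by lizards"}


def violates_guidelines(post: Post) -> bool:
    """Flag a post if its text contains any banned term (simple keyword match)."""
    text = post.text.lower()
    return any(term in text for term in BANNED_TERMS)


def rank_feed(posts, user_interests):
    """Order posts by how strongly their topics overlap the user's interests.

    user_interests maps topic -> engagement weight, e.g. {"politics": 0.9}.
    Posts that violate the guidelines are filtered out before ranking.
    """
    allowed = [p for p in posts if not violates_guidelines(p)]

    def score(post: Post) -> float:
        return sum(user_interests.get(topic, 0.0) for topic in post.topics)

    return sorted(allowed, key=score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("newsbot", "Parliament debates the Online Safety Bill", {"politics", "uk"}),
        Post("spammer", "Doctors hate this miracle cure hoax", {"health"}),
        Post("creator", "New camera review is up", {"tech"}),
    ]
    for post in rank_feed(feed, {"politics": 0.9, "tech": 0.4}):
        print(post.author, "-", post.text)
```

Even in this toy form, the sketch shows why such mechanics remain self-regulatory: whoever writes the banned-term list and the scoring function decides what is suppressed and what is promoted, which is precisely the concern a third-party body would be intended to address.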

In January 2018, the Digital Charter, a business-government initiative launched by the UK government, was published; it was amended in April 2019. The Charter is fundamentally based on a number of key principles, namely that “the internet should be free, open and accessible”, “people should understand the rules that apply to them when they are online”, “personal data should be respected and used appropriately”, “protections should be in place to help keep people safe online, especially children”, “the same rights that people have offline must be protected online”, and “the social and economic benefits brought by new technologies should be fairly shared”. The Charter’s work programme, set out in its policy paper, lists the actions to be undertaken to achieve these fundamental purposes.

In February 2019, the House of Commons Digital, Culture, Media and Sport (DCMS) Committee published its Final Report on ‘Disinformation and fake news’, covering individuals’ rights over their privacy, how their political choices might be affected and influenced by targeted online campaign ads driven by algorithms, and interference in elections by those with malicious intent to sow lies, hate, and confusion via fake news[19]. The Committee concluded that it would be “difficult to prevent disinformation and fake news”, but that what was required was “enforcement of greater transparency in the digital sphere, to ensure that we know the source of what we are reading, who has paid for it and why the information has been sent to us”, as well as “information about how data is monitored, tracked and stored” via social media sites. This is where the focus must lie with regard to the UK’s future regulation of online fake news. Instead of arbitrary systems of loose guidance that tech giants can skate through, or cutting off free speech rights entirely with a regulatory body alone, this approach draws on individuals and allows the public to take back control of what they are consuming, by allowing them to see where information has actually come from, who the benefactors lurking in the shadows behind these ads and information content are, and why it has been put in front of them.

The aforementioned report included recommendations that existing legislative tools such as privacy laws, data protection legislation, and antitrust and competition law be utilised; that a compulsory Code of Ethics, as opposed to a merely recommended guideline, set out what constitutes harmful content on websites (including disinformation); and that an independent regulatory body be established with the ability to commence legal proceedings against companies in breach of said Code. This body would also be granted statutory powers to obtain information and impose large fines of eighteen thousand pounds for code violations. To supplement this, there would be changes to electoral law aimed at political campaigning techniques, concerning the move to online ads and billboards and microtargeted political campaigning. This “absolute transparency of online political campaigning” would include clear and legible banners appearing persistently on all paid-for political campaign ads, videos, and other content, identifying the source of the content and the advertiser paying for it. This would not only provide objective and impartial regulation of big tech companies which they cannot so easily circumvent, with a fitting punishment and deterrent in place (in the form of fines per breach and legal proceedings), but would also empower and educate the public to be aware of variables that could otherwise subconsciously influence them negatively.

April 2019 saw the UK government produce the Online Harms White Paper[20], with the ambition of making the UK “the safest place in the world to go online”. It built on the report on fake news and sets out the government’s proposals for dealing with “online harms”, defined as “online content or activity that harms individual users, particularly children, or threatens our way of life in the UK, either by undermining national security, or by reducing trust and undermining our shared rights, responsibilities and opportunities to foster integration”. Following this, the Online Safety Bill[21] was published as a draft Act of Parliament on 12 May 2021, and includes provisions to fully implement both the white paper and the DCMS’s report on fake news (including everything discussed above). This would frame social media companies as publishers and hold them to account.

However, the Internet Society, a non-profit organisation which supports an open, free and decentralised internet, has argued that the committee “has been too eager to ignore” the risk that legislative moves of this kind will undermine encryption[22]. On data protection and security, it argues that this will dangerously jeopardise people’s online privacy and give the government invasive powers over people’s data, undermining security. Robin Wilton of the Internet Society stated that “the findings released today are, sadly, a reflection of a public debate largely framed in misleading and emotive terms of child safety” and that “as a consequence, we see a bill that will result in more complex, less secure systems for online safety, exposing our lives to greater risk from criminals and hostile governments”.

Conclusion

In summary, fake news is more of a problem than it ever was, facilitated by the very fast rise of the internet and of social media platforms specifically, which over the past decade have taken over the world from their headquarters in Silicon Valley. This presents a plethora of challenges: however it is argued, social media sites have given huge platforms for news, information and opinions to be circulated to billions of people globally, and much of this information will be harmful, misleading, and/or malicious. An important component of this, however, is people’s right to free speech, which is now inextricably linked to social media due to the way it has transformed the world. The interest in regulating social media and tackling harmful fake news must be balanced against people’s right to free speech, and the UK currently has no such regulation in force. There are a number of different ways to achieve this goal, but the most reasonable and robust approach, and now also the likeliest to actually be enacted in the UK, is a dual system: a third-party regulatory body (in the form of Ofcom) which holds social media companies to account, coupled with an infrastructure for transparency that allows people to understand what is going on, even though this approach may entail some data protection and internet security concerns down the road.

Bibliography

Articles

Aldwairi and Alwahedi, “Detecting Fake News in Social Media Networks” [2018] PCS 215, 222

Balkin, “How to Regulate (and Not Regulate) Social Media” [2021] JFSL 26, 45

Brett M. Pinkus, “The Limits of Free Speech in Social Media” (2021) 3 UNT DCL

C Calvert, S McNeff, A Vining and S Zarate, “Fake news and the First Amendment: reconciling a disconnect between theory and doctrine” (2018) 86(99) UCLR 103

D Vese, “Governing Fake News: The Regulation of Social Media and the Right to Freedom of Expression in the Era of Emergency” [2021] EJRR 1

Flintham, Karner, Bachour, Gupta, Creswick and Moran, “Falling for Fake News: Investigating the Consumption of News via Social Media” [2018] CHFCS 1, 10

Goldberg, “Responding to Fake News: Is There an Alternative to Law and Regulation” [2018] SLR 417

Hargittai, “Potential Biases in Big Data: Omitted Voices on Social Media” [2018] SSCR 10, 12

Isar Khan, “How can states effectively regulate social media platforms” (2021) OBLB

K. Sabeel Rahman, “The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept” (2017) 39(5) CLR

Pennycook and Zittrain, “The science of fake news” [2018] PSN 1094, 1096

T McGonagle, “Fake news: false fears or real concerns?” (2017) 35(4) NQHR 203–09

Yerlikaya, “Social Media and Fake News in the Post-Truth Era” [2020] IT 177, 19

Websites

Dean Baker, “Why Is Facebook, the World’s Largest Publisher, Immune to Publishing Laws?” (Truthout, 19 July 2019) accessed 24 November 2021

Department for Digital, Culture, Media & Sport, “Online Harms White Paper” (Gov.uk, 2020) <https://www.gov.uk/government/consultations/online-harms-white-paper> accessed 09 January 2021

Julia Alexander, “YouTube is hiring managers to work with political creators” (The Verge, 16 August 2019) accessed 24 November 2021

Maxwell, “Administrative Law” (Legallaw.com 2005) <http://www.legallaw.com> accessed 11 January 11

Tech, “Online Safety Bill: New offences and tighter rules” (BBC 2021) accessed 09 January 2021

Footnotes

[1] Yerlikaya, “Social Media and Fake News in the Post-Truth Era” [2020] IT 177, 19

[2] Pennycook and Zittrain, “The science of fake news” [2018] PSN 1094, 1096

[3] T McGonagle, “Fake news: false fears or real concerns?” (2017) 35(4) NQHR 203–09

[4] Flintham, Karner, Bachour, Gupta, Creswick and Moran, “Falling for Fake News: Investigating the Consumption of News via Social Media” [2018] CHFCS 1, 10

[5] Balkin, “How to Regulate (and Not Regulate) Social Media” [2021] JFSL 26, 45

[6] Aldwairi and Alwahedi, “Detecting Fake News in Social Media Networks” [2018] PCS 215, 222

[7] Dean Baker, “Why Is Facebook, the World’s Largest Publisher, Immune to Publishing Laws?” (Truthout, 19 July 2019) accessed 24 November 2021

[8] Hargittai, “Potential Biases in Big Data: Omitted Voices on Social Media” [2018] SSCR 10, 12

[9] Julia Alexander, “YouTube is hiring managers to work with political creators” (The Verge, 16 August 2019) accessed 24 November 2021

[10] Media essay plan

[11] C Calvert, S McNeff, A Vining and S Zarate, “Fake news and the First Amendment: reconciling a disconnect between theory and doctrine” (2018) 86(99) UCLR 103

[12] Brett M. Pinkus, “The Limits of Free Speech in Social Media” (2021) 3 UNT DCL

[13] Media essay plan

[14] D Vese, “Governing Fake News: The Regulation of Social Media and the Right to Freedom of Expression in the Era of Emergency” [2021] EJRR 1

[15] Goldberg, “Responding to Fake News: Is There an Alternative to Law and Regulation” [2018] SLR 417

[16] Media essay plan

[17] Isar Khan, “How can states effectively regulate social media platforms” (2021) OBLB

[18] K. Sabeel Rahman, “The New Utilities: Private Power, Social Infrastructure, and the Revival of the Public Utility Concept” (2017) 39(5) CLR

[19] Maxwell, “Administrative Law” (Legallaw.com 2005) <http://www.legallaw.com> accessed 11 January 11

[20] Department for Digital, Culture, Media & Sport, “Online Harms White Paper” (Gov.uk, 2020) <https://www.gov.uk/government/consultations/online-harms-white-paper> accessed 09 January 2021

[21] Tech, “Online Safety Bill: New offences and tighter rules” (BBC 2021) accessed 09 January 2021

[22] Tech, “Online Safety Bill: New offences and tighter rules” (BBC 2021) accessed 09 January 2021