The regulation of internet intermediaries such as Facebook and Google has drawn increasing academic, journalistic and political attention since the ‘fake news’ controversies that followed the UK’s Brexit vote and Donald Trump’s election victory in 2016. Since then, ‘misinformation’ (false or inaccurate information such as rumours) and ‘disinformation’ (misleading information spread deliberately to deceive) have become buzzwords. The rise of populist leaders in a number of countries, driven by a mix of socio-economic and cultural factors, has been seen as symptomatic of growing distrust in established models of governance. All of these developments point to a crisis of trust in governments, businesses, and social media platforms. While rising mistrust is of growing concern in the corporate world and among global elites, there is little consensus in the academic literature about how this trend endangers democracy, or about which policies might be employed to safeguard it.
How, then, can we make media intermediaries more accountable to the public? Digital platforms such as Facebook, Google and Twitter have become some of the world’s fastest-growing and most powerful media and communications companies, so new approaches to overseeing these entities in the public interest are required. Increasingly, global and regional bodies have introduced strategies to regulate platform power, including the Global Internet Forum to Counter Terrorism (GIFCT) and the European Commission’s Code of Conduct on countering illegal hate speech online. Certain EU Member States (Germany, France and Italy) have also taken initiatives to combat not just disinformation but other detrimental digital content, such as hate speech. There has likewise been a rise in the platforms’ own attempts at international, regional and localised forms of self-governance, such as Facebook’s Oversight Board and Twitter’s Trust and Safety Council. In the book, we ask specifically whether a new generation of internet regulation is needed to counter these trends and overturn a situation in which platforms have amassed great power but bear only ‘limited responsibility’ for the illegal and harmful content available via their services. We conclude that action is needed, while acknowledging that media policy has always been controversial, since it assumes state intervention that limits freedom of expression and the right to communication.
This is an emerging agenda whose intensity has grown since the US presidential election and the UK’s Brexit vote in 2016, and the ensuing moral panic over fake news and disinformation. While the impact of peer-sharing on media distribution, and of algorithmically based persuasion techniques in the new field of micro-targeted advertising, have not yet been sufficiently researched, what we are clear about is that the rise of platforms represents a structural change in the political economy of the media. The oligopolistic structure of today’s capitalist media ecology concentrates unprecedented corporate and political power in just a few large multinational companies, which monetise human effort and consumer assets in search of profit. This not only affects competition but also limits the liberal freedoms of speech and expression.
Written by Prof. Petros Iosifidis
Prof. Petros Iosifidis is Professor in Media and Communication Policy in the Department of Sociology, City University London. He has served on the Peer Review College of the Economic and Social Research Council (2011–18), has been Vice-Chair of the IAMCR Global Media Policy Working Group (2015–present), and has served as External Examiner at the University of Westminster and London Metropolitan University (2012–18). He is the author of nine books, including Global Media & National Policies (2016, with T. Flew and J. Steemers) and Digital Democracy, Internet Intermediaries and Disinformation (2020, with N. Nicoli).
Photograph: Hollywata | Flickr.com