The regulation of internet intermediaries such as Facebook and Google has drawn increasing academic, journalistic and political attention since the ‘fake news’ controversies that followed the UK’s Brexit vote and Donald Trump’s election victory in 2016. Since then, ‘misinformation’ (false or inaccurate information, such as rumours) and ‘disinformation’ (misleading information spread deliberately to deceive) have become buzzwords. The rise of populist leaders in a number of countries, driven by a mix of socio-economic and cultural factors, has been seen as symptomatic of growing distrust in established models of governance. All of this points to a crisis of trust in governments, businesses and social media platforms. While the rise of mistrust is of growing concern in the corporate world and among global elites, there is little consensus in the academic literature about how the trend endangers democracy, or about which policies might best be employed to safeguard it.
The issue of digital disinformation, and the ways it can affect democracy and the public sphere, has become more acute during the ongoing COVID-19 crisis. The health crisis has forced us to rethink the balance between personal digital privacy and public health, the role of public sector agencies and tech giants such as Google, Facebook and Apple in contact tracing, and concerns about how personal data are being used. In the midst of a global pandemic it is important for citizens, first, to have a clear understanding of data and privacy policy, and second, to have access to reliable and accurate information. Internet intermediaries should take steps to circulate accurate information to the public and refrain from carrying damaging items such as conspiracy theories. This can be achieved through various mechanisms, including self-regulation, content moderation and government regulation.
My co-authored book Digital Democracy, Social Media and Disinformation deals with the key concepts regarding information mechanisms on social media platforms and addresses these core issues and their complex interrelations. It assesses and analyses these notions while helping to situate the flow of information via social media within the current, problematic state of digital democracy. A key issue examined is the pressure for a new regulatory framework for information intermediaries both within and outside the media industry, noting that the range of issues thrown up by the operations of these intermediaries now extends well beyond media policy per se, taking in data and privacy policy, national security, hate speech and other concerns. The concept of ‘fake news’ emerges as only one of the drivers of policy change: the dominance of information intermediaries such as Facebook and Google over the digital advertising market, and their monopolisation of data, may be equally significant.
How, then, can we make media intermediaries more accountable to the public? Digital platforms such as Facebook, Google and Twitter have become some of the world’s fastest-growing and most powerful media and communications companies, so new approaches to overseeing these entities in the public interest are required. Increasingly, global and regional bodies have introduced strategies to regulate platform power, including the Global Internet Forum to Counter Terrorism (GIFCT) and the European Commission’s Code of Conduct on countering illegal hate speech online. Certain EU Member States (Germany, France and Italy) have also taken initiatives to combat not just disinformation but other detrimental digital content such as hate speech. There has likewise been a rise in the platforms’ own attempts at international, regional and localised forms of self-governance, such as Facebook’s Oversight Board and Twitter’s Trust and Safety Council. In the book, we ask specifically whether a new generation of internet regulation is needed to counter these trends and overturn a situation in which platforms have amassed great power but bear ‘limited responsibility’ for the illegal and harmful content available via their services. We conclude that action is needed, while acknowledging that media policy has always been controversial because it presupposes state intervention that may limit freedom of expression and the right to communicate.
This is an emerging agenda whose intensity has grown since the US presidential election and the UK’s Brexit vote in 2016, and the moral panic over fake news and disinformation. While the impact of peer-sharing on media distribution, and of algorithmically driven persuasion techniques in the new field of micro-targeted advertising, have not yet been sufficiently well researched, what is clear is that the rise of platforms represents a structural change in the political economy of the media. The oligopolistic structure of today’s capitalist media ecology concentrates unprecedented corporate and political power in just a few large multinational companies, which monetise human effort and consumer data in search of profit. This not only affects competition but also limits the liberal freedoms of speech and expression.
Written by Prof. Petros Iosifidis
Petros Iosifidis is Professor of Media and Communication Policy in the Department of Sociology, City, University of London. Professor Iosifidis served on the Peer Review College of the Economic and Social Research Council (2011–18), has been Vice-Chair of the IAMCR Global Media Policy Working Group (2015–present), and served as External Examiner at the University of Westminster and London Metropolitan University (2012–18). He is the author of nine books, including Global Media & National Policies (2016, with T. Flew and J. Steemers) and Digital Democracy, Internet Intermediaries and Disinformation (2020, with N. Nicoli).
Photograph: Hollywata | Flickr.com