Panels

Panel Title: Internet Censorship and its Impact on Spread of Misinformation
(August 3, 2020)

Panel Moderator: Dr. Indrajit Ray, Colorado State University

Panelists:
  i.   Dr. Francesca Spezzano, Boise State University
  ii.  Dr. Hrishikesh B. Acharya, Rochester Institute of Technology
  iii. Dr. Jedidiah Crandall, Arizona State University
  iv.  Dr. Tarek F. Abdelzaher, University of Illinois, Urbana-Champaign

Abstract:

The ability to "seek, receive and impart information and ideas through any media and regardless of frontiers" is an explicit part of Article 19, the right to freedom of opinion and expression, in the Universal Declaration of Human Rights (UN). Yet, in practice, there is a serious tension between free speech and the control of bad information. For instance, it would hardly be responsible for a government to allow untested food, housing, or medicine to be advertised and sold; but quality control of goods and services itself requires some control over what information can be published (e.g. the prevention of false advertising). This panel discusses the interaction between the twin threats of censorship and misinformation, and specifically how they affect the availability of information online.

One of our challenges is to agree on what we mean by each of these terms. For example, Netflix shows different content in different countries; this is considered customization. But when it removes specific content in response to government pressure (such as the episode of Patriot Act with Hasan Minhaj blocked in Saudi Arabia), this becomes censorship. Is it reasonable to consider policies enforced by governments to be censorship, but policies enforced by private actors (including Google, Facebook, or Cloudflare) to be "editorial discretion"? How credible is the defense that such "platform" companies should not or cannot police their content?

Similarly, the question arises of what we mean by misinformation. Some information is non-factual but an honest mistake ("Dewey Defeats Truman"). Does this count as misinformation? At the other extreme, bad information can certainly be used as a weapon: hoaxes, phishing attacks, and propaganda are all examples. How can we build a coherent policy to deal with the various cases of misinformation? And is it reasonable to guess at the intent of the creator? Pinker and Galtung have also noted that media organizations themselves have an inbuilt bias: they are incentivized to promote triggering content, as it drives engagement. This applies with even more force to social networks, and it raises the question of whom we can trust to police misinformation on the Internet.

Finally, we consider how censorship interacts with misinformation. The simplest answer would be that misinformation injects bad content while censorship takes content out; but in practice the two interact much more deeply. For instance: if a government acts to block specific misinformation, does that qualify as censorship? Should governments, or platform companies, act to block misinformation? Misinformation can affect elections, health outcomes, and public order; on the other hand, censorship laws set up to combat misinformation have been abused to shut down inconvenient opponents. And there is the frightening prospect of the two combining, when authorities themselves promote misinformation and use censorship to ensure that no competing information can be seen. We now move on to a discussion of these issues, with particular emphasis on how they affect the Internet today.

Bio sketches:

Dr. Francesca Spezzano is an Assistant Professor in the Department of Computer Science at Boise State University. She received her Ph.D. in Computer Engineering from the University of Calabria, Italy, in 2012. Before joining Boise State, Dr. Spezzano was a Post-doctoral Research Associate at the University of Maryland Institute for Advanced Computer Studies. While a Ph.D. student, she visited the Database Group at the University of California, Santa Cruz. Her research interests deal with social network analysis and mining, with applications to misbehavior and misinformation detection and mitigation, information diffusion, and national security. She served as PC Co-Chair of ASONAM 2019. Dr. Spezzano is a recipient of the NSF CAREER award (2020). Her research is currently supported by the National Science Foundation and the US Army Research Office.

Dr. Hrishikesh B. Acharya is an Assistant Professor of Computing and Information Sciences at the Rochester Institute of Technology, which he joined after spending four years as an Assistant Professor at the Indian Institute of Technology, Delhi. He received his Ph.D. in Computer Science from the University of Texas at Austin in 2012. Dr. Acharya's main research interests are in networks and computer security (both proofs and code). His work focuses on the actual devices deployed in networks: switches and routers, in-path (firewalls, NAT) and on-path (NIDS) middleboxes, and end hosts (clients and servers). He is also interested in fault tolerance, distributed systems, and ad-hoc networks. Some of his research questions have technical answers; some have political answers; and some have no answers at all. Recently, Dr. Acharya received a National Science Foundation RAPID award to study Internet censorship policies in several countries, ranging from those that employ very strict controls (e.g. China, Russia) through moderate (e.g. South Korea, India) to very open (e.g. USA, Germany, Japan), and to characterize how they control access to information, specifically related to COVID-19, over the course of the pandemic.

Dr. Jedidiah Crandall recently joined the faculty at Arizona State University in the Biodesign Center for Biocomputation, Security and Society, as an Associate Professor in the School of Computing, Informatics, and Decision Systems Engineering (CIDSE). Prior to that, he was a faculty member at the University of New Mexico for 13 years. Since 2007, he has performed research to measure Internet censorship in different parts of the network stack (from IP up to social media and the application layer), and he is also interested in other aspects of Internet freedom, such as Internet surveillance and protecting at-risk Internet users (such as journalists and activists) in repressive environments. He is heavily involved with a non-profit called Breakpointing Bad (https://www.breakpointingbad.com) that fights for Internet freedom through vulnerability discovery, capture-the-flag events, and outreach.

Dr. Tarek Abdelzaher is a Professor and Willett Faculty Scholar in the Department of Computer Science at the University of Illinois, Urbana-Champaign. His research interests lie broadly in understanding and controlling the temporal properties of software systems in the face of increasing complexity, distribution, and degree of embedding in an external physical environment. He is especially interested in developing theory, architectural support, and computing abstractions for predictability in software systems, motivated by increasing software complexity and the growing sources of non-determinism. Dr. Abdelzaher has authored or coauthored several book chapters and more than 350 refereed publications in leading conferences and journals in fields including real-time computing, distributed systems, sensor networks, and control. Until recently, he served as Editor-in-Chief of the Journal of Real-Time Systems, and he currently serves as Associate Editor of the ACM Transactions on Sensor Networks, the ACM Transactions on Autonomous and Adaptive Systems, and the IEEE Transactions on Mobile Computing. He also serves on the steering and executive committees of several professional societies and conferences, and on numerous technical program and organizing committees in real-time computing, networking, quality of service, distributed systems, sensor networks, multimedia, and mobile computing, among others. Dr. Abdelzaher is a Fellow of the IEEE and a Senior Member of the ACM.

Dr. Indrajit Ray is a Professor of Computer Science at Colorado State University. He is currently serving, on a rotator basis, as a Program Director at the National Science Foundation in the Secure and Trustworthy Cyberspace program. His primary research is in computer security and privacy, with major contributions in security and privacy models, in particular trust models, access control models, and risk models, and in security protocol design using applied cryptographic techniques. He has published more than 150 technical papers and has advised 15 Ph.D. students, many of whom hold tenured positions in academia. He has also played leadership roles in the academic community by serving as program chair of various conferences. He served as General Chair of the 2015 ACM Conference on Computer and Communications Security, the flagship conference of ACM SIGSAC, and of the 2017 IEEE Conference on Communications and Network Security. He serves on the editorial boards of several journals, including the IEEE Transactions on Services Computing and the International Journal of Security and Networks. He founded the IFIP TC-11 WG 11.9 on Digital Forensics and served as its first Chair. He is a Senior Member of both the IEEE and the ACM and a member of IFIP TC-11.