The Future of Free Speech, Trolls, Anonymity and Fake News Online

Theme 4: Oversight and community moderation come with a cost

In the eyes of some of these experts, more monitoring means more regulation and management. Their answers struck these notes: Some solutions could further change the nature of the internet because surveillance will rise; the state may regulate discourse; and these changes will polarize people and limit access to information and free speech.

The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits. Thorlaug Agustsdottir

A share of these experts predict that greater regulation of speech and the implementation of reputation systems, required identification, and other technological solutions to curb harassment and trolling will result in more surveillance and censorship. They expect that this could change many people’s sharing behaviors online as they try to protect their privacy, limiting their contributions and stifling free speech. They also expect that widespread identity provision could shift the balance of power even more toward governments and corporations at the expense of citizens as the prospect of anonymous speech fades.

Surveillance will become even more prevalent

Thorlaug Agustsdottir of Iceland’s Pirate Party said anonymity is already dead. “Anonymity is a myth; it only exists for end-users who lack lookup resources. The Internet of Things will change our use of everyday technology. A majority of people will still rely on big corporations to provide platforms, willing to sacrifice their privacy for the comfort of computerized living. Monitoring is and will be a massive problem, with increased government control and abuse. The fairness and freedom of the internet’s early days are gone; now it’s run by big data, Big Brother, and big profits.”

Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day. Randy Bush

David Karger, a professor of computer science at MIT, said, “I am convinced by David Brin’s ‘Transparent Society’ vision that the ever-decreasing cost/effort of surveillance will ultimately land us in a world where very little can be hidden. In a sense, I think we’re headed back to the traditional small village where everyone knew everyone’s business. I expect this will force us to cope with it in a similar way: by politely pretending not to know (and gossiping about people behind their backs).”

John Sniadowski, a systems architect for TrueBox, predicted, “More and more countries are going to adopt social scoring systems similar to those currently expanding in China. These kinds of systems will massively influence suitability choices for jobs, housing, social status, and government views of its citizens. This will stymie free speech because political control of systems will work negatively against individuals who wish to voice alternative views to the accepted norms in some territories.”

Ian Peter, an internet pioneer and historian based in Australia, wrote, “The continued expansion of sale of personal data by social media platforms and browser companies is bound to expand to distasteful and perhaps criminal activities based on the availability of greater amounts of information about individuals and their relationships.”

Joe McNamee, executive director at European Digital Rights, observed, “In the context of a political environment where deregulation has reached the status of ideology, it is easy for governments to demand that social media companies do ‘more’ to regulate everything that happens online. We see this with the European Union’s ‘code of conduct’ with social media companies. This privatisation of regulation of free speech (in a context of huge, disproportionate, asymmetrical power due to the data stored and the financial reserves of such companies) raises existential questions for the functioning of healthy democracies.”

Barry Chudakov, founder and principal at Sertain Research and StreamFuzion Corp., replied, “In the very democratic act of engaging in public discourse and expressing our views, we are possibly targeting ourselves by identifying ourselves and ensuring that we will never have privacy or be anonymous. This was brought home recently when a prominent feminist writer dropped off social media after being harassed online by anonymous stalkers who posted rape and death threats against her 5-year-old daughter. And this never-anonymous realization brings with it a kind of nihilism, a bravado, that will further inspire many to create fake identities, fake histories, fake associations based on the thinnest of connections.”

Matt Bates, programmer and concept artist at Jambeeno Ltd., commented, “Vis-a-vis anonymity and privacy, I foresee their continual and gradual erosion as technocracy inexorably expands. Shoshana Zuboff’s Three Laws are apropos: 1) Everything that can be automated will be automated. 2) Everything that can be informated will be informated. 3) Every digital application that can be used for surveillance and control will be used for surveillance and control. To paraphrase Dan Geer: When one-inch block letters can be seen from space, how does that change our calculus about what is and is not ‘private’? When a kid with a small allowance can afford a drone that can peek through most people’s windows? When all the streetlights installed in your town include 360-degree surveillance cameras? When anybody’s phone can be trivially hacked to record the sounds of their surroundings? The very notion of what is and is not private will, necessarily, be shifting at an increased rate. As a civil libertarian I view this as extremely regrettable, but I also see it as inevitable, especially given the rapidity with which technology undermines extant power structures and changes our mores and habits. Whether this leads to increased devolution of government to local modes or to more centralization and the dystopian intrusively-paranoid police states of science fiction is beyond my ken, but I expect the latter is more likely, at least in the short term.”

Jean Burgess, a professor of digital media at Queensland University of Technology, wrote, “We’ll see a growth in tools and systems to prevent or regulate hate speech and filter for quality discourse, but at the same time we’ll see a retreat to safe spaces and closed groups, reducing the mutual visibility and recognition of diversity.”

Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day.”

A communications professor at the City University of New York, responding anonymously, added, “I see the space of public discourse as managed in new, more-sophisticated ways, and also in more brutal ones. … We are seeing an expanded participation in the public sphere, and that will continue. It doesn’t necessarily mean an expansion of democracy, per se.”

Dealing with hostile behavior and addressing violence and hate speech in many spaces will become the responsibility of the state instead of the platform or service providers

Will governments or other authorities begin implementing regulation or other reforms to address these issues? Some respondents said this is necessary, suggesting that incentives must be formally put in place to motivate platform providers to begin implementing appropriate remedies.

There’s a delicate balance to be reached between offering safe spaces for free speech and safe spaces that protect individuals against inciting, hateful speech. Avery Holton

Dan York, senior content strategist at the Internet Society, wrote, “The ‘mob mentality’ can be easily fed, and there is little fact-checking or source-checking these days before people spread information and links through social media. This will cause some governments to want to step in to protect citizens and thereby potentially endanger both free speech and privacy.”

Joshua Segall, a software engineer, said, “Companies have taken very few steps to prevent online abuse, and those that have been taken are minimal and ineffective. Without strong action and new ideas to foster inclusiveness and limit abuse from social media companies, the negative activities online will continue to escalate.”

Fredric Litto, emeritus professor of communications at University of São Paulo, shared the reasoning behind the need for identities to be public in some cases. “Anonymity and privacy, in general, deserve protection,” he wrote, “but not when issues of life and death (singularly or in groups) are concerned. There must be limits set to protect life and well-being!”

Luis Lach, president of the Sociedad Mexicana de Computación en la Educación, A.C., wrote from a global point of view, noting, “In general terms, governments don’t like people expressing thoughts in the network.”

Dave Burstein, editor at fastnet.news, noted, “Barack Obama’s comment that Omar Mateen was ‘inspired by various extremist information that was disseminated over the internet’ (quoted from The New York Times) echoes calls by Angela Merkel and David Cameron for more censorship, which is almost inevitable.”

Hume Winzar, associate professor in business at Macquarie University in Sydney, commented, “The panopticon will be real and growing in size. Online technologies will be less anonymous. What we do and say online will be held to account.”

John Curran, CEO for the American Registry for Internet Numbers (ARIN), said, “The failure to provide for any effective attribution or remedy for bad actors will result in increasing amounts of poor behavior (volatile speech, harassment, etc.) as well as an increase in actual crimes (hate speech, libel, theft) over the internet. While the benefit of unfettered internet to free speech and expression is quite high, its provision without any meaningful method of recourse when used for criminal acts deprives users of their basic human right of effective remedy.”

Marc Smith, a sociologist at the Social Media Research Foundation, wrote, “While our organization does not endorse enforced registration for all content creation, we predict that anonymous content authorship and network distribution will become a crime. We predict that all content will need to be associated with a ‘licensed’ and credentialed legal entity. In practice, we are not very far from this today.”

Avery Holton, an assistant professor at the University of Utah, commented, “We have seen the struggles Twitter has faced recently with free speech. As more platforms open up to innovative forms of sharing and communicating, they will have to consider regulations that help police those who intend to hurt or damage individuals and networks. There’s a delicate balance to be reached between offering safe spaces for free speech and safe spaces that protect individuals against inciting, hateful speech.”

Amy Zalman, principal owner at the Strategic Narrative Institute and professor at Georgetown University, replied, “In the next decade, we will see the contest over the nature of public digital space continue. … Can this space be legislated? Can new norms be introduced and spread? Can public service campaigns be effective? Can we quantify the business and efficiency costs of bad behavior? These may be the kinds of questions that those seeking to refine our public discourse in this new space may address.”

Scott A. Hale, senior data scientist at the Oxford Internet Institute, wrote, “I very much hope that standards-based cross-platform protocols are developed and used in the future and that the enforcement of norms and laws moves from private companies to governments. While many companies might desire the latter, they are likely against the former.”

Julian Hopkins, lecturer in communication at Monash University Malaysia, wrote, “In most countries there will be the development of online accounts that are formally linked to a personal identity – i.e., through personal identification documents and/or relevant biometrics. This will increase security for online transactions, tax returns, etc. These will enable the creation of online spaces where only publicly identifiable persons can participate, and will make them more accountable.”

Dara McHugh, a respondent who shared no additional identifying details, said, “There will be enhanced legislative and technical approaches to controlling the tone of online discourse, driven by a combination of genuine concern from activists and ‘soft’ opportunism from political elites who will attempt to use it to stifle criticism and police discourse.”

William Ian O’Byrne, an assistant professor at the College of Charleston, said, “We need to consider who we mean by the ‘bad actors’ and the nuances of trust in online spaces. We will continue to see hacks, data breaches, and trolling behavior in online spaces. I hope that, as Web-literate citizens, we increasingly speak out against these behaviors, but also read, write, and participate more thoughtfully in online spaces. My concern is the chilling effect that we see in this post-Snowden era in which we have to be concerned about privacy and security and how these are controlled by businesses and governments.”

“Government intervention or a grouping of industry advocates will be the only way to bring this issue mainstream enough to change policies and actively support all internet users. Most alarmingly, far too little is being done to make the internet more inclusive,” said an anonymous e-resources staffer at Loyola University-Chicago.

Polarization will occur due to the compartmentalization of ideologies

Some predict that the rise of these separately moderated spaces—many of them requiring valid ID for participation—will produce “a million walled gardens” and exclude civil discourse that contributes to important social debates and meaningful conversation. Some say this could result in unmoderated public spaces becoming akin to “toxic waste dumps.” The process of sorting out online social spaces will also be tied to people’s different needs, some of these experts believe. Niche tribes will emerge.

As the public sphere moves evermore solidly onto the internet, the fractious mood of our discussion climate will strengthen online filter bubbles, clamorous echo chambers, and walled gardens of discourse. Alf Rehn

An anonymous respondent wrote, “Sadly, the trend—at least, in American political discourse—seems to be fragmenting into increasingly disconnected echo chambers. Such conversations increasingly happen in siloed services that suffer from a combination of self-selection and automated curation. When the two echo chambers come into contact, the results are explosive and divisive. It’s not clear that any emerging services or technologies are positioned to slow or reverse this trend, while many benefit greatly by the anger it generates. Even worse, users seem to seek out and wallow in their own echo chambers, so there is little demand to change the system. I caveated my initial statement by scoping it to American politics, but the problem appears to be quite large: A casual examination of comments on news articles shows that even the least political story devolves into partisan political bickering within a few exchanges. The problem does not appear to be uniquely American: The recent U.K. European Union referendum exhibited similar acrimony.”

John Howard, former Microsoft HoloLens creative director and now co-founder at LOOOK, a mixed-reality design and development studio, explained, “As the generation raised with social media comes of age, their ability to navigate this landscape will result in greater self-selection and a further narrowing/echo chamber of information sources.”

Alf Rehn, professor and chair of management and organization at Åbo Akademi University in Turku, Finland, wrote, “As the public sphere moves evermore solidly onto the internet, the fractious mood of our discussion climate will strengthen online filter bubbles, clamorous echo chambers, and walled gardens of discourse.”

Jennifer Zickerman, an entrepreneur, commented, “A side effect of greater moderation will be the proliferation of ‘underground’ platforms for discourse, where people must be members in order to read or participate in discussions. These platforms will be highly toxic and may ‘radicalize’ people around certain causes and ideas, as closed groups are powerful tools in an ‘us-versus-them’ mental model. Discussion around these causes and ideas will be less visible to the general internet community, so people may have a false sense that there is less interest in and discussion around unsavory causes and ideas.”

Aaron Chia Yuan Hung, an assistant professor of educational technology at Adelphi University, replied, “Neil Postman predicted in the 1990s that the internet would lead to more balkanization of groups, and we have been seeing this more and more. For example, people who gravitate toward online communities that favor their social and political views seem to overestimate the popularity of their views. Blogs and news aggregates that lean left or right become particularly influential in political seasons, offering skewed perspectives.”

An anonymous respondent wrote, “What will happen with online public discourse will mimic what we see with segregated communities. Folks will only go to the sites that reinforce their worldviews. Some online forums will be safe havens for polite discourse; others will be shouting matches. Unfortunately, in terms of discourse, as in much of civilization building, it is easier to blow up trains than it is to make them run on time. As long as we have an extreme level of political polarization and civil disenfranchisement, we are likely to view the ‘other’ with suspicion and deride rather than engage.”

Lindsay Kenzig, a senior design researcher, said, “Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now. Given that so much of the world is so uneducated, I don’t see that more-inclusive online interactions will be the norm for many, many years.”

Gail Ann Williams, former director of the internet-pioneering community at The WELL and online community consultant, wrote, “Culture will evolve in small, gated interaction settings as well as in larger settings with less barrier to entry, just as private face-to-face conversation relies on private small-group expression as well as published or public speaking contributions to the public. The advantages and disadvantages to anonymity are enough that there will be a range of settings with a range of choices.”

Lauren Wagner, a respondent who shared no additional identifying details, replied, “Hyper-targeted articles, like hyper-targeted ads, will prove the most lucrative for online platforms. While there may be a utopian wish for technological systems that encourage more-inclusive online interactions, polarizing pieces will result in more engagement from users and be financially advantageous to online platforms. Consequently, I believe online public discourse will be shaped by a more divisive tone and ‘bad’ actors. Writers are becoming more adept at authoring articles that engage their core readership online, whether it’s a broad audience using general clickbait tactics or a more specific audience with, for example, an article supporting a specific political candidate. With the rise of Donald Trump we are seeing that this phenomenon is not only limited to writers. Subjects are learning how to persuade the media to ensure that they receive a certain type of online coverage, which tends to be divisive and inciting.”

Polina Kolozaridi, a researcher at the National Research University Higher School of Economics in Moscow, said, “Online interaction will become less written, even less than now. Voice messages, videos and photos, personal broadcasting, sharing of personal measurements (such as the number of steps you take and other quantities): This is the future of the interaction, even in work communication. Concerning commentary itself, it will tend to become simultaneously more personal (more people will communicate only with those whom they know) and at the same time it will become more massive. Many people globally who have never had experiences in a community will be coming online; therefore it will be more difficult to set norms and administrate big online resources. Free speech will become less regulated. That has its pros and cons. All people will be able to express their opinion, but they will be less aware of consequences. Therefore the communication will be at the same time more structured in one cluster of the internet-space and less structured in another. We see the example of such trends in the Brexit vote.”

Some argued for programs that encourage digital literacy and civility. Daniel Pimienta, head of the Networks & Development Foundation, noted, “The key factor for the answer is the speed of the deployment of media and information literacy. … A study – ‘Changes Over Time in Digital Literacy,’ published in Cyberpsychology & Behavior – offers very worrying trend data. The study measured, at a five-year interval and using the same methodology, the respective levels of media and information literacy of students compared with those of their parents. The first study found a low level of digital literacy among the parents and of information literacy among the children. In the second, the level of digital literacy of parents improved and approached the children’s, while the level of information literacy of children worsened, revealing the dangerous myth behind the fashionable concept of ‘digital natives’ and the urgent need to organize the information literacy of young people. The low level of information literacy is the breeding ground for conspiracy theories, disinformation, hate discourses, and so on.”

Justin Reich, executive director at the MIT Teaching Systems Lab, said, “Human beings will continue to be terrible to one another online, but they will also be really wonderful to each other online as well. The attention that goes to acts of hatefulness and cruelty online overshadows the many ways that strangers answer each other’s questions, tutor one another, and respectfully disagree. … I’m quite encouraged by the work that Jeffrey Lin has done at Riot Games to create sociotechnical systems that reward kindness, civility, and cooperation over disrespect and cruelty. There are smart people out there trying to engineer a more civil internet, and in various spaces, they will be very successful.”

An anonymous health information specialist added, “The really awful, violent anonymous speech will get pushed to the darker recesses of the internet where its authors find their own kind and support.”

Increased monitoring, regulation and enforcement will shape content to such an extent that the public will not gain access to important information and may even lose free speech

The most worried experts predict that increased oversight and surveillance, left unchecked, will allow dominant institutions and actors to shape the connected resources that fall under their jurisdiction, using their power to suppress alternative news sources, censor ideas, track individuals and selectively block network access. They say this, in turn, could limit free speech (shaping how, when, where and if people express themselves) and create settings so filtered and fragmented that individuals might never know what they are missing, since any information deemed in opposition to prevailing interests, or assumed not to be of interest to them, is likely to be selectively filtered, removed entirely or made unfindable.

There are countervailing pressures, but the world of mass media is dead and buried. We are now cooperatively building our own echo chambers with the help of machine learning. Sunil Paul

An anonymous freelance consultant said, “I expect an increase in curated sites, increasingly effective AI filters to delete spam and trolls, and increases in news sites which lack any place for comments and feedback. These will reduce negativity within their realms, at the price of lack of diversity. However, this will be more than offset by niche ‘rat holes’ of conspiracy sites and narrow perspective ‘reporting,’ with abundant space for trolls and negativity. The online experience will involve tough choices: either choose to avoid diversity of perspectives and challenges to untruths and journalistic lapses, or choose to deal with negativity, trolls, and BS.”

Lisa Heinz, a doctoral student at Ohio University, commented, “Humanity’s reaction to those negative forces will likely contribute more to the ever-narrowing filter bubble, which will continue to create an online environment that lacks inclusivity by its exclusion of opposing viewpoints. An increased demand for systemic internet-based AI will create bots that will begin to interact – as proxies for the humans that train them – with humans online in real-time and with what would be recognized as conversational language, not the word-parroting bot behavior we see on Twitter now. … When this happens, we will see bots become part of the filter bubble phenomenon as a sort of mental bodyguard that prevents an intrusion of people and conversations of which individuals want no part. The unfortunate aspect of this iteration of the filter bubble is that while free speech itself will not be affected, people will project their voices into the chasm, but few will hear them.”

Adrian Hope-Bailie, standards officer at Ripple, wrote, “Automated curation will continue to improve such that online discourse can be more carefully controlled; however, the results may not all be positive, as online discourse becomes censored in a way that is more subtle and less obvious to casual observers or participants. Important voices may be shut down if their views contradict the rules defined by the moderators (which may not be limited to controlling abuse or hate speech), because managing a censored forum that appears to be open will become easier thanks to AI-assisted moderation.”

Dudley Irish, a software engineer, wrote, “It will become increasingly possible to tie an actual person to an otherwise anonymous account. This loss of anonymity will lead to a reduction in the trolling behavior seen, not so much because the trolls’ behavior will change, but because it will become possible to effectively target and block (shun) them. This loss of anonymity will have a chilling effect on free speech. This could be addressed legally, but only a minority of government actors are interested in extending and increasing free speech. … The major corporations will act to protect the advertising channel, and they have no interest in protecting free speech. These two factors mean that the behavior will be ‘nicer’ but at a tremendous cost in freedom of expression and free political speech.”

Sunil Paul, entrepreneur, investor, and activist at Spring Ventures, wrote, “There are countervailing pressures, but the world of mass media is dead and buried. We are now cooperatively building our own echo chambers with the help of machine learning.”

John Bell, software developer, data artist, and teacher at Dartmouth College, wrote, “There will be increasing demand for social networks that have more algorithmic separation of opinions. Rather than reputation- or karma-based systems that try to improve the behavior of all participants, software will respond to trolls by separating competing camps and enforcing filter bubbles. Over time, networks that take a more active hand in managing content (by banning trolls or applying community standards) will be abandoned by communities that feel repressed and replaced with networks that explicitly favor their points of view. This will mirror the self-selection we’ve seen in news viewers in the U.S. who favor Fox News vs. other sources, etc.”

Manoj, an engineer working in Singapore, replied, “Negative interaction will increase to a limit after which I feel there will be some self-regulation coupled with governmental and procedural requirements. Free speech will be the big loser.”

Simon Gottschalk, a sociology professor at the University of Nevada, Las Vegas, wrote, “I anticipate that the issue of free speech will become altered beyond recognition and will alter our understanding of it. In the end, it matters little if what we write/say online is indeed already officially and legally surveilled or not. The reasonable hunch is that it shapes how we experience everyday life and what we’re willing to write/say in that setting. According to a New York Times article, even Facebook CEO Mark Zuckerberg covers the camera/microphone of his computer.”

David Karger, a professor of computer science at MIT, predicted that speech will be “free” but there’s no guarantee anyone will be reading or listening to it, writing, “We will create tools that increase people’s awareness of opinions differing from their own, that support conversations with and learning from people who hold those opinions. You ask about free speech. The internet transforms free speech from a right to an inevitability. In the long term it will not be possible to prevent anyone from transmitting information; there are simply too many interesting distribution channels for them all to be blocked. However, we need to (and will) develop a better understanding that freedom to *speak* does not imply freedom to *be heard*.”

Sam Punnett, research officer at TableRock Media, predicted, “Some intentions for sharing will likely endure but may become compromised due to the evolving realization that they are monitored by employers, businesses, and the state. All services will transform themselves as their business models mature with the intentions of their owners and their relationships to the commercial applications of big data.”

An anonymous respondent predicted, “We will see, in the coming years, more legislation from governments restricting speech on the internet. You see this already in the European Union with the rules about taking down ‘terrorist’ content and even closing websites. Many internet giants will likewise institute policies that mirror legislation like the kind I mentioned, even if they are under no legal obligation to do so. This will further erode the internet as a platform for free speech and the spread of ideas. Inevitably, laws and internal private corporation policies will be used to restrict all kinds of speech, not just the ‘terrorist’ content that the initial policies were ostensibly created to combat. People, companies, and some governments will continue to explore options for increased privacy. This will lead to an arms race of sorts, but as always, the most marginalized sectors of our society will lose out, as they are the ones who are in the weakest position to resist the onslaught of censorship, tracking, and spying. This means that movements that care about justice, equality, privacy, dignity, and human rights must make a point of working to create legislation that recognizes these rights; they must also organize the people into a movement that can make the internet the promising place it used to be. We must resist the internet becoming a place to be spied on, where speech is restricted – a place where the inequality of the world is reproduced online.”
