Strange days out here on the internet. Dangerous days, too. Facebook groups have people drinking horse dewormer in anticipation of JFK Jr’s resurrection, Instagram’s filling kids up with eating disorders and suicidal ideations, while Twitter just peals along with that irate, mosquito-pitched whine you hear just before everything goes red. Algorithmically-elected, engagement-optimized push notifications, suggestions, tips and tricks from the hottest thinkfluencers of the minute, pop, pop, popping up unbidden and inescapable, demanding the fealty of our screens as counted by our click-throughs.
But the internet today is not the internet of 13 halcyon years ago, in 2009. Nor is it now as it might be 13 years hence, in 2035. The societal divisions we currently face could deepen into outright catastrophe over the next decade because, remember kids, it’s only ever the worst day of your life so far. Then again, humanity might just buck its ingrained tendencies and come together to build a more robust, more resilient reimagining of today’s internet. One that finally exemplifies the “us” that could be if you wasn’t playin’.
What form those future public spaces eventually take is anybody’s guess… so Pew Research Center had some of the best-informed technologists in the industry give theirs. The Center partnered with Elon University’s Imagining the Internet Center in mid-summer of 2021 to survey 862 “technology innovators, developers, business and policy leaders” in a non-scientific canvassing. They were asked, “looking ahead to 2035, will digital spaces and people’s use of them be changed in ways that significantly serve the public good?”
The results were mixed. Sixty-one percent of respondents predicted that things will change for the better by 2035, though 18 percent argued that currently “digital spaces are evolving in a mostly negative way” (compared to just 10 percent who think their evolution is mostly positive).
Their concerns centered on four thematic problems: Humans behave selfishly when not tethered by traditional societal norms; the rate of online advancement has confounded society’s less tech-savvy members, making them more susceptible to malicious digital systems they don’t fully understand; governments are increasingly ineffective at regulating the tech industry; and, as such, trolls, scammers and Nazis continue to run amok in digital public spaces. And though few of the respondents held much confidence in society’s short-term solutions, many remained hopeful that we’ll get our collective act together and start acting like grown-ups on the internet by the middle of the next decade. Three cheers for low bars.
Harassment, cyberbullying, and doxxing are endemic to online interaction. For example, a 2019 report from the Anti-Defamation League (ADL) found that two-thirds of US online gamers have experienced “severe” harassment, with more than half reporting having been targeted based on their race, religion, ability, gender, sexual orientation or ethnicity, and nearly 30 percent claiming they’ve been doxxed in an online game. Likewise celebrities, politicians, professional athletes and public figures — even the unwilling ones — are all seemingly fair game for the vitriol of online mobs.
“Toxicity is a human attribute, not an element inherent to digital life,” Zizi Papacharissi, professor of political science and professor and head of communication at the University of Illinois-Chicago, told Pew surveyors. “Unless we design spaces to explicitly prohibit/penalize and curate against toxicity, we will not see an improvement.”
Many tech companies have tried tackling the issue, albeit half-heartedly, as reflected by the middling results their efforts have delivered to date. Riot Games debuted a Player Behavior team in 2012 to help mitigate toxic interactions in League of Legends, Tumblr recently launched a digital literacy campaign to stamp out cyberbullying, and Facebook continues to throw money, bots, and personal boundary bubbles at its community. And what have they to show for it?
“My strong sense is that the conditions and causes that underlie the multiple negative affordances and phenomena now so obvious and prevalent will not change substantially,” Charles Ess, emeritus professor in the department of media and communication at the University of Oslo, told Pew. “This is… about human selfhood and identity as culturally and socially shaped, coupled with the ongoing, all but colonizing dominance of the US-based tech giants and their affiliates. Much of this rests on the largely unbridled capitalism favored and fostered by the United States.”
“Transformation and innovation in digital spaces and digital life have often outpaced the understanding and analysis of their intended or unintended impact and hence have far surpassed efforts to rein in their less-savory consequences,” Alexa Raad, chief purpose and policy officer at Human Security, told Pew Research. Rick Doner, an emeritus professor at Emory University, added, “We now have a vicious cycle in which the digital innovations are undermining both the existing institutions and the potential for stronger institutions down the road.”
The effects of this can be seen in the black-box problem, in which the decision-making processes of AIs and algorithms are obscured from the humans who built them. Wisconsin’s use of the COMPAS judicial sentencing software is one such example.
“One of the biggest challenges is that the systems and algorithms that control these digital spaces have largely become unintelligible,” Ian O’Byrne, an assistant professor of Literacy Education at the College of Charleston, told Pew. “For the most part, the decisions that are made in our apps and platforms are only fully understood by a handful of individuals.”
People have historically reacted poorly to the “new normal” of emerging technologies — whether that’s pummeling surveillance state dorks, or believing that 5G wireless caused the COVID-19 pandemic and mRNA-derived vaccines contain Gates-brand mind control chips, or convincing themselves that NFTs are anything but a Ponzi scheme — and that gap between the state of the art and the state of public opinion is where the tub-thumpers and hucksters thrive.
“We have ample evidence that significant numbers of humans are inherently susceptible to demagogs and sociopaths,” Randall Gellens, director at Core Technology Consulting, told Pew Research. “I see digital communications turbocharging those aspects of social interaction and human nature that are exploited by those who seek power and financial gain, such as groupthink, longing for simplicity and certainty, and wanting to be part of something big and important.”
Gellens points to the emergence of Zoombombing during the height of the COVID-19 pandemic as one such example. Additionally, we saw myriad self-proclaimed experts “who did their own research” and “who are just asking questions” set up shop throughout social media, peddling quack diagnoses and hoax remedies to the detriment of the general public.
“Better education, especially honest teaching of history and effective critical-thinking skills, could mitigate this to some degree,” Gellens noted, “but those who benefit from this will fight such education efforts, as they have, and I don’t see how modern, pluralistic societies can summon the political courage to overcome this.”
Looking at the interactions between America’s elected representatives and the heads of various social media companies in recent years, Gellens’ prediction seems reasonable if not outright expected. For example, hearings regarding Section 230 (which governs the liability social media companies face for their users’ posts) in October 2020 were little more than a partisan circus. Follow-up hearings last April, without the CEOs in attendance, were only marginally more productive, but neither event led to substantive changes in how social media companies operate or how the federal government regulates their actions.
“Laws and regulations might be tried, but these change much more slowly than digital technologies and business practices,” Richard Barke, associate professor in the School of Public Policy at Georgia Tech, commented to Pew. “Policies have always lagged technologies, but the speed of change is much greater now.”
Even when social media purveyors are caught dead to rights, there’s precious little political will to do anything about it. This dissonance between technology and policy has raised concerns among Pew respondents that it may lead to the weaponization of data and accelerate America’s transition to a surveillance state.
“We are in a new kind of arms race we naively thought was over with the collapse of the Soviet Union. We are experiencing quantum leaps in AI/robotics capabilities,” said David Barnhizer, professor of law emeritus and founder/director of an environmental law clinic.
“It’s like trying to negotiate a mutually-assured-destruction model with several dozen nation-states holding weapons of mass destruction,” added Sam Punnett, retired owner of FAD Research. “I’d guess many Western legislators aren’t even aware of the scope of the problem.”
Between the digital world evolving faster than many of us can comfortably accommodate, the ineffectiveness of our elected officials in regulating it and the erosion of societal norms combating bad behavior, it’s little wonder that bad actors run rampant on today’s internet. There’s very little downside to doing so, noted Chris Labash, associate teaching professor of information systems management at Carnegie Mellon.
“My fear is that negative evolution of the digital sphere may be more rapid, more widespread and more insidious than its potential positive evolution,” he told Pew. “We have seen, 2016 to present especially, how digital spaces act as cover and as a breeding ground for some of the most negative elements of society, not just in the US, but worldwide.
“Whether the bad actors are from terror organizations or ‘simply’ from hate groups, these spaces have become digital roach holes that research suggests will only get larger, more numerous and more polarized and polarizing,” he continued. “That we will lose some of the worst and most extreme elements of society to these places is a given. Far more concerning is the number of less-thoughtful people who will become mesmerized and radicalized by these spaces and their denizens: people who, in a less digital world, might have had more willingness to consider alternate points of view.”
Countering this effect will take more than sending out good vibes into the ether, Labash argued. Nor will simply offering alternative spaces be enough: “it will take strategies, incentives and dialogue that is expansive and persuasive to attract those people and subtly educate them in approaches to separate real and accurate information from that which fuels mistrust, stupidity and hate.”
While the experts above raised a number of terrifying(ly salient) points, they represent a minority of the Pew survey’s respondents. The majority, as one would expect, had a much rosier outlook for the future of the internet, though not without some reservations of their own. Their overarching reactions followed a common theme: while we face significant challenges now, users, governments and companies will eventually step up to do what is necessary and socially “right,” even if done out of naked self-interest.
As Jenny L. Davis, a senior lecturer in sociology at the Australian National University, pointed out, “By 2035, I expect platforms themselves to be better regulated internally. This will be motivated, indeed necessary, to sustain public support, commercial sponsorships and a degree of regulatory autonomy.”
“We need to assume that in the coming 10 to 15 years, we will learn to harness digital spaces in better, less polarizing manners,” Alf Rehn, professor of innovation, design and management at the University of Southern Denmark, added. “In part, this will be due to the ability to use better AI-driven filtering and thus developing more-robust digital governance.”
“There will of course always be those who would weaponize digital spaces, and the need to be vigilant isn’t going to go away for a long while,” he conceded. “Better filtering tools will be met by more-advanced forms of cyberbullying and digital malfeasance, and better media literacy will be met by more elaborate fabrications – so all we can do is hope that we can keep accentuating the positive.”
Social media companies, if properly motivated, could do much towards that goal, argued Internet Hall of Fame inductee and former CTO of the Federal Communications Commission Henning Schulzrinne. “Some subset of people will choose fact-based, civil and constructive spaces, others will be attracted to or guided to conspiratorial, hostile and destructive spaces,” he replied to Pew. “For quite a few people, Facebook is a perfectly nice way to discuss culture, hobbies, family events or ask questions about travel – and even to, politely, disagree on matters politic. Other people are drawn to darker spaces defined by misinformation, hate and fear. All major platforms could make the ‘nicer’ version the easier choice.”
The problem with these sorts of solutions is that they have to be implemented by the social media companies themselves, few of whom have traditionally shown much concern for anything aside from their bottom line.
“Issues of privacy, autonomy, net neutrality, surveillance and sovereignty will continue to mark the lines on the battlefield between community advocates and academics on the one hand, and corporations wanting to make money on the other hand,” Marcus Foth, professor of informatics at Queensland University of Technology, told Pew. Convincing these companies to act in the public good will be no easy feat, explained Chris Arkenberg, research manager at Deloitte’s Center for Technology Media and Communications.
“I do believe the largest social media services will continue spending to make their services more appealing to the masses and to avoid regulatory responses that could curb their growth and profitability,” he said. “They will look for ways to support public initiatives toward confronting global warming, advocating for diversity and equality and optimizing our civic infrastructure while supporting innovators of many stripes.” But, in doing so, Arkenberg continued, social media services may have to reevaluate their business models in the face of content moderation at scale.
Such changes could be led by the users themselves, countered Susan Price, human-centered design innovator at Firecat Studio. “People are taking more and more notice of the ways social media has systematically disempowered them, and they are inventing and popularizing new ways to interact and publish content while exercising more control over their time, privacy, content data and content feeds,” she said. “The average internet user in 2035 will be more aware of the value of their attention and their content contributions due to platforms like Clubhouse and Twitter Spaces that monetarily reward users for participation.”
Price envisions new platforms and apps touting “fairer value propositions” to set themselves apart from their competition and attract users. “Privacy, malware and trolls will remain an ongoing battleground,” she continued. “Human ingenuity and lack of coordination between nations suggests that these larger issues will be with us for a long time.”
Perhaps the most audacious suggestion put forth by the canvassed expert pool came from Barry Chudakov, founder and principal at Sertain Research.
“Digital spaces expand our notions of right and wrong; of acceptable and unworthy,” he exclaimed. “Rights that we have fought for and cherished will not disappear; they will continue to be fundamental to freedom and democracy. Public audiences have a significant role to play by expanding our notion of human rights to include integrities. Integrity – the state of being whole and undivided – is a fundamental new imperative in emerging digital spaces which can easily conflate real and fake, fact and artifact.”
As such, Chudakov has proposed a full conceptual framework for enacting more civil digital public spaces, a “Bill of Integrities” which would include Integrities of Speech, Identity, Transparency, Life and Exceptions. How we would enforce such a bill, whether through social norms or government policy, remains to be seen. But even though we don’t currently have all (or really, any) of the solutions to the structural problems we currently face, these challenges are not insurmountable.
“The only way we will push our digital spaces in the right direction will be through deliberation, collective action and some form of shared governance,” Erhardt Graeff, assistant professor of social and computer science at Olin College of Engineering, said. “I am encouraged by the growing number of intellectuals, technologists and public servants now advocating for better digital spaces, realizing that these represent critical public infrastructure that ought to be designed for the public good.”
“We need to continue strengthening our public conversation about what values we want in our technology,” he continued, “honoring the expertise and voices of non-technologists and non-elites; use regulation to address problems such as monopoly and surveillance capitalism; and, when we can, refuse to design or be subject to antidemocratic and oppressive digital spaces.”