In the wake of Elon Musk’s controversial takeover of Twitter in October 2022 — and the subsequent, disastrous rebrand to “X” in July 2023 — the entire social media landscape is undergoing significant transformation.
It was one of the strangest moves in the history of brand strategy, bewildering many in the industry who failed to see any genius in it. But the warning signs were loud and clear: Musk’s polarizing approach to content moderation and pay-for-play policies led to a mass exodus of users, with losses estimated in the millions. This shift spurred the emergence of new platforms such as Meta’s Threads, Jack Dorsey’s Bluesky, Mastodon, Post.news, T2 and Artifact.
Among these, Meta’s Threads, launched in July 2023, stands out for its vast user base: the platform added 100 million users in its first week. However, concerns about data protection, content moderation and free speech loom large. The question remains: Can we restore civility to our online discourse?
Over the past decade and a half, social media has been a catalyst for societal disruption. A 2020 Pew Research Center poll revealed that 64% of Americans perceive social media as having a predominantly negative impact on society, a sentiment that remains unchanged today.
Ramesh Srinivasan, a professor of information studies and director of the University of California Center for Global Digital Cultures at UCLA, suggests that we’re at a crossroads, with social media creating individualized online experiences that foster division and polarization. The rise of new platforms promising enhanced privacy and decentralization doesn’t necessarily address these issues.
“All social media at scale means that individually, each of us is living in different worlds when we are online. So that means that in terms of any kind of question of democracy, or a larger global community, or even a national democracy, we’re all actually in our own tunnels, so to speak,” Srinivasan says. “Those tunnels are being architected for us invisibly, based on what will arouse and create outrage, which means that at scale, we’re going to see one another with suspicion. We’re going to get more and more into groupthink and polarization. We’re not going to be able to have compassionate disagreements with one another.”
The debate over online free speech versus hate speech has intensified, particularly since Elon Musk’s takeover of Twitter, with content moderation becoming even more of a politically charged issue. Meanwhile, major social media companies, including Meta and Twitter, have reduced their content moderation efforts.
“Users have been sold a lie, and it’s exemplified by how Elon Musk speaks about things. They’re presented with a dichotomy that says you can either have free speech, or you can have safety. I think that users increasingly don’t buy that. It’s a false dichotomy,” says Samuel Woolley, assistant professor in the School of Journalism and project director for propaganda research at the Center for Media Engagement at the University of Texas at Austin. Woolley is the author of Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity.
Musk’s actions on Twitter, including the removal of protections for users, have sparked deep concern over the spread of disinformation. The concept of free speech, Woolley argues, doesn’t equate to unrestricted speech, and private companies have the right to regulate content on their platforms.
Balancing user well-being against market pressures is crucial in this evolving landscape.
How do we rectify this? Major social media companies have consistently proven unreliable in handling user data and content moderation, and misinformation continues to proliferate at an alarming rate.
“We’re in a moment of clear transition,” says Woolley. The success of new platforms like Meta’s Threads and the rise of others like Bluesky and Mastodon indicates users’ dissatisfaction and their search for safer spaces for expression. The solution, it appears, won’t come from a single source but rather will require a collective effort from all users.
With nearly 5 billion global users, social media is a significant business arena. Danielle Wiley, CEO of influencer marketing firm Sway Group, notes that advanced technology allows businesses to measure ROI, offer shoppable content and target users geographically.
“Brands are being forced to figure out which platforms make the most sense so that they can focus where it counts,” Wiley writes in an email. “Social media was vital for businesses during the pandemic. The political environment has made things trickier. When everything is super polarized, like it is now, every interaction online becomes a high-stakes situation.”
Influencers who sway user behavior can exacerbate misinformation. Political commentator and influencer Kaivan Shroff believes businesses and creators share a responsibility to moderate these spaces. Shroff, who had built a following of around 120,000 on Twitter before Musk took over, says that many like him have felt “digitally homeless” since the billionaire stepped in.
“I think we’ve come to this place where now it’s so easy to be for sale on a micro level,” Shroff says. “Nobody wants to talk about it because the people getting paid don’t want to talk about it, even on the corporate level.”
Welcome to Decentraland?
Federated and decentralized networks, which are self-regulated by a collective, are also discussed as potential solutions to social media’s moderation and civility problems, though their implementation is not without challenges.
“Most of those projects are just too difficult to implement. They’re too technically complex, creating challenges in allowing them to be viable competitors to platforms like Twitter,” says Srinivasan.
Woolley agrees: “There’s been movement towards connecting some of the federated platforms. The question is, at what point does it stop being a federated system? And does it just go back to becoming kind of the system that we know?”
While these systems alone may not solve civility issues, they could provide more user protections and enable businesses to connect with users in safer environments.
The Role of Regulation
Government involvement in social media regulation is a hot topic, particularly given the platforms’ impact on free speech and public discourse, and their use of public internet resources. Some propose treating social networks like traditional media and telecom industries.
Woolley suggests that antitrust and monopoly laws will play a significant role in the future. “We know that Meta and Google are not akin to AT&T or telecom companies. But they do benefit greatly from the public good and supply a service integral to democracy, governance and human rights. We’ve got to learn to take a more nuanced approach to the problems in these spaces.”
Srinivasan likens the situation to Amazon’s use of public infrastructure without bearing the cost. “The internet is a digital infrastructure of packet switching, and none of these companies would exist without that internet.”
However, government oversight of tech and social media in the U.S. has been limited. While Europe has implemented rules for social media moderation and user data protection, U.S. courts have protected social media companies from lawsuits related to content algorithms. Furthermore, a recent ruling from a U.S. federal judge limited government agencies’ interaction with social media companies. These developments raise concerns about the erosion of content moderation and corporate responsibility, posing potential risks to democratic values globally.
The Role of Users
Users and consumers also play a crucial role in fostering online civility. With 15 years of experience, the public is now more aware of the issues, toxicity and dangers associated with social media. Understanding how algorithms prioritize and amplify outrage and verifying shared content sources can help promote a more civil society.
Shroff emphasizes collective responsibility, while noting Twitter’s unreliability. He finds Threads, which requires an Instagram account for access, a more accountable and moderated platform. “It’s a lot of people posting, blocking bots and agreeing not to engage accounts that are clearly rage-gaming engagement,” Shroff says.
The ability to block dishonest discourse and set rules on Threads has been appealing to users. “We have a chance to reinvent the public square,” he concludes, highlighting the potential for a more civil and accountable social media environment.
AI Isn’t the Answer
Social media companies, including Meta and Twitter, are touting the promise of AI to bring us back to some semblance of civility, but the technology as it stands now is nowhere near ready for prime time. Srinivasan says that the algorithms powering ChatGPT and other large language models engage in simple stochastic parroting. “It’s mimicry. It is not democracy. And mimicry is not creativity,” Srinivasan says. These are essentially beefed-up chatbots trained on the very content we share on social media.
“AI has kind of been a MacGuffin for Zuckerberg and other folks,” Woolley says. “They’ve presented it as the cure-all — a stand-in for the problems that exist, especially with content moderation. But AI as we know it right now is not sophisticated enough to do the kind of culturally, socially, linguistically nuanced content moderation that’s needed when you are dealing with things as complicated as free speech or privacy or user safety. Automation and AI already play a critical role in scaling efforts to moderate content and to oversee the management of content on these platforms. But we’re always going to need human oversight.”
One thing is certain: Social media has become an inextricable element of our modern society. But the cat is now out of the bag, and we are years behind where we should be when it comes to regulating the space.
“We’ve already allowed the current network system to grow at an unfettered rate to incorporate billions of users under a communication regime that is thoroughly unregulated. The solutions to the problems that exist are going to involve very unsexy, systematic regulatory work that’s careful and that has clear guardrails, teeth and repercussions,” Woolley says, citing the need for both governmental and corporate regulation. “I think we’re at a time in the world where we’re going back to the drawing board and asking really important questions about what it looks like to create sensible regulations for maintaining a healthier space — a space that’s better for everyone involved.”
The path to restoring civility and reducing polarization in our online lives is complex and multifaceted. It requires collective efforts from all stakeholders, including users, businesses, influencers and government entities. It also calls for a more nuanced approach to content moderation, data protection and free speech.
The future of social media is uncertain, but the need for a more civil and accountable online environment is clear.