Governing Tech
Conversations about online safety, digital wellbeing, and technology regulation come up almost daily in my work. While we often think of the internet as a vast, ungoverned space, the reality is far more complex. Understanding who really controls our digital experiences - and how - is crucial for navigating modern life and maintaining mental health in an increasingly connected world.
Hidden Hierarchies
Formally, the internet has no governing or regulatory body; it spans the globe as a collection of voluntarily interconnected autonomous networks. But spend any time online and you'll recognize how much goes into its systems of control and moderation - from content algorithms to payment processors to domain registrars.
While no single entity "controls" the internet, power concentrates in unexpected places. Take payment processors like PayPal, Stripe, and traditional credit card companies. These companies can effectively determine which websites and creators can monetize their work. Or consider companies like Cloudflare that provide DDoS protection - their decisions about which sites to protect can determine whether a website remains accessible.
Even seemingly simple technical standards carry enormous weight. Google's decision to mark sites without HTTPS as "not secure" did more to encrypt web traffic than many legislative efforts. Apple and Google's app store policies effectively regulate what software can reach mobile users - and mobile devices now account for the majority of internet traffic.
Content moderation itself often follows unofficial but rigid hierarchies. Reddit's volunteer moderators shape discourse across thousands of communities. Discord server administrators set and enforce their own rules. YouTube's recommendation algorithm, more than any formal regulation, determines what content reaches viewers.
These systems of informal control highlight an important truth: the internet's "freedom" masks layers of corporate and technological governance that shape our online experience far more than current formal regulations. Understanding these power dynamics is crucial for anyone trying to navigate digital spaces thoughtfully.
Racing to Regulate
Last month, we looked extensively into AI, a technology the American government is struggling to regulate at a pace that matches its development. On October 30, 2023, President Biden signed an executive order to "manage AI’s safety and security risks, protect Americans’ privacy, advance equity and civil rights, stand up for consumers and workers, promote innovation and competition, advance American leadership around the world, and more."
It was an essential move. During the rise of social media, there were a few frameworks to fall back on, such as the Communications Decency Act of 1996 and the FCC's Media Bureau. But the widespread applications of AI don't fit neatly into these buckets and require something new.
This highlights a recurring challenge in digital regulation: technology outpaces our ability to govern it. Even with social media - by the time we began seriously discussing its mental health impacts on teens, platforms like Instagram had already become deeply embedded in youth culture. The Kids Online Safety Act (KOSA) only recently passed in the Senate. Now with AI, we're trying to be more proactive, but the technology's rapid advancement makes this difficult. I feel disheartened knowing it takes deepfakes of Taylor Swift to motivate this regulation. For years now, deepfakes, photoshopping, and video manipulation have been used as weapons for cyberbullying both minors and adults. The widespread psychological harm to everyday individuals, unfortunately, did not generate the same urgency for regulation as recent high-profile cases.
These regulatory challenges become even more complex when we consider their international dimensions. While the Taylor Swift incident prompted quick action in the United States, digital harms don't stop at national borders. Content that's regulated in one country can remain accessible in another, creating a patchwork of protections that varies by geography. This inconsistency is particularly evident in how social media platforms operate across different nations.
Digital Dividing Lines
Almost all of the world's largest social media companies are headquartered in the United States, but they do not operate the same way in every nation. This creates a complex web of varying user experiences, content policies, and mental health implications across borders.
Consider TikTok, which operates different versions of its app in different regions. In China, where it's known as Douyin, the app limits children to 40 minutes of use per day and primarily shows educational content. Meanwhile, in the US and other Western countries, users can scroll endlessly through whatever content the algorithm serves them. The divergence raises important questions about corporate responsibility and the role of design in mental wellbeing.
Meta's platforms (Facebook and Instagram) face similar challenges. They must navigate:
Europe's strict GDPR privacy regulations
China's complete ban on their services
Different content moderation standards across cultures
Varying age restrictions by country
Local laws about data storage and access
These differences create "digital borders" - invisible lines that shape our online experiences based on geography. And the examples we've discussed so far have been relatively tame. Musk deactivating Starlink in Ukraine can have a tremendous impact on the military. And this report from Amnesty International details the harm done to human rights by technology laws impacting migration, surveillance, and the promotion of terrorism.
There also always seems to be some litigation in motion. See Musk's recent disputes with Brazil, or look at how Russia sued Google for a comical $20,000,000,000,000,000,000,000,000,000,000,000 just last week for blocking pro-Russian channels on YouTube. The United States has held multiple hearings with tech executives that have been frankly embarrassing - showcasing how digitally illiterate our elected officials can be.
Codes of Control
These overlapping systems of control - formal, informal, corporate, and governmental - impact people's mental health daily. Yes, the internet is a network of computers; it's also a tapestry of human experiences, relationships, and opportunities woven together by complex and often contradictory rules. So for a technology that connects billions of people across countless cultural, political, and economic boundaries, this messiness is to be expected. The challenge isn't necessarily to create perfect regulations (those don't exist), but to develop frameworks that protect human wellbeing while preserving the internet's potential for connection and innovation.
For further reading on this topic, I recommend:
Speech Police by David Kaye
Custodians of the Internet by Tarleton Gillespie
Those two were guiding lights in writing this article.
And as always, PowerPoint was my guiding light in creating sweet header images - with some help this week from official mobile app stores, this nifty database with icons segmented by color, and this extraction tool.
Best,
Will Ard
LMSW, MBA