Scrolling Through Shadows: AI's Influence on Our Online Experience

The dead internet theory hypothesizes that sometime around 2016-2018, genuine human activity on the internet dropped dramatically. Proponents claim that much of the content we see online - from social media posts to product reviews - is actually generated by AI or other automated systems. This tracks with trends in other communication technologies. According to the National Consumer Law Center (NCLC), Americans receive more than 50 billion robocalls per year and lost an estimated $29 billion to spam calls in 2022. Legislation was passed last year to reduce the number of spam calls; however, 2024 data suggests robocalls are more prevalent than ever. And I haven't even touched on robotexts, which are more convincing and easier to create thanks to the development and increased accessibility of AI language models in recent years.
Social Media and AI
The influence of AI and automation is far-reaching. Here are some examples from other popular platforms:
Instagram and TikTok: AI-generated art and deepfake videos are becoming increasingly common. Much of this is fueled by the genuine art posted to these platforms, which is scraped to train generative models. There is no way for artists to opt out, and many artists rely on these platforms for their following, income, and community.
Twitter: In 2023, Elon Musk boldly proclaimed X had eliminated 90% of scams on its platform.... It's not true. Bot activity is rampant on the platform, though pinning down a concrete percentage is more difficult. Internal estimates of Twitter's bot population were initially debated, ranging from 5% to 33%, while a former FBI special agent specializing in cybersecurity suggested as many as 80% of profiles are bots. Twitter is a private company, AI is convincing, and the potential for obfuscation is enormous.
YouTube: The platform's recommendation algorithm, powered by machine learning, determines what videos users see next. This AI-driven system has been criticized for potentially creating "echo chambers" and promoting controversial content. I've also found ProZD's videos on navigating YouTube's search delistings interesting.
Amazon: Product reviews, a critical factor in many purchasing decisions, are increasingly scrutinized for authenticity. Amazon has been battling fake reviews generated by humans and bots alike. Some of these are exceptionally easy to detect. In any case, Amazon now synthesizes its reviews into an AI review overview.... Which seems counter to the problem they're trying to address? It's as if they're making a statement like, "We've done our best to remove AI reviews. Also, we've created our own AI overview that will be prominently displayed at the top of every review section." I could write an entire article dedicated to other aspects of Amazon negatively impacted by AI. For example, I purchased a book from Amazon on a niche psychology topic that appears to have been written entirely by AI, with surface-level content, high repetition, and no works cited.
Dating Apps: AI is being used to detect fake profiles and moderate content; however, it is imperfect. Bots still make up a significant portion of online dating profiles. In 2022, nearly 70,000 people reported a romance scam, with a median loss of $4,400 per person and total reported losses of $1.3 billion. Some services actually encourage the use of AI bots for icebreakers, which feels antithetical to relationship building.
Since we're on LinkedIn, let's quickly look at the prevalence and encouragement of AI content. For starters, I see this every time I begin a post:
There's missing context here. AI writing isn't even available to base users of LinkedIn; you'll be prompted to subscribe to LinkedIn Premium. I started a Premium trial as research for this article, so now my post window looks like this:
See how subtle the difference is?
The platform also aims to motivate my contributions to collaborative articles, which are then synthesized and interpreted by AI:
With Premium, I can delegate the writing of my profile to AI, although I find its raw efforts lacking:
To interact on social media is to assist in developing these systems. The internet as a whole was once crawled to power search engines, and now it is crawled to train AI. I would be foolish to believe the articles I am writing here in earnest are immune from feeding many, many algorithms. It's clear that we're all engaging with AI-generated or AI-influenced content more often than we might realize.
My Approach to Writing with AI
One aspect of the dead internet theory is the lack of transparency about the use of AI in the content we're presented with. It would be hypocritical of me to write all of this and not acknowledge that I do use AI as a collaborative tool in my writing process - but not as a replacement for my human thought and experience. Hopefully that comes through. I've found that using AI in the way I do actually takes longer than writing on my own, but I believe the quality of the content and my ability to present the information justify the added time. Here's how I use it:
Outlining: I use AI to brainstorm ideas and create initial outlines. It helps me consider angles I might not have thought of immediately. I end up dismissing most of these, but every now and then I'm struck by an idea that leads me to research further.
Editing and Refinement: After I posted my first article, a few people wrote in with grammar suggestions and corrections. I now use AI tools to help polish my writing, checking for clarity and suggesting alternative phrasings.
Human Touch and Expertise: Crucially, every idea, example, and piece of advice in these articles comes from my professional experience, education, and personal reflection. AI doesn't generate the core content or insights. It doesn't create hyperlinks, cite sources, or take screenshots.
To drive this point home, I'll highlight how this approach differs in several ways from the concerning practices outlined earlier:
Transparency: I'm open about my use of AI, unlike the covert AI-generated content the dead internet theory warns about.
Human-Centered: My content is fundamentally human-generated, with AI serving as a tool rather than the primary creator.
Ethical Use: I use AI to enhance my work, not to create false engagement or misleading content. As a licensed social worker, I'm bound by ethical standards in my communications, AI-assisted or not.
Navigating the Digital Landscape
Whether or not you buy into the dead internet theory, maintaining a balanced approach to online engagement is vital. Some very brief recommendations:
Cultivate Embodied Connections: Prioritize meaningful interactions, both online and offline. Match the content of a conversation with the mode of communication. If it's really important, it's probably best in person, through video, or over the phone.
Practice Digital Literacy: Learn to critically evaluate online content and its sources. Write with AI - at least once. After doing this, you might begin to intuit when something has been written using AI.
Set Boundaries: Limit your time online and be intentional about your digital consumption.
... But does it even matter if it's "me"?
That's the question to ponder. My slightly more removed take is that it depends on the goals you bring to social media and the weight you place on authenticity. People create profiles for many, many reasons, and I'm struck by how many individuals even create multiple profiles on social media sites.
I heard the term "finsta" start cropping up towards the end of my high school years - a term for a fake Instagram account reserved for closer friends, where the individual can interact in a more private way.... An ironic name, no? The most common ways I've seen it used are by teens looking to escape their parents' watch, individuals looking to hide outside relationships from their partners, and colleagues who've created additional profiles to separate their personal and professional lives. The introduction of Close Friends features is perhaps a way to change this behavior. From a business perspective, it's more complicated to track an individual across multiple accounts to synthesize their client persona.
I feel compelled to say your voice matters, as if a statement could hold the weight of everything I'm trying to express.
Because ultimately, the best way to know someone is not through online text. In the context of the dead internet theory and the prevalence of AI-generated content, this becomes even more apparent. The nuances of human expression - the subtle inconsistencies, the evolution of thoughts over time, the context-dependent reactions - are difficult, if not impossible, to replicate authentically through AI or carefully curated online personas.
Real human connection involves more than just exchanging information. It's about shared experiences, non-verbal cues, and the kind of spontaneous, imperfect interactions that often don't translate well (or at all) to text.
So... How do you maintain authenticity in your online interactions?
I’d love to hear your carefully curated perspectives in the comments. 😅
Will Ard, LMSW, MBA