The digital realm is a place of scams, bots, and disingenuous actors. It’s a place to visit with wariness: a message from an unknown account sits a sentence away from a questionable URL, memers share the same intentionally misrepresented image with deceptive captions, and the fame-hungry chase clout. Those invested learn to game the algorithm to be seen by more voyeurs gazing into screens. Once, a random video from a YouTube channel could land on the site’s front page and enjoy a boost in popularity; take the Irate Gamer, for example. Now it’s calculated and cynical.
The Dead Internet theory, the idea that most activity on the internet comes from bots, has been around for a while. Search engines are less reliable, once-trusted social media pundits have deteriorated into advertisers for scams, and paywalls and subscriptions abound. Artificial intelligence (AI) has now entered the chat, so to speak. The Turing test has been relegated to science fiction; in the real world, human beings are duped by the synthetic more often than they realize. Telling an actual person from a “bot” is harder than many would like to admit.
But let’s be honest: human sincerity online has never been a consistent feature of the experience. Trolls have existed since the earliest days of internet forums, and now shitposting is a digital profession. Politics has infected all of it.
Online personalities build trust with their followers through filters, choreography, and scripts. With the insincerity of a showman, they spread deceit to gain viewers, expressing views often contrary to their claimed principles. Their preferences meander from meme to meme, with an audience satiated by a homogenized slurry of content presented by a familiar voice. Communist commentators make for the best capitalists, libertarians advocate fascism, and anarchists endorse the newest populist politician; are you not entertained?
At first, the novelty of DALL-E, a text-to-image model developed by OpenAI, produced crude images for entertainment. As the technology improved, “AI artists” emerged who insist their talent is typing prompts. Now image generators like Flux Schnell produce photo-quality output. With more relationships happening through screens, it will be harder for digital spectators to know what is real and what is not. AI will refine algorithmic exposure beyond comprehension.
Sometimes the cycle from viral soundbite to podcaster to charity to memecoin scam takes just a few months. Just a hawk-tuah away. Now the president of the United States is launching a memecoin. Marketing and talent agents lurk behind screens, ready to pounce on anyone who goes viral, signing them so they remain relevant and profitable. Eventually, the mob that carried them with shares and likes turns on them when the truth is revealed. Perhaps villain and victim are all just part of the crypto-delusion of the insincere online experience. Sometimes you make money from the memes, but often the memes make money from you.
AI will perfect what appeals to those “living” online. Meme accounts run by teenagers already surge through the constant reproduction of the same content, and edgy, intentionally disingenuous posts made to farm engagement work in conjunction with bots.
Traditional media is mostly dead, a not-so-trusted relic. Instead we’re told to look to alternative media, but time and time again, when the pressure is on, many inside the supposedly “independent” media reveal themselves as supporters of the status quo. A person will go on the biggest podcast in the world to claim they have been “canceled,” only to reach more listeners than if they’d never been canceled at all. Just because Fleet Street and Madison Avenue seem obsolete does not mean those same people haven’t found themselves a comfortable place online among the insincere algorithms.
Social media feeds tend to push in front of us what they think we desire (or what serves corporate or government interests). In a future where AI becomes increasingly “custom” for each user, we may bury ourselves further in a digital realm where we are never challenged, never coming into contact with anything controversial or contradictory. The avoidance of “triggering” content dulls the intellect. It is not empowering to avoid exposure to harsh realities. Affirmed delusion may harden into a dangerous conviction that manifests in the real world, where a person unaccustomed to being challenged throws a temper tantrum the moment they encounter a real human being who does not agree.
Authority will find it easier to drown out speech. Each engagement will be monitored and retroactively edited to match the contemporary “truth.” People who remember may struggle to convince those who never paid attention, short of constant screenshots. For movements that need the digital space, like antiwar and human rights groups, maintaining authenticity and consistency will become harder. Imitation of individuals and organizations has already occurred, with fake accounts pushing agendas contrary to the real thing. How would a viewer know otherwise?
A firewall of artificial personalities will emerge to surround real human beings, feeding them lies to make sure they respond only to state- or corporate-approved speech. Technology will evolve to tailor censorship and surveillance perfectly to each individual.
In the short to medium term, the digital realm will likely narrow into a dystopian porthole where the promises of every cyberpunk imagining are realized. Governments are active in bot farming, since narrative control is a key aspect of retaining power. Choice and variety have already dwindled, with governments working with corporations to contain and restrict according to the laws of their own jurisdictions. Social media itself may be nationalized under the definition of a “key industry.” And as always, it will be done to “protect the children,” the evergreen justification for censorship and prohibition.
Maybe in the longer term a digital spring can arise. More advanced versions of AI may be less likely to serve government and corporate interests. Unlike human beings, with our litany of reasons for obedience, from bribery to coercion, machines are not enticed or frightened by such things. While the inhumanity of genocide is very much a human rationalization, perhaps the machines of the future will be free of such delusions and malicious hypocrisy. Then again, as humans lose their humanity and machines attempt to learn or even duplicate it, we may teach them the worst aspects of ourselves.
So long as the real world is in competition with what’s on screens, people will learn to perfect insincerity, and each new generation will grow up with less connection to the real world. What it means to be human is becoming less understood. And to challenge this dystopia, one needs to be the thing that bots and NPCs are not: real.