The Dead Internet To Come
https://www.thenewatlantis.com/publicati...et-to-come
EXCERPTS: ... Dread gives way to the cold stab of terrible certainty as it hits you: they aren’t people. They’re bots. The Internet is all bots. Under your nose, the Internet of real people has gradually shifted into a digital world of shadow puppets.
They look like people, they act like people, but there are no people left. Well, there’s you and maybe a few others, but you can’t tell the difference, because the bots wear a million masks. You might be alone, and have been for a while. It’s a horror worse than blindness: the certainty that your vision is clear but there is no genuine world to be seen.
This is the world of the Internet after about 2016 — at least according to the Dead Internet Theory, whose defining description appeared in an online forum in 2021. The theory suggests a conspiracy to gaslight the entire world by replacing the user-powered Internet with an empty, AI-powered one populated by bot impostors.
It explains why all the cool people get banned, why Internet culture has become so stale, why the top influencers are the worst ones, and why discourse cycles seem so mechanically uniform. The perpetrators are the usual suspects: the U.S. government trying to control public opinion and corporations trying to get us to buy more stuff.
The Dead Internet Theory reads like a mix between a genuinely held conspiracy theory and a collaborative creepypasta — an Internet urban legend written to both amuse and scare its readers with tales on the edge of plausibility.
The theory is fun, but it’s not true, at least not yet. With AI-powered tools soon running in everyone’s pocket, the story of the Internet as a sterile realm of bots in human guise will become downright persuasive, and possibly true. Does it have to be this way?
[...] There’s long been a vague anxiety overshadowing the user-powered Internet, a hint of a great fakeness at the core of it all, and the Dead Internet Theory is only the latest manifestation of this unease.
[...] Beneath these panics is a collective gut instinct: that hiding behind the one-man-one-account facade of the Internet could all too easily be something else, an impersonal manipulative force.
This is the substance of the Dead Internet Theory, the sigh of the web user who feels the weight of garbage that’s been choking the web for the last several years. [...] The browsing user’s last refuge is appending “reddit” to search terms to bring up answers from real people having discussions.
But as we enter the age of large language models (LLMs), Reddit might not be safe either. LLMs are systems that take prompts and produce remarkably human-like text and media in response, and they’re poised to kick the flood of fake online content into overdrive...
[...] The good news is that these machines are not intelligent, and, the fears of otherwise-smart people aside, a Terminator apocalypse will require something entirely different from GPT-4. The bad news is precisely that it doesn’t need to be intelligent to pass our tests; it passes because our tests are dumb and we’re gullible.
[...] It’s a safe bet ... that powerful chatbot models will soon be in everyone’s hands: a tool called AutoGPT already exists. This autonomous system runs on GPT-4 and executes user-defined goals without outside human help. Unlike stock GPT-4, it can search the Internet in real time, learn from the information it finds, and code and run its own software as it chains tasks together to achieve the end goal. It promises to perform marketing research, write articles, and create websites. If it can do all that, it can pose as a human creating content on social media platforms.
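The loop the article describes — an LLM proposing steps toward a user-defined goal, a dispatcher executing them, and results feeding back in — can be sketched in a few lines. This is a hedged illustration, not AutoGPT's actual implementation: `fake_llm` is a stand-in for a real model call, and all names here are hypothetical.

```python
# Minimal sketch of an AutoGPT-style autonomous loop (illustrative only):
# the "model" proposes the next action toward a goal, the loop executes it,
# and the running history informs the next proposal.

def fake_llm(goal, history):
    """Stub model call: proposes the next step, or 'done' when finished.
    A real system would prompt an LLM with the goal and history here."""
    steps = ["search", "summarize", "write"]
    return steps[len(history)] if len(history) < len(steps) else "done"

def run_agent(goal, max_steps=10):
    """Chain model-proposed actions until the model signals completion."""
    history = []
    for _ in range(max_steps):
        action = fake_llm(goal, history)
        if action == "done":
            break
        # A real dispatcher would act here: web search, run code, post text.
        history.append(action)
    return history

print(run_agent("research a topic and draft an article"))
# prints ['search', 'summarize', 'write']
```

The point of the sketch is the article's point: nothing in the loop requires intelligence, only a model good enough to keep proposing plausible next steps.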
[...] The problem of a flood of bots that can pass the Turing Test — that is, pass as humans in a text-based conversation — is not that they are human-level intellects, it’s that they don’t need to be at that level to routinely fool humans. And if they can fool humans, they can fool spam filters.
[...] Let’s take a glimpse into a future where LLM bots are cheap, scalable, and ubiquitous.
It’s 2026, and the panic over an incipient AI apocalypse has subsided because “self-driving cars” still reliably plow through barriers in San Francisco. We’re all still alive, but we live in a world of mounting suspicion over every online interaction with an ostensible person.
Concerns over spam — now considered a quaint worry of the pre-LLM world — have been replaced with rational fears about threats that are at once more subtle and dramatic than anything before. High-profile scams, manipulations, and attacks are now almost always executed by humanlike bots rather than real people. These bots are faster than human attackers and they never get bored or tired... (MORE - missing details)