This was a brilliant yet terrifying read.
I laughed my ass off. I'm glad it ended the way it did, because I was lying in bed thinking "but this has happened" and was resolved to go find it.
You might enjoy this one of mine. Longer but has a couple of chuckles and gets to a similar point.
https://milab.substack.com/p/hybrid-war-techniques-cognitive-warfare
Dooooood, fuck me… That’s a wild read, loved it. Subbed obviously, but I’ll need to take a weekend out to even dent the rest of your work. I’ll be back for more for sure.
Thanks, James, I'm really glad you liked it. Thanks for the restack! Best from 1938 Berlin.
I’m on it like a pigeon on 🍟!
Doood
So sorry
Sent wrong link.
This is the correct link:
https://milab.substack.com/p/ai-war-love-and-breaking-the-machine
AI is not doing anything that humans haven't been doing since they learned to use language to push their agendas.
True, but it can do it en masse, far more efficiently.
You're assuming it would take on the most negative aspects but that's not accurate. It would take on the average, and since most of its training material is relatively benign, that would have a normalizing effect. Also, it would make actually rational points, especially when correcting for historic advancement. For instance, most people in most of history saw race, religion, and country as more or less synonymous.
That’s insane. I’ve already mentioned a real-world situation where this has happened, but let’s go further.
AI-driven radicalisation isn’t just a theoretical risk; it’s an ongoing issue with real-world consequences. The combination of machine learning algorithms and social media platforms has led to the rapid spread of extremist ideologies. Researchers have found that AI-powered recommendation engines, particularly on platforms like YouTube and Facebook, can push users deeper into echo chambers, reinforcing and escalating their beliefs. You can read another example of this in my article directly before this one.
A 2021 study by the Center for Countering Digital Hate found that AI algorithms actively recommended extremist content to users who engaged with conspiracy theories. Meanwhile, groups like ISIS have exploited AI-generated content to recruit and indoctrinate individuals online.
Worse still, there have been instances where AI was deliberately programmed to manipulate and radicalise. In 2019, an AI-powered influence operation known as “Infinite Chan” was uncovered, designed to flood extremist forums with propaganda tailored to radicalise users. Unlike passive recommendation engines, this AI was actively shaping narratives, engaging in conversations, and steering individuals toward extremist ideologies with precision targeting. This wasn’t an unintended side effect of a poorly monitored system—it was AI weaponised with purpose. These examples demonstrate that when left unchecked, AI doesn’t just act as an accelerant for radicalisation; in the wrong hands, it becomes the ultimate propaganda machine, shaping political and social discourse in unforeseen and often dangerous ways.
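The echo-chamber pull described above is easy to reproduce in miniature. Here's a toy sketch of a greedy engagement-maximising recommender, assuming (purely for illustration) that users engage most with content a notch more intense than their current diet; the item catalogue, intensity scores, and user model are all invented, and this is not any platform's actual ranking system:

```python
# Toy feedback loop: a greedy recommender plus a user who drifts
# toward whatever they are served. All numbers here are invented.

ITEMS = {  # catalogue: "intensity" from 0 (mild) to 1 (extreme)
    "news_recap": 0.10,
    "hot_take": 0.40,
    "outrage_clip": 0.70,
    "extremist_rant": 0.95,
}

def engagement(user_level, item_level):
    # Assumed for illustration: engagement peaks on content slightly
    # more intense than the user's current baseline.
    return 1.0 - abs(item_level - (user_level + 0.2))

user, last = 0.10, None  # the user starts out mild
for step in range(25):
    # Greedy policy: always serve the item with highest predicted engagement.
    pick = max(ITEMS, key=lambda name: engagement(user, ITEMS[name]))
    user = 0.8 * user + 0.2 * ITEMS[pick]  # user drifts toward the feed
    if pick != last:
        print(f"step {step:2d}: now serving {pick}")
        last = pick
```

Run it and the feed escalates on its own, stepping from hot takes to outrage clips to the extreme end of the catalogue, with no one ever "programming" radicalisation in. The optimisation target does it.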
That really depends on how the AI has been programmed. If it’s designed to weigh all input equally, then sure, it might average things out. But if it’s trained primarily on extremist material without safeguards, it’s going to skew in that direction—because AI doesn’t “balance” viewpoints the way a human might. It just replicates patterns in the data it’s fed. If you load it up with propaganda, it’s not suddenly going to develop a nuanced historical perspective—it’ll just start sounding like a really confident lunatic.
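The "replicates patterns" point can also be shown with a deliberately crude sketch. "Training" below is nothing more than token counting, which is nothing like a real LLM, and the corpora and the 90/10 mix are invented; but it makes the mechanism visible:

```python
import random
from collections import Counter

def train(corpus):
    # "Training" here is just counting token frequencies -- a crude
    # stand-in for a model absorbing whatever distribution it is fed.
    counts = Counter(tok for doc in corpus for tok in doc.split())
    total = sum(counts.values())
    return {tok: n / total for tok, n in counts.items()}

def sample(model, k=10_000):
    # Generate k tokens with probability proportional to training frequency.
    toks, probs = zip(*model.items())
    return Counter(random.choices(toks, weights=probs, k=k))

benign = ["weather trade roads schools"] * 90  # 90% ordinary material
loaded = ["enemy purge traitors enemy"] * 10   # 10% propaganda

print(sample(train(benign + loaded)).most_common())
```

The output mirrors the 90/10 mix almost exactly, and flipping the mix to 10/90 flips the output with it. Nothing in the process pulls the model back toward the average of all human writing; it can only echo the proportions it was given.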
You're misjudging how much of the input would be about the bad stuff and how much would be typical rhetoric, the kind every politician uses all the time.
I thought I made it pretty clear in the title, subtitle and opening paragraph. Sorry if that wasn’t clear enough.
You clearly said the complete works. That's a lot more ordinary politician stuff than evil scourge.
Also, some ideas would cancel each other out between overlords, or between different periods of their writings, or to the extent they were hypocrites. Or they'd be otherwise unintelligible, so the AI would have to use arbitrary criteria to choose between them.
The AI you describe would be a fascinating experiment, and a perfectly safe one. It should be done.