It sounds silly, but memes might be the future of warfare.
I guess it’s not entirely correct to say that memetic warfare is a thing of the future. Because, well, it’s already happening.
Political memes shaped the 2016 presidential election. Hate groups love hijacking memes and appropriating them as hate symbols. ASPI discussed the use of memes as propaganda for extremist movements in its Counterterrorism Yearbook 2021. And NATO has repeatedly acknowledged the burgeoning threat information warfare poses (most notably here).
Memes have power. And bad actors are abusing them.
What is it that makes memes so damn easy to weaponize? Why are they this effective at spreading disinformation and influencing human behavior?
It’s probably too complicated for me to address in a succinct and comprehensive way. But I can say that speed and audience size are big factors.
Here’s the super-mega-ultra abridged version:
Troll factories, bots, and fake news all play a role in memetic warfare.
“Although social networks and online forums, where much of public discourse now takes place, enable greater access to participation for everyday writers…the current scene includes more aggressive intervention by nonhuman actors, such as bots, that generate writing. Humans are, of course, usually responsible for authoring the computational processes that generate writing…, but by making certain aspects of online writing computational, human authors can typically operate with greater speed, scale, and autonomy”
Humans participate in propaganda, espionage, and the like. This isn’t new, certainly not to warfare. Instead of the traditional venues, though, you can now find these dehumanizing tactics in memes. And that’s precisely because bots are so good at what they do.