Weapons of Mass Persuasion: How AI Will Change Campaigns and Communication

John Artunkal

The advent of polling and focus groups marked the first great revolution in the art of “manufacturing consent.” Then came the era of social media, Big Data, and microtargeting. It is time to think about what will come next in the Age of AI: campaigns that are faster, smarter, and potentially more subversive than ever before.

Introduction

In 2018, the controversy surrounding Cambridge Analytica and its methods during the 2016 U.S. Presidential race dominated headlines. Many audiences, particularly those already unhappy with and shocked by the outcome of the election, were appalled at the idea that a firm had harvested data from millions of unknowing Facebook users to target and manipulate them based on their predicted psychological and demographic traits. Many commentators argued that the use of such “weapons-grade communication technology” should be banned outright.

In truth, the real scandal surrounding Cambridge Analytica was much more about the firm’s non-consensual collection of data than about its methods, which had already been partially pioneered by the Obama campaign in 2012. The evolution of methods in the arena of persuasion and information warfare is nothing new. The last century has seen many pioneers advance the science of persuasion, turning it from top-down “guesswork”, to a series of rigorous quantitative and qualitative methods, to a highly sophisticated and self-improving technological operation.

Today, we are witnessing the next chapter of communications technology, this time powered by AI. Campaigns will be faster, self-optimising, hyper-targeted, incredibly well-informed, and potentially much more negative in tone – with opposition research and misinformation playing a far greater role than ever before.

I. The First Revolution: From Bernays to Luntz

Before looking at how AI will advance and accelerate progress in the arena of campaigns, it is worth examining the progress that has taken place over the past century, how it happened, and what it achieved. While many different starting points could be suggested, it may be best to begin thinking about modern campaigns in the 1920s, when Edward Bernays, a nephew of Sigmund Freud, began making a name for himself as the architect behind numerous successful campaigns. Commonly referred to as the “father of public relations,” Bernays epitomised the era of the “great mind” – an individual propagandist who, with enough instinctive insight and a superior theory of human behaviour, could almost single-handedly orchestrate an impactful campaign – whether to encourage women to start smoking or to make bacon and eggs synonymous with breakfast in the mind of the American consumer. These campaigns, however, were entirely reliant upon their orchestrator’s instincts – information inputs were highly limited and campaign metrics came only long after execution.

The later decades of the 20th century saw the greater professionalisation of politics and the more widespread application and fine-tuning of new methods. This is when figures like Arthur Finkelstein and Frank Luntz helped usher in a “Golden Age” for Republican politics through their use of polling, surveys, and focus groups. Target audiences were identified and the messages for those audiences were constantly tested, adapted, and deployed for maximum impact. Frank Luntz’s dial sessions and “instant response” focus groups became infamous in Washington. With a group of swing voters, a set of dials, and a running video, he could test messages in real-time, refining language until every word was weaponised. “It’s not what you say, it’s what they hear,” Luntz famously declared. Meanwhile, Arthur Finkelstein ran campaigns across the U.S. and Eastern Europe, winning with relentless, data-driven negativity — he didn’t just ask voters what they thought, he found out what scared them, and then turned that fear into a strategy.

Now, campaigns had a much richer array of information inputs – surveys, polls, and focus groups. What’s more: these methods could be applied again and again over the course of a campaign, albeit over the space of weeks and months, to create a feedback loop that made the campaigns more and more effective.

II. The Second Revolution: From Pollsters to Programmers

In 2008, the digital era of political persuasion began. The advent of the internet and social media offered a myriad of new possibilities for those seeking to run effective campaigns. Barack Obama’s first presidential campaign seized the opportunity to weaponise the internet: mining supporter data, running microtargeted ads, and using social media to recruit volunteers. Over the next decade, digital methods proliferated, and techniques like sentiment monitoring, A/B testing, predictive modelling, and microtargeting became widespread. Advertising began to shift online, and campaigns had more data to inform their decisions than ever before – from people’s addresses, ethnicities, financial histories, social affiliations, interests, and marital statuses to, in some cases, more intimate details such as their psychological traits, sexuality, and level of intelligence.

The feedback loop became more powerful than ever as well. Digital campaigns provided managers with real-time metrics to assess their performance, and A/B testing offered a way of quickly and continuously matching the right iteration of an advert with the right audience. The internet also meant that the traditional methods of polling, surveys, and focus groups could be run more regularly and cheaply than before – strengthening the feedback loop even further. Cambridge Analytica did not invent microtargeting, nor did it invent these digital methods, but it did bring them into the popular consciousness and came to characterise the way in which modern campaigns are understood today.

However, while this second revolution represented a major advancement in the field, campaigns were still not without their limitations. Polling and focus groups can still take weeks to complete. Opposition research can still be a painstaking task – time-intensive and expensive, with no guarantee of yielding actionable insights. Large data sets still need to be analysed and interpreted. Findings from focus groups need to be understood and synthesised into strategy. Videos and images used as advertising content need to be created and altered, requiring film crews and graphic designers on payroll. Lastly, due to the constraints of research time and cost, the difficulty of creating multiple variations of an advertisement, and the human input required, campaigns have been limited in how granular they can go. In the US, even presidential campaigns often define and specifically target only a dozen or so audiences, while in the UK, campaigns usually focus on even fewer personas. In the last UK General Election, for example, the Conservative Party identified only four major target voter audiences, named “Harold”, “Denise”, “Alison”, and “Mark”. If campaigns could go even more granular – identifying far more target audiences, understanding each of them deeply, and targeting each of them uniquely and simultaneously with ease – they could be more effective than ever before.


III. The Age of AI: The Next Great Disruption

Today, in the Age of AI, all of the limitations mentioned above can be dissolved. Campaigns are going to become increasingly powerful, automated processes, with a greater wealth of data inputs than ever before, outputs produced at incredible speed, and a feedback loop that leads to constant self-optimisation.

In some ways, this process has already started. However, with the current accelerations taking place in widely available AI tools and applications, it is likely the world will get its first true taste of AI-driven campaigning in the 2026 US midterm elections next year.

Based on current trends and the quality of AI software widely available, there are three main ways in which campaigns will first begin to leverage AI:

  1. Dirt and Deepfakes: AI-Powered Opposition Research & Content Creation

This is perhaps one of the most potentially destabilising applications of AI with regards to its possible impacts on political culture. Negative content and stories regarding candidates, both real and fake, are highly likely to proliferate in a significant way.

Artificial intelligence is already transforming the field of open-source intelligence (OSINT) and opposition research. What once took teams of analysts days or weeks of sifting through social media posts, news archives, and obscure forums can now be accomplished in hours using AI-powered tools. Platforms can automatically scrape vast quantities of online data, use AI-driven facial recognition to identify faces, and even detect suspicious networks or associations, with the power to expose scandals that might otherwise have remained buried. As these tools become more sophisticated, it is hard to imagine any political candidate or operative whose digital footprint cannot be quickly unearthed and, where there are “skeletons in the closet”, used against them in negative campaigns.

Meanwhile, those who do not have any actual scandals can have them generated for them by their enemies. AI-generated video content continues to become more sophisticated with each passing week and is increasingly indistinguishable from real footage. As such, there is no doubt that it will be used in campaigns to discredit candidates and distract voters. In fact, this phenomenon has already started. In early May 2023, Turkey’s President Erdogan screened a manipulated video at a campaign rally that made a campaign commercial belonging to his opponent, Kemal Kilicdaroglu, appear to feature Kurdish separatist militants – thereby providing “evidence” of his claim that the opposition party was aligned with PKK terrorists. Similarly, another minor opposition candidate, Muharrem Ince, was the victim of a deepfake plot in which a video from an Israeli adult site was manipulated to make it appear as though he was having an extramarital affair.

The implications are profound. Anyone, from regional politicians to school board candidates, can now be forensically examined in hours, not weeks. Embarrassing incidents that might have lain dormant are now fair game. And where there are no embarrassing incidents to speak of, they can always be generated from thin air. The effect: politics will get nastier, bloodier, and far more personal. Conversely, politicians who might otherwise have been seriously damaged by genuine scandals may now have a new lifeline: proclaiming any photo or video evidence against them to be nothing more than a “deepfake”.

  2. Synthetic Research, Real Results: The New Era of Campaign Intelligence

While there may still be a place for polling and focus groups in the Age of AI, particularly in countries or areas where many people still have little to no online presence, the way in which campaign research is conducted will change rapidly.

When it comes to focus groups, AI can now be used to create “synthetic focus groups” at a fraction of the time and cost. By running simulations with thousands of virtual personas that reflect real-world demographics, psychographics, and media habits – drawing on massive pools of social media content – AI is increasingly able to produce invaluable target-audience insights and model shifts in public opinion in response to campaign messaging. Interested in what keeps married middle-class white women in England up at night? Just ask AI.
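
To make this concrete, the sketch below shows what a rudimentary “synthetic focus group” might look like in practice: a handful of hypothetical voter personas are described in prompts, and a large language model is asked to react to a draft message in character. The personas, the draft message, and the model name are illustrative assumptions, not a description of any real campaign tool.

```python
# Minimal sketch of a "synthetic focus group": each persona is a prompt
# describing a hypothetical voter, and the model reacts to a draft campaign
# message in character. Personas, message, and model name are invented
# for illustration only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

personas = [
    "a 42-year-old married teacher in the Midlands, worried about mortgage costs",
    "a 28-year-old renter in Manchester who mostly gets news from TikTok",
    "a 67-year-old retired engineer in Kent who votes in every election",
]

draft_message = "Our plan freezes council tax for two years while funding 10,000 new nurses."

def react(persona: str, message: str) -> str:
    """Ask the model to respond to the message in character as the persona."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": f"You are {persona}. React honestly, in 2-3 sentences, "
                        "to the campaign message you are shown."},
            {"role": "user", "content": message},
        ],
    )
    return response.choices[0].message.content

for persona in personas:
    print(f"--- {persona}\n{react(persona, draft_message)}\n")
```

Whether such simulated reactions actually track the views of real voters remains an open question, but the difference in cost and turnaround compared with a traditional focus group is already stark.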

As for polling, which often involves slow, telephone-based surveys and consistently declining response rates, AI also has a major role to play. AI can track shifts in public opinion by scraping millions of public posts and making probabilistic estimates of sentiment shifts across regions and demographics – identifying emerging trends far sooner than traditional pollsters would. Additionally, instead of taking snapshots of voter sentiment every month or so, AI will give campaigns a living, breathing portrait of the electorate that can be analysed afresh at any time. When it comes to identifying target audiences, AI can pinpoint who falls into categories like “soft supporter”, “neutral”, “persuadable”, and “soft opponent” by analysing social media content and matching demographic markers to clusters of users whose posts indicate they are “battleground” voters.
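
A hedged sketch of what this kind of always-on sentiment tracking could look like follows: a generic off-the-shelf sentiment classifier is run over a batch of public posts (hard-coded stand-ins here for scraped data), and the scores are aggregated by region. The model choice, the posts, and the region labels are invented for illustration.

```python
# Sketch of AI-assisted sentiment tracking: classify a batch of public posts
# and aggregate signed scores by region to approximate a rolling portrait
# of opinion. Posts and regions are illustrative stand-ins for scraped data.
from collections import defaultdict
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default English sentiment model

posts = [
    {"region": "North West", "text": "Finally a plan that takes the NHS seriously."},
    {"region": "North West", "text": "Another week, another broken promise on housing."},
    {"region": "South East", "text": "Not convinced by either party on tax, honestly."},
]

scores = defaultdict(list)
for post in posts:
    result = classifier(post["text"])[0]  # e.g. {'label': 'POSITIVE', 'score': 0.98}
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    scores[post["region"]].append(signed)

for region, values in scores.items():
    print(f"{region}: mean sentiment {sum(values) / len(values):+.2f} over {len(values)} posts")
```

The same aggregation could, in principle, be broken down by demographic cluster rather than region, which is where categories like “soft supporter” and “persuadable” would come into play.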

Perhaps the most transformative change will be in terms of speed. As mentioned earlier, traditional focus groups and polls are limited by cost, logistics, and the time needed to process results. AI-driven analysis, by contrast, can operate on a daily, if not hourly, basis. Campaigns can track message resonance, shifting voter blocs, and potential crises as they unfold, adapting strategies far more nimbly. This perpetual feedback loop means mistakes can be corrected in real time, emerging opportunities may be seized immediately, and, perhaps, learnings applied instantly, without any human involvement.

  3. Campaigners Conceding Control: Automation & Hyper-Personalisation

There has already been speculation about how eventually CEOs, including Sam Altman himself, may rely on AI for all of their decision-making – or even be replaced by it. As AI continues to develop, its ability to synthesize complex information coming in from a wide variety of sources may indeed make it able to “run” large organisations – including political campaigns. If AI has the means to conduct opposition research, generate impactful photo and video content, constantly run focus groups and surveys, execute strategies, and then analyse performance to make any necessary adjustments, then why not have AI as the new campaign manager?

One of the most obvious benefits of AI taking the reins is that, rather than having a human campaign manager review research already provided by AI and then use the findings to create advertising content, also provided by AI, a central campaign AI could do all of this automatically and iteratively, using its far greater cognitive capacity to make advertisements and messaging hyper-personalised.

Imagine a world in which every email, text message, social ad, or video spot is uniquely tailored to a single persuadable voter, based on their latest online behaviour. For example, an undecided young professional who shares environmental memes might see a campaign video focused on climate policy, delivered via Instagram at lunchtime. Meanwhile, her neighbour, an older parent worried about crime, receives a WhatsApp message highlighting a candidate’s tough-on-crime credentials, written in a conversational style, complete with neighbourhood-specific statistics. All of this could happen automatically, thousands or even millions of times a day, powered by AI models that are constantly testing, learning, and refining their approach. With a human at the helm of a campaign, this would be impossible.
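
As a toy illustration of this kind of matching – with all profile fields, message variants, and scoring rules invented for the example – a per-voter selection step might look something like this:

```python
# Hedged sketch of per-voter message selection: a toy scoring function that
# matches a voter profile (inferred interests, channel habits) to the best
# message variant and delivery slot. All fields and weights are hypothetical;
# a real system would learn them from engagement data.
from dataclasses import dataclass

@dataclass
class Voter:
    interests: set[str]        # inferred from recent online behaviour
    preferred_channel: str     # e.g. "instagram", "whatsapp", "email"
    active_hour: int           # hour of day with most engagement

MESSAGES = {
    "climate_video": {"topics": {"environment", "energy"}, "format": "video"},
    "crime_message": {"topics": {"crime", "policing"}, "format": "text"},
    "tax_graphic":   {"topics": {"tax", "economy"}, "format": "image"},
}

def pick_message(voter: Voter) -> tuple[str, str, int]:
    """Choose the message whose topics overlap most with the voter's interests."""
    best = max(MESSAGES, key=lambda m: len(MESSAGES[m]["topics"] & voter.interests))
    return best, voter.preferred_channel, voter.active_hour

voter = Voter(interests={"environment", "housing"}, preferred_channel="instagram", active_hour=13)
print(pick_message(voter))  # -> ('climate_video', 'instagram', 13)
```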

In this case, the feedback loop becomes all the more powerful. Every metric – how much of a video is watched, how many shares a post gets, how many times a link is clicked – feeds back into the system, which then automatically adjusts the messaging and visuals to maximise impact. Political advertising, as a result of constant optimising and hyper-personalisation, may therefore become more impactful in influencing voter decisions than ever before.
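
The optimisation step this paragraph describes is, at its simplest, a multi-armed bandit problem. The sketch below uses Thompson sampling over the same hypothetical ad variants as the previous example, with simulated click-through rates standing in for real engagement data; nothing here reflects an actual campaign system.

```python
# Illustrative feedback loop as a Thompson-sampling bandit: each ad variant
# keeps a Beta posterior over its click-through rate, and every impression
# and click updates it. Variant names and "true" rates are simulated.
import random

class AdVariant:
    def __init__(self, name: str):
        self.name = name
        self.clicks = 0       # successes
        self.impressions = 0  # trials

    def sample_ctr(self) -> float:
        # Draw from Beta(clicks + 1, non-clicks + 1); uncertainty shrinks with data.
        return random.betavariate(self.clicks + 1, self.impressions - self.clicks + 1)

variants = [AdVariant("climate_video"), AdVariant("crime_message"), AdVariant("tax_graphic")]
true_ctr = {"climate_video": 0.04, "crime_message": 0.07, "tax_graphic": 0.02}  # hidden, simulated

for _ in range(10_000):
    chosen = max(variants, key=lambda v: v.sample_ctr())  # explore/exploit in one step
    clicked = random.random() < true_ctr[chosen.name]      # simulated user response
    chosen.impressions += 1
    chosen.clicks += int(clicked)

for v in variants:
    print(f"{v.name}: {v.impressions} impressions, observed CTR {v.clicks / max(v.impressions, 1):.3f}")
```

In a live deployment the “reward” could be any of the metrics mentioned above – watch time, shares, clicks – and the posterior updates would arrive continuously rather than in a simulated loop.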

Conclusion: The Next Era of Campaign Innovation or a Danger to Democracy?

Having detailed the potential ways in which AI may make campaigns more effective, it is worth considering how it may also damage democracy and have other unintended consequences.

It is hard to imagine campaign managers resisting the urge to leverage AI to extract as much opposition research on their candidate’s opponent as possible. It is equally unlikely that the use of deepfakes to discredit opponents will stop – if anything, as such content becomes more believable, we should expect it to proliferate. As a result, politics may become far more negative and inundated with scandal; alternatively, scandals, whether real or imagined, may come to mean very little to the average voter, and candidates with far worse skeletons in the closet may emerge emboldened and better able to succeed in the political arena.

Similar risks may arise from AI-driven synthetic focus groups and sentiment analysis. By relying on social media data, these models may amplify existing biases, overlook underrepresented populations without a strong online presence, or misinterpret sarcasm and cultural nuance. Traditional methods may take longer and cost more, but they carry a lower risk of a negative comment being read as a positive one, and vice versa. Additionally, rapid, automated feedback loops can lead campaigns to optimise purely for short-term improvements in sentiment, producing far more divisive, irresponsible, and polarising messaging.

Finally, if AI is given the reins prematurely, or without the necessary safeguards built in, then politicians, including world leaders, risk effectively ceding control over their own voices and narratives. This could have disastrous consequences. Candidates may end up promising things they are unable or unwilling to deliver, or at least doing so more often than they do currently. Alternatively, they may unknowingly give highly contradictory messages to different audiences. If an AI decides it is optimal to tell one group of voters that taxes will be increased while telling another that they will be cut, and this becomes widely known, what happens to that candidate’s credibility? What happens if an AI decides that the best way to mobilise voters from one ethnic group is to promise the genocide of another?

The role of AI in communications and campaigns is only just beginning – and while it will undoubtedly produce some remarkable innovations that significantly enhance campaigns’ effectiveness and provide great advantages to early adopters, it may also add new and severe challenges to democratic systems at a time when they are already under strain.
