Digital proxies: a possible future of AI, digital media and influence
Last week, OpenAI’s threat report showed how nefarious actors use its platform. Interestingly, many of the top use cases boil down to using ChatGPT for influence operations, where threat actors generate massive volumes of social media comments designed to pass as organic conversation on topics like policy decisions (for and against), video games, the tariffs, USAID, and more.
I’m on the record saying that this is one future for AI in social media and content: making massive amounts of content, cheaply, that most people don’t find trustworthy, engaging, or authentic, and that therefore performs poorly. Such content still has utility: it lowers the barrier to entry for advertisers to reach audiences cheaply. This is fine, assuming that platforms figure out how to perform algorithmic and topical content moderation (and advertising) in a world where content volume and share of voice become increasingly unreliable metrics. (It's still way too easy to game trending topics using this strategy, a reality that has been and will continue to be played for political gain: use AI to generate massive content volume, the topic trends, and a political leader can say, "look, it's trending, it's real.") This authenticity and trust gap in AI-driven content is why I've largely not been worried about AI influencers "replacing" human content creators any time soon.
On the flip side, I see a thought-provoking outcome based on a pathway Ben Thompson has laid out for ChatGPT to become a person's "everything" app: one that links ubiquitous memory, plus plugins and connections to all your services, and becomes the only companion that can possibly match you across all the contexts of your life, work and personal alike.
In a world where the ChatGPT memory of Andy is the closest proxy to me across all my contexts - an expert on my thoughts, opinions, tastes, writing style, and so on - what stops OpenAI from releasing avatar capabilities that allow me to deploy this AI Proxy Andy on other digital services?
In fact, this kind of use is already being tested. Literally yesterday, from The Information:
CEOs of some firms are testing ways that AI can sub in for them. They're trying chatbots and video avatars to answer employee questions, schmooze with customers and talk during earnings calls. For instance, French footwear company Salomon recently built an AI chatbot to act as a copy of its CEO Guillaume Meyzenq, according to Jean Yves Couput, senior adviser to Meyzenq at Salomon. The text chatbot, which was trained on internal memos, strategy documents, and certain emails that Meyzenq wrote in recent months—as well as his media interviews and other public appearances—aims to answer employees’ questions about the company’s culture, mission and strategy, Couput said. “It’s common for people across the company to have questions for the CEO, but he doesn’t have time to answer them all,” Couput said. “Now, they can ask his ‘digital brain’ and get an answer immediately.” The chatbot has also served as a resource for Meyzenq’s direct reports. For instance, Salomon’s public relations staff has used the chatbot to prepare Meyzenq for media appearances by brainstorming potential interview questions and prompting the AI copy of Meyzenq to answer them, he said.
It's not a huge leap to see versions of this kind of software deployed externally: commenting on social media, responding to emails, placing phone calls, and who knows what else.
Many people - particularly CEOs, "thought leaders" and thinkfluencer types - would obviously pay top dollar to flood the zone with AI Proxy versions of themselves across the digital media landscape, making them nearly omnipresent across as many platforms and digital spaces as will allow it. People could distribute themselves in targeted ways: say, a version of themselves on Instagram whose purpose is to promote a certain product, or one that comments on every LinkedIn post mentioning a certain hashtag, or one that replies to every post from a journalist or a competitor with a certain set of talking points. You could imagine many productized versions of such omnipresence that monetize the AI Proxy's likeness, complete high-value marketing and sales tasks, or just generally manage and curate an online presence or steer conversation in some way.
Such a feature would surely be available only to the highest-paying users, perhaps with higher-priced subscription tiers unlocking larger deployment volumes of AI Proxies, not unlike how most AI services price their offerings today.
Here, we see two different futures for how AI could intersect with influence and digital media. On one end, AI significantly lowers the bar for anyone to create massive amounts of content of middling originality and creativity, relatively easy to distribute cheaply and at scale; on the other, a (likely) ultra-expensive service allows the distribution of a single person's IP across the digital landscape.
At first, I reflexively decided that the content from such a digital avatar would still "perform poorly," just like the cheap AI-generated slop content we discussed at the beginning, and for the same reasons - AI-generated content still isn't considered trustworthy by most people, and digital avatars of notable personas would almost certainly require a disclosure of some kind.
But thinking about it more, that view is too narrow. From an advertising, media and IP standpoint, maybe there's something here that's novel and exciting; Fortnite's Speak With Darth Vader - powered by Gemini and ElevenLabs - seems to be a smashing success, and I imagine we'll see more likeness-powered experiences soon. An AI-powered NPC in, say, Grand Theft Auto 6 feels realistic. How cool would it be to run a mission for Tony Montana or Vito Corleone? Perhaps, like all advertising, it comes down to creativity. I've said before that the platforms probably know that stuffing their users' feeds with crappy ads will cause them to leave, and an AI-generated Shaquille O'Neal commenting on everyone's posts about why they need The General or Icy Hot would probably fall into that camp. It's the version of this future that allows thinkfluencers to propagate themselves all over the internet that leaves me with a sour taste.
This all relates to three meta-topics I keep coming back to lately.
The first pops up whenever I investigate the downstream effects of innovative technologies: the same tech that lowers the barrier for many people to do something well and cheaply also, in time, helps the highest payers reinforce their advantages. We do not need AI Proxies of middling thinkfluencers competing in people's social media feeds, though I worry such a product will arrive at some point.
It's impossible to label such technologies as fully "good" or "bad". AI stands to do massive good for digital media, creativity, and the future of agencies; it will also destroy many jobs and accelerate the creation and distribution of bad content and, worse, malicious attacks.
I decided long ago - and have written hundreds of pages about it for a forthcoming book - that politics flows downstream from culture, but that culture flows downstream from technology. It's why we as a society feel an air of helplessness about all this: technology's curse is that humans, despite inventing it, simultaneously fail to understand it and distribute it immediately. This has been true since fire. What can we as a culture, and, worse, as a society run by a government, do to steer our species as our inventions exponentially exert downstream impacts on us? Neither "stop inventing" nor "stop distributing the inventions" is viable; what, then? Answering this question occupies much of my brainspace lately. (A morose end for a post that also featured AI Proxies of Shaq hawking Icy Hot; at least, at the end, we'll all be insured by The General?)