A possible future of AI, digital media and influence
Last week, OpenAI’s threat report showed how nefarious actors use its platform. Interestingly, many of the top use cases boil down to using ChatGPT for influence operations, where threat actors generate massive volumes of social media comments designed to look like organic conversation on topics like policy decisions (for and against), video games, the tariffs, USAID, and more.
I’m on the record saying that this is one future for AI in social media and content: massive amounts of content, made cheaply, that most people don’t find trustworthy, engaging, or authentic, and that therefore performs poorly. Such content still has utility, giving advertisers a lower barrier to entry. That’s fine, assuming platforms figure out how to handle algorithmic and topical content moderation (and advertising) in a world where content volume and share of voice become increasingly unreliable metrics. It’s still way too easy to game trending topics with this strategy, a reality that has been and will continue to be exploited for political gain: use AI to generate massive content volume, the topic trends, and a political leader can say, “look, it’s trending, it’s real.” This authenticity and trust gap in AI-driven content is why, so far, I’ve largely not been worried about AI influencers "replacing" human content creators any time soon.
On the flip side, I see a thought-provoking outcome based on a pathway Ben Thompson has laid out for ChatGPT to become a person’s “everything” app: one where it links ubiquitous memory, plus plugins and attachments to all your services, and becomes the only companion that can possibly match you across all your life contexts, work and personal alike.
In a world where ChatGPT’s memory of Andy is the closest proxy to duplicating me across all my contexts, and it’s also an expert on my thoughts, opinions, tastes, writing style, etc., what stops OpenAI from releasing avatar capabilities that let me deploy this AI Proxy Andy on other digital services?
In that world, sure, AI Proxy Andy can join multiple Zoom calls (and send me back the summaries, thanks to ChatGPT’s new Zoom recording and note-taking features), but I don't think it's a large leap to versions that can comment on social media, write emails, take many of my phone calls, and who knows what else. It's an entirely different kind of AI-driven influencer: rather than legions of content from throwaway profiles, it's a little - or perhaps a lot - of content from the AI Proxies of notable profiles.
Such a feature would surely be available only to the highest-paying users, perhaps even offering larger deployment volumes of AI Proxy Andys at higher-priced subscription tiers, not unlike how most AI services price their offerings today.
Just like the mass-deployment-of-cheap-content strategy, the mass deployment of a functional digital avatar of a single person has obvious benefits and obvious downsides and risks. Sure, I can imagine the time savings of having a version of me handle certain tasks, like the 30 minutes I spent yesterday trying to cancel my internet service. Zoom’s CEO and others have played up the savings and benefits of digital avatars that attend meetings I can’t, hopefully in a passive manner.
Again, though, especially in the short term, I don’t expect the content and engagement from such a digital avatar to be seen as highly trustworthy or authentic, meaning its “performance” - relative to me, anyway - would be poor. That poor performance doesn’t even have much to do with its quality or its ability to mirror me: it’s due to the fact that AI-generated content still isn’t considered trustworthy by most people. (I can’t wait to see how people’s trust in AI is affected by AI avatars going on long AI-generated rants in Zoom meetings, though for some people, a tech enthusiast who talks too much and repeats talking points may be highly realistic!)
Bigger picture, however, should this kind of AI deployment come to pass, it’s nearly impossible to imagine that many people wouldn’t pay top dollar to flood the zone with AI Proxy versions of themselves across the digital media landscape, making themselves nearly omnipresent across as many platforms and digital spaces as will allow it. So, while one form of AI-generated content can game algorithms through cheap content distribution, well-trained and context-rich digital avatars of people will be able to distribute *themselves* to no end - perhaps to, say, deploy a version of themselves on Instagram whose purpose is to promote a certain product, or one that comments on every LinkedIn post mentioning a certain hashtag, or one that replies to every post from a journalist or a competitor with certain talking points. You could imagine many productized versions of such omnipresence that monetize the AI Proxy’s likeness, complete high-value marketing and sales tasks, or simply manage and curate an online presence or steer conversations in some way.
The more I research the downstream impacts of innovative technologies, the more I see that two outcomes tend to hold for each: while a technology lowers the barrier for many people to do something well and cheaply, it also, in time, helps the highest payers reinforce their advantages.