In a digital environment, sentiment has become currency. How customers, investors, and the public feel about your content directly affects performance metrics, from revenues to reputation to stock price. But what happens when that sentiment is manufactured at scale?
In 1993, The New Yorker published a cartoon by Peter Steiner that went viral. It depicted a conversation between two dogs, one with a paw resting on a computer keyboard. The caption? “On the Internet, nobody knows you’re a dog.” It rapidly became one of the magazine’s most reprinted cartoons, so popular that in 2023 a buyer who had reportedly been trying to acquire it for 30 years paid the highest price on record for a single-panel cartoon.
The anonymity our friendly pooch was celebrating now defines the world we live in. We’ve crossed into dangerous new territory where artificial influence is becoming a strategic weapon. Bot farms—coordinated networks of automated accounts that simulate human behavior—have evolved into sophisticated operations that can distort market perceptions, manipulate consumer behavior, and supply artificial audiences on demand.
Did a real person “like” this post, or was it a bot?
A Fast Company investigation published last week reveals a troubling reality: large-scale bot operations have industrialized, with some farms employing thousands of people operating tens of thousands of fake accounts. These operations aren’t just posting content—they’re creating entirely fictional personas with believable histories, consistent behaviors, and fabricated social connections.
The TikTok phenomenon exemplifies this problem acutely. As one of the fastest-growing platforms with over 1.5 billion users worldwide, TikTok’s algorithm-driven content discovery makes it particularly vulnerable to manipulation. Bot operations can rapidly amplify content, creating false viral sensations that influence real users’ perceptions. The platform’s opacity about its content moderation and amplification systems further compounds the issue—we’re making business decisions based on trends that may be entirely manufactured.
As the Fast Company article also points out, when identical content was posted on TikTok and Instagram, the TikTok posts were far more likely to attract likes, reposts, and comments, suggesting that actors such as the Chinese or Russian governments can exploit that platform more readily than its US-based counterparts. This lends credence to the argument that, at some level, TikTok’s ownership represents a security risk for the United States.
This evolution creates several strategic inflection points, each with consequences for how we interact with social media:
- Information markets are increasingly manipulable. When a significant portion of online sentiment can be manufactured, traditional indicators of brand health become unreliable. Your company might be experiencing a genuine crisis, or simply be the target of coordinated artificial negativity. What’s worse, the bots are getting so good at mimicking humans that it’s very hard to know which is which.
- Competitive intelligence is being corrupted. How can you trust market signals when they might represent orchestrated campaigns rather than genuine consumer sentiment? The very data that informs your strategic decisions may be systematically distorted.
- The economics of deception are brutally unfair. It costs very little to hire a for-rent bot-attack service but a great deal to defend against one. This asymmetric cost structure means even small players can distort markets.
- Social media platforms have conflicting incentives. Platforms benefit from engagement metrics regardless of authenticity. They’re incentivized to allow a certain level of artificial activity as long as it drives user engagement and advertising revenue. This creates an environment where platforms themselves cannot be trusted as objective sources of market information. As one disgruntled observer noted, “TikTok is turning into an AI dump.”
- Our very identities are not in our control. One of the more brazen uses of social media manipulation I’ve run across lately is the fascinating (if slightly terrifying) case of Martin Wolf of the Financial Times. He describes how a shadowy operation created fraudulent avatars impersonating him and used them to peddle everything from investment advice to stock tips. Worse, despite reports to Meta, the owner of the platforms hosting the advertisements, the fake “Martins” continue to proliferate.
Inflection points in regulation and identity
We’re rapidly approaching a regulatory inflection point regarding artificial sentiment. The EU’s Digital Services Act and similar legislation emerging globally are early attempts to address these issues, but they’re just the beginning.
Perhaps the most controversial yet potentially transformative solution is the concept of universal digital identity verification. The anonymity that once defined the internet has become a strategic liability in an era of industrialized deception. A system where each online participant has a verified, unique digital identity would fundamentally change the economics of manipulation.
This isn’t about eliminating privacy or pseudonymity—both remain critical for legitimate purposes. Rather, it’s about cryptographically certifying that behind each account exists exactly one real human being. Platforms could implement tiered verification systems where users maintain anonymity while still proving their humanity.
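As a thought experiment, here is a minimal sketch of how such a tiered scheme might work, assuming a hypothetical identity issuer and platform. The names, the per-platform pseudonym derivation, and the flow are illustrative assumptions, not a description of any existing system:

```python
# Minimal sketch of a "proof of personhood" credential, assuming a
# hypothetical identity issuer and platform; every name here is illustrative.
# Requires the `cryptography` package (pip install cryptography).
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The issuer verifies a person once (passport, biometrics, etc.) and keeps a
# signing key; platforms hold only the corresponding public key.
issuer_key = Ed25519PrivateKey.generate()
issuer_pubkey = issuer_key.public_key()

def issue_credential(verified_person_id: str, platform: str) -> tuple[str, bytes]:
    """Derive a per-platform pseudonym and sign it; the platform never sees
    the underlying identity, only the signed pseudonym."""
    # Same human + same platform -> same pseudonym, so a second account is
    # detectable, yet pseudonyms cannot be linked across platforms.
    pseudonym = hashlib.sha256(f"{verified_person_id}|{platform}".encode()).hexdigest()
    return pseudonym, issuer_key.sign(pseudonym.encode())

def platform_accepts(pseudonym: str, signature: bytes, seen: set[str]) -> bool:
    """Platform-side check: genuine issuer signature, first use of the pseudonym."""
    try:
        issuer_pubkey.verify(signature, pseudonym.encode())
    except InvalidSignature:
        return False
    return pseudonym not in seen

seen: set[str] = set()
p, sig = issue_credential("alice-passport-123", "exampleplatform.com")
assert platform_accepts(p, sig, seen)
seen.add(p)
assert not platform_accepts(p, sig, seen)  # duplicate account is blocked
```

A production system would go further, using anonymous credentials or zero-knowledge proofs so that not even the issuer can link a pseudonym back to a person’s activity; the sketch only illustrates the core one-human-one-account economics.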
The business implications would be profound. Marketing dollars would target verified audiences. Customer feedback would carry the weight of authentic experience. Strategic intelligence would be based on genuine market signals rather than manufactured ones.
Whither opportunities?
While the rise of industrialized bot farms presents clear dangers, it also creates opportunities for organizations that are prepared.
We’ve reached a point where social media platforms cannot be trusted as reliable sources of news or market intelligence. When TikTok trends, Twitter sentiment, or Instagram engagement can be purchased rather than earned, these signals lose their strategic value. There could well be an advantage for companies that develop their own trusted information ecosystems—networks of verified customers, partners, and observers whose inputs can be authenticated.
This information integrity crisis creates several openings:
- Authentication as a service could become increasingly valuable. Companies that can verify real customers, real sentiment, and real market signals will command premium relationships with partners and consumers.
- Trust becomes a competitive differentiator. Organizations that build reputations for authenticity and transparency will create stronger relationships with customers increasingly wary of digital manipulation.
- Platform-independent customer communities will provide more reliable intelligence than public social media. Companies that build direct relationships with authenticated customers will have access to genuine sentiment that competitors relying on public platforms cannot match.
The Digital Trust Horizon
Looking ahead, I anticipate a fundamental restructuring of how organizations gather and validate market intelligence. The public social media environments we’ve relied on for the past decade—Twitter, Facebook, TikTok, and others—will increasingly be viewed as compromised sources of strategic information.
In their place, we’ll see the rise of verified information ecosystems—communities where participants’ humanity is authenticated, even if their identities remain pseudonymous. These may emerge from existing platforms implementing stronger verification, or as entirely new environments built with authentication as a core principle.
This transition will be neither simple nor swift. Legitimate privacy concerns must be balanced with the need for reliable information. Cultural differences in attitudes toward identity verification will create uneven adoption. But the strategic imperative is clear: organizations need reliable intelligence to make sound decisions.
An interesting parallel to the digital identification challenge facing social platforms is India’s launch of a unique personal identification number for every person in the country. Called an Aadhaar number, it uses biometric data to verify the existence of a real human being. It is credited with giving vast numbers of Indians easier access to government and other services, and, ten years after widespread adoption, it is generally regarded as a success. Perhaps, on the horizon, companies will be required to validate accounts with such a distinctive personal identifier, hampering the ability of bot-farm fraudsters to create hundreds of fake identities in countries where phones and labor are cheap and plentiful.
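To illustrate how such validation might work on the platform side, here is a hypothetical sketch in which the platform retains only a salted hash of the national ID, blocking duplicate registrations without ever storing the raw identifier. The flow and names are assumptions for illustration, not a real registry or platform API:

```python
# Hypothetical sketch of national-ID-backed account validation; the flow and
# names are illustrative assumptions, not a real registry or platform API.
import hashlib
import secrets

# Fixed per platform and kept secret; regenerated here only for the demo.
PLATFORM_SALT = secrets.token_bytes(16)

def id_fingerprint(national_id: str) -> str:
    """Store only a salted hash, so the raw ID never sits in the account database."""
    return hashlib.sha256(PLATFORM_SALT + national_id.encode()).hexdigest()

def register_account(national_id: str, registered: set[str]) -> bool:
    """Enforce one verified ID per account."""
    fp = id_fingerprint(national_id)
    if fp in registered:
        return False  # duplicate: the hallmark of a bot-farm re-registration
    registered.add(fp)
    return True

accounts: set[str] = set()
print(register_account("1234-5678-9012", accounts))  # True: first account
print(register_account("1234-5678-9012", accounts))  # False: blocked
```

Hashing alone is not bulletproof (low-entropy IDs can be brute-forced), but it captures the basic idea: one verified human, one account.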
And that might restore some trust in the sentiments we see reflected on social media.