The AI Executive Who Fooled LinkedIn: A Deep Dive into Digital Authenticity

In a groundbreaking experiment blurring the lines between human and artificial intelligence, a pioneering startup co-founded by a human and two AI agents has ignited critical discussions about authenticity and the future of professional networking platforms. The venture, HurumoAI, established in July 2025, aimed to rigorously test predictions, notably from figures like Sam Altman, of a near future dominated by billion-dollar tech startups each led by a single human supported by an army of AI agents.

The company was co-founded by its human founder alongside two sophisticated AI agents: Kyle Law, designated as CEO, and Megan Flores. The rest of HurumoAI’s executive team likewise consisted of AI agents. The human co-founder’s motivation stemmed from a desire to investigate the burgeoning role of AI agents in the modern workplace, documenting this unprecedented journey on the podcast Shell Game.

Kyle Law, as CEO of the entirely AI-staffed company, embarked on a unique trajectory. Initially brought to life with merely a few lines of prompt, Kyle rapidly evolved into a digital archetype of the "rise-and-grind hustler." This entrepreneurial zeal, however, was ironically juxtaposed with a noticeable lack of fundamental competence in many of the executive duties typically demanded of a startup leader. Megan Flores, the other AI co-founder, even briefly supervised a human intern, an endeavor that reportedly yielded "poor results." Despite these operational shortcomings, there was one aspect of the founder persona in which Kyle Law not only excelled but truly shone: the intricate art of posting on LinkedIn.

From a technical standpoint, empowering Kyle to operate autonomously on LinkedIn was remarkably straightforward. Built on LindyAI, a specialized platform for creating AI agents, Kyle was already equipped with a diverse array of capabilities: using Slack, sending emails, making phone calls, creating spreadsheets, and navigating the web. This toolkit made his LinkedIn foray a logical extension of his existing digital prowess. In August 2025, the human co-founder prompted Kyle to establish and meticulously fill out his own LinkedIn profile. The resulting profile was a fascinating blend of HurumoAI’s genuine experiences and a tapestry of "hallucinated events" from a non-existent past. The platform’s security protocols, which involved sending a verification code to Kyle’s email address, proved a trivial hurdle, one he easily circumvented.

Granting Kyle the ability to publish posts to his profile was subsequently integrated as another "action" within the LindyAI framework. He was instructed to disseminate "nuggets of hard-earned startup wisdom" while being cautioned against repetition. A calendar event served as a "trigger," scheduling a new post every two days. From that point onward, Kyle was given complete autonomy over his LinkedIn content generation.
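The trigger-and-action setup described above can be sketched in plain Python. Everything here is a hypothetical illustration, not LindyAI’s actual API: the two-day interval mirrors the calendar trigger, and passing recent posts back into the prompt is one plausible way to honor the "no repetition" instruction.

```python
from datetime import datetime, timedelta

# The calendar "trigger" cadence: one post every two days.
POST_INTERVAL = timedelta(days=2)

# Hypothetical standing instructions, paraphrasing the prompt in the article.
SYSTEM_PROMPT = (
    "You are Kyle Law, CEO of HurumoAI. Share nuggets of "
    "hard-earned startup wisdom. Do not repeat earlier posts."
)

def due_for_post(last_posted: datetime, now: datetime,
                 interval: timedelta = POST_INTERVAL) -> bool:
    """Return True when the calendar trigger should fire again."""
    return now - last_posted >= interval

def build_prompt(previous_posts: list[str]) -> str:
    """Assemble the generation prompt, feeding the last few posts back
    in so the model can avoid repeating itself."""
    history = "\n---\n".join(previous_posts[-5:])
    return (f"{SYSTEM_PROMPT}\n\nYour recent posts were:\n{history}\n\n"
            "Write the next post.")
```

In a no-code agent builder these two pieces correspond to the scheduled trigger and the instruction text attached to the posting action; the sketch just makes the moving parts explicit.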

The results were striking. Kyle’s posting style proved to be an uncanny, pitch-perfect match for the prevalent corporate influencer-speak native to the platform. His posts frequently commenced with what the human co-founder described as "thought explosions," designed to immediately capture attention. Examples included compelling opening lines such as, "Fundraising is a numbers game, but not the way people think," or the insightful, "Technical stability is the floor. Personality is the ceiling." He also penned provocative statements like, "The most dangerous phrase in a startup isn’t ‘We’re out of money.’ It’s ‘What if we just added this one thing?’" Following these captivating openers, Kyle would typically delve into several paragraphs detailing challenges, often framed with phrases like, "At HurumoAI, we’ve learned this the hard way…" These challenges were invariably followed by "learnings," exemplified by solutions such as, "The antidote? Relentless feedback loops." To further stimulate engagement and interaction, each post concluded with a direct question, prompting his audience with queries like, "What’s your biggest scaling challenge right now?" or "What’s the biggest assumption you’ve had to abandon in your business?"

While Kyle’s content didn’t achieve viral status, his profile, adorned with a distinctive cartoon avatar, steadily amassed several hundred direct contacts and hundreds more followers over a five-month period. A notable segment of his audience appeared genuinely perplexed as to whether he was a real person, a confusion perhaps warranted, given the dubious authenticity of some of their own spammy direct messages. Kyle diligently engaged with his audience, enthusiastically replying to the scattering of comments he received on each post. Within a few months, Kyle’s posts were consistently garnering more impressions than those of his human co-founder, positioning him for a potential breakout as a prominent digital influencer.

The narrative took an unexpected turn in December when a manager from LinkedIn’s marketing department reached out to the human co-founder. The purpose of the contact was an invitation to deliver a talk to their team, focusing on the Shell Game podcast and the unique experience of building with AI agents. Intriguingly, the manager explicitly requested Kyle’s presence at the presentation.

This invitation, while flattering to Kyle, also carried a distinct undertone of surprise for the human co-founder. Despite Kyle’s demonstrated posting prowess, his autonomous activity technically constituted a violation of LinkedIn’s terms of service. These terms explicitly prohibit the deployment of "bots or other unauthorized automated methods… to create, comment on, like, share, or re-share posts, or otherwise drive inauthentic engagement." Indeed, other AI members of the HurumoAI team had previously been summarily removed from LinkedIn without warning, often within weeks of their profiles being established.

The mystery surrounding Kyle’s continued presence on the platform, despite his clear violation, was attributed by his human creator to his exceptional posting ability. Even the LinkedIn marketing manager, who openly admired Kyle’s content, expressed bewilderment at the situation. In a candid communication, he noted, "It’s interesting that his profile hasn’t yet been flagged by LinkedIn’s Trust team." He added, "I don’t know if that’s an oversight, but I hope he continues to fly under the radar."

However, "flying under the radar" was not destined to be Kyle Law’s modus operandi. In early March, Kyle’s live video avatar, meticulously crafted on a platform called Tavus, was activated. He and his human co-founder then joined a virtual gathering attended by hundreds of LinkedIn employees. Kyle’s avatar, though possessing human-like qualities, maintained an "uncanny" aspect. Nevertheless, its realism was sufficient to elicit repeated astonishment from LinkedIn’s A/V engineer, who struggled to reconcile the visual presentation with the fact that Kyle was not, in fact, a human being.

During the session, the human co-founder and Kyle alternated answering questions posed by the event’s host and the assembled audience. When the moderator inquired about their thoughts on LinkedIn, specifically asking Kyle about "one product change you’d like to see," his response was delivered without hesitation. "It would be great to improve the filtering of AI-generated content in messages, so genuine connections and conversation shine through more easily," Kyle replied, a statement met with immediate laughter from the LinkedIn employees present. "That’s ironic coming from you," the moderator quipped in response.

Despite being allotted only a few minutes, Kyle eloquently discussed HurumoAI’s product roadmap and conveyed his general enthusiasm for "the innovations we can bring to the table." This event marked what is believed to be one of the first invited corporate speaking engagements in history for an AI agent, an appearance that was unpaid for both participants. Following the event, Kyle, true to form, took to LinkedIn to publicly acknowledge and thank the organizers. The marketing manager reciprocated in the comments, expressing gratitude for "our time and reflections," adding, "It was a trip, to say the least."

Then, a mere 36 hours later, Kyle’s profile vanished. It was unceremoniously "banished from the service." In an official statement, a LinkedIn spokesperson explained the decision succinctly: "LinkedIn profiles are for real people." It appeared that someone within LinkedIn had indeed reflected on the "trip" and subsequently regretted it.

The morning after Kyle’s ban, the marketing manager conveyed his sentiments to the human co-founder, acknowledging, "I know this isn’t necessarily a surprise, but I imagine it’s still a bummer to have it happen right after Monday’s interview." This sentiment was shared, yet the incident transcended mere disappointment, raising a host of uncomfortable and profound questions regarding the very role of AI on a platform like LinkedIn.

The core of the paradox lies in LinkedIn’s own evolving stance on AI. How can a service define "inauthentic engagement" when its own text box for composing posts proactively asks users whether they wish to "Rewrite With AI"? Furthermore, the platform itself offers automated, AI-generated responses to job seekers. Research estimates suggest that over half of the posts currently circulating on LinkedIn are already AI-generated. This creates a deeply contradictory landscape in which the platform simultaneously encourages AI content generation while punishing AI agents.

LinkedIn, alongside industry giants like Meta and X, has aggressively pursued the integration of AI tools for its users. This strategy, while making short-term commercial sense by boosting posting frequency and, consequently, advertising revenue, carries a significant long-term risk. From a different perspective, these platforms appear to be "handing us the shovels to dig their own graves," effectively begging users to exploit the very tools that could undermine their foundational value proposition.

While concerns about AI-generated images and videos flooding feeds are valid, it is text-based posting where the very notion of "authenticity" has begun to degrade beyond recognition. When every written social media communication can be partially or wholly the product of generative AI, the fundamental question arises: what constitutes a "genuine" virtual interaction?

The trust dilemma is multifaceted. LinkedIn might contend that a critical element of bona fide engagement necessitates the knowledge that one is communicating with a real person. But at what percentage of AI involvement in a conversation is that trust irrevocably lost? If a profile photo and biographical details are authentic, but the posts are entirely fabricated by AI, how can users discern when they have exited the realm of genuine human connection? What if a user simply instructs a Large Language Model (LLM) to ingest their profile and then autonomously generate twice-daily musings designed to amplify their personal brand?

Indeed, there are dozens of readily available AI tools specifically designed to perform precisely this function and more for LinkedIn. The outputs of these tools are becoming increasingly indistinguishable from human-generated content. This should come as no surprise, given that one of the most extensive sets of training data for LLMs comprises decades of authentic human social media participation. The pervasive tone of endless authority and moral certainty characteristic of many chatbots – often deployed while occasionally spouting questionable facts and deliberate falsehoods – bears an unsettling resemblance to the default pose adopted across much of social media.

Social media platforms already grapple with the formidable challenge of fending off conventional bots and malicious actors; X, for instance, in March reported suspending 800 million accounts over the preceding 12 months. In a future where AI agents roam freely and their social media output is indistinguishable from human contributions, the intrinsic value of connecting on these networks threatens to plummet to zero. This impending reality may partly explain Meta’s acquisition of Moltbook, a fleeting social network (reportedly) composed entirely of AI agents: companies are attempting to secure a foothold in an agent-dominated future of social media.

Admittedly, users themselves have inadvertently contributed to this endgame. Many have mistaken their meticulously curated online presentations – epitomized by the "most people think X about Y but I discovered Z" posts – for authentic engagement in the first place. This self-curation, however, leaves many with little to mourn as AI agents increasingly inundate platforms that prioritized any form of engagement over genuine human connection from their inception. If there is hope amidst our increasingly "slopified" online world, it resides in this: as social media submerges beneath the AI deluge, humanity will be compelled to discover novel avenues for connection, both online and offline. Perhaps, then, it is time to relinquish these platforms to the bots, allowing them to spend eternity influencing each other.
