Margaret Thatcher never said anything particularly horny in her political career, but thanks to Kagi Translate's AI-powered language tool, users can now find out what she might have sounded like if she had. The discovery that Kagi's translation service will attempt to render text in virtually any "language" — from "LinkedIn Speak" to "horny Margaret Thatcher" to "tiny little kitten" — has turned the business tool into an unexpected internet playground.

What started as a straightforward competitor to Google Translate has evolved into something far more entertaining. Kagi launched its translation service in 2024 as a "simply better" alternative to established tools, using what the company called "a combination of LLMs, selecting and optimizing the best output for each task." The tool initially offered 244 traditional languages in simple dropdown menus.

The fun began in February 2025, when a Hacker News user noticed you could manipulate the URL parameters to set unusual target "languages" like "rude man with a Boston accent." The discovery went largely unnoticed until this week, when users realized they could type virtually anything into the translation interface and watch the AI attempt to accommodate their requests.
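The mechanics of the original trick are simple: the target "language" is just a string in the request, so URL-encoding an arbitrary phrase works as well as "French" does. The sketch below illustrates the idea in Python's standard library; the exact parameter names and URL structure Kagi uses are an assumption here, not documented behavior.

```python
from urllib.parse import urlencode

def kagi_translate_url(text: str, target: str) -> str:
    """Build a hypothetical Kagi Translate URL with an arbitrary target.

    The "text"/"target" parameter names and the base URL path are
    illustrative assumptions; only the general technique (putting a
    free-form phrase where a language name is expected) comes from
    the reported discovery.
    """
    params = urlencode({"text": text, "target": target})
    return f"https://translate.kagi.com/?{params}"

# Any phrase can stand in for a language name:
print(kagi_translate_url("We shall not waver.", "rude man with a Boston accent"))
```

Because the service never validates the target against a fixed language list, the model simply does its best with whatever string arrives.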

Remember when it was fun to play around with LLMs?

The floodgates opened Tuesday morning after a Hacker News post highlighted the tool's ability to generate "LinkedIn Speak." Users quickly discovered they could request translations into the voices of specific figures — Carl Sagan explaining quantum physics, Werner Herzog narrating a grocery list, or Morgan Freeman delivering a weather report. Others went delightfully absurd, asking for translations into programming languages or the perspective of household pets.

Kagi's social media team leaned into the phenomenon, encouraging users to try the LinkedIn translator to "fit right into that crowd." The company has featured examples like Reddit-style commentary and McKinsey consultant jargon on its official accounts.

What makes this work

The underlying large language model treats "translation" as pattern matching and synthesis rather than strict linguistic conversion. To the AI, mimicking Margaret Thatcher's speaking style is functionally similar to converting English to French — both involve recognizing and reproducing specific communication patterns.

The viral moment represents a return to the early days of consumer AI tools, when ChatGPT amazed users with party tricks like writing Shakespearean sonnets about debugging code or imitating specific writing styles. Before AI became the foundation of trillion-dollar market valuations and existential hand-wringing about job displacement, these tools felt magical precisely because they were playful.

"In a way, this all feels like a return to the early days of ChatGPT, when people still marveled at the seemingly miraculous capabilities of these new transformer-based text engines," noted Ars Technica's coverage of the phenomenon. The discovery brings back that sense of wonder about what these systems can actually do when freed from corporate guardrails and productivity use cases.

Popular Translation Targets
  • LinkedIn professional speak and corporate jargon
  • Celebrity impersonations (Carl Sagan, Werner Herzog, Morgan Freeman)
  • Internet culture languages (Reddit speak, Gen Z slang)
  • Absurdist requests (tiny kitten, programming languages)
  • Political and historical figures with distinctive voices

Unlike AI applications that promise to revolutionize entire industries or replace human workers, Kagi Translate's creative misuse feels refreshingly low-stakes. Nobody's going to mistake a Margaret Thatcher impression generator for an all-knowing oracle or use it to make critical business decisions. It's simply a demonstration of how pattern recognition systems can synthesize language in unexpected ways.

The phenomenon also highlights the inherent flexibility of large language models. What seems like a specialized translation tool is really just another interface to the same underlying technology that powers ChatGPT, Claude, and other conversational AI systems. The "translation" framework simply provides a different lens for accessing the model's ability to analyze and reproduce communication patterns.
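To make that concrete: stripped of its interface, a "translation" request to an LLM is just a prompt in which the target happens to be a language name. The minimal sketch below shows the idea; Kagi's actual prompts and model routing are not public, so the wording here is purely illustrative.

```python
def build_translation_prompt(text: str, target: str) -> str:
    """Illustrative only: to the model, "French" and "Werner Herzog
    narration" occupy the same slot in the prompt, which is why a
    translation tool doubles as a style imitator."""
    return (
        f"Translate the following text into {target}. "
        f"Preserve the meaning, but fully adopt the target's style.\n\n"
        f"{text}"
    )

print(build_translation_prompt("Milk, eggs, bread.", "Werner Herzog narrating"))
```

Nothing in the prompt distinguishes a real language from a persona, so the model's pattern-reproduction machinery handles both the same way.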


Of course, this kind of open-ended AI playground isn't without risks. Users have discovered they can prompt the system to generate offensive content by requesting translations into problematic personas. As one demonstration showed, asking Kagi Translate to emulate "someone who keeps saying slurs" produces exactly the kind of output that most companies work hard to prevent their AI systems from generating.

The discovery raises questions about content moderation for AI tools that weren't explicitly designed as creative writing platforms. Traditional translation services focus on accuracy and linguistic correctness rather than creative expression, which may explain why Kagi's safeguards don't anticipate requests for "horny Margaret Thatcher" translations.

For now, though, the internet seems content to explore the tool's creative possibilities rather than push its offensive boundaries. The viral moment offers a brief respite from AI anxiety — a reminder that these systems can be genuinely entertaining when they're not being positioned as civilization-ending superintelligence or job-killing automation.

It's just a fun toy that lets the internet at large play with language in a way that would have been practically unthinkable just five years ago.

Kagi's unexpected viral moment also demonstrates how AI capabilities can emerge organically from user experimentation rather than corporate product development. The company built a translation tool; users turned it into a style-imitation playground. The gap between intended and actual use cases has created something more interesting than either the original product or a purpose-built celebrity impersonation generator.

Kagi faces a choice: embrace this creative chaos or restrict it through content filters. The company's social media engagement suggests it's enjoying the attention, but sustained viral misuse of a commercial product rarely ends without some form of corporate response. For now, users continue discovering new "languages" to explore, from historical figures to internet subcultures to increasingly abstract concepts. In an AI landscape dominated by productivity tools and existential concerns, sometimes the most valuable application is simply having fun with language.