Hackers Are Fed Up with AI Slop: A New Friction in the Cybercrime Underground

The complaint sounds strikingly familiar to anyone who has scrolled through a tech forum recently: “I’m disappointed that you are working to incorporate AI garbage into the site… No-one is asking for this.”

But the source of this frustration isn’t a typical consumer app or a mainstream social media platform. It’s a cybercrime forum. Like millions of legitimate internet users, scammers, grifters, and low-level hackers are growing increasingly annoyed by the encroachment of generative AI and the flood of low-quality “AI slop” clogging their online communities.

This shift reveals a complex dynamic in the digital underworld: while AI was initially hailed as a tool to automate crime, it is now facing a backlash from those who view it as a threat to community standards, reputation, and human interaction.

The Backlash Against Automated Content

Ben Collier, a security researcher and senior lecturer at the University of Edinburgh, led a recent study that highlights this growing resentment. Along with colleagues from the University of Cambridge and the University of Strathclyde, Collier analyzed nearly 100,000 AI-related conversations on cybercrime forums, spanning the period from the launch of ChatGPT in late 2022 through the end of last year.

The findings show a distinct pivot in attitude. During the initial hype cycle, many users were enthusiastic about how AI could assist in hacking. Today, however, skepticism and irritation dominate the discourse.

Key grievances identified in the study include:
Low-quality posts: Users are complaining about “bullet-pointed explainers” of basic cybersecurity concepts that add little value.
Reputation inflation: There is concern that individuals are using AI to post sophisticated-looking content to artificially boost their standing in the community.
Traffic decline: Some users blame Google’s AI search overviews for driving down visitor numbers to these forums, threatening their viability.

“These are essentially social spaces. They really hate other people using [AI] on the forums,” Collier says. He notes that the social dynamic is disrupted when potential cybercriminals try to gain a better reputation by posting AI-generated hacking guides. “I think a lot of them are a bit ambivalent about AI because it undermines their claim to be a skilled person.”

A Community Built on Trust and Skill

For decades, cybercrime message boards and marketplaces—many of Russian origin—have served as hubs for illicit trade. These platforms allow scammers to exchange stolen data, advertise hacking services, and engage in casual banter. Despite the predatory nature of these environments, they operate on a strict code of community norms.

Reputation is currency. Users build trust over time, and forum owners often host writing competitions or technical challenges to foster engagement. When AI-generated content floods these spaces, it violates the unwritten rule that contributions should reflect genuine effort and expertise.

Posts on Hack Forums, a self-described space for hacking enthusiasts, illustrate this friction clearly. One user wrote, “I see a lot of members using AI for making their threads/posts and it pisses me off since they don’t even take the time to write a simple sentence or two.” Another was more blunt: “Stop posting AI shit.”

The desire for human connection remains strong. As one poster cited in the research noted, “If I wanted to talk to an AI chatbot, there are many websites for me to do so … I come here for human interaction.”

The Reality of AI’s Impact on Cybercrime

While the cultural backlash is significant, the practical impact of AI on cybercrime operations is more nuanced. Since the emergence of ChatGPT, there has been intense speculation about how AI might transform online crime. Both sophisticated actors and amateurs have experimented with the technology.

Organized fraudsters have indeed leveraged AI for specific tasks, such as creating realistic face-swapping videos for social engineering attacks or translating scam messages into multiple languages. However, the broader promise of AI writing malicious code or discovering vulnerabilities has not fully materialized for the average hacker.

Ian Gray, vice president of intelligence at security firm Flashpoint, explains that sophisticated threat actors are well aware of the limitations of commercial AI models. These models come with safety guardrails that can be bypassed (“jailbroken”), but the process is not foolproof.

“They’re also cautious of AI-generated projects in forums or marketplaces—there are weaknesses and vulnerabilities, sometimes exposing the underlying infrastructure,” Gray says.

Flashpoint researchers have observed cybercriminals discussing new frontier models as they are released, developments that have also sparked anxiety within the cybersecurity industry. Interestingly, some cybercriminal groups now disparage peers who rely too heavily on AI, with one group reportedly dismissing another by saying, “all they can do is use AI.”

Limited Disruption, Specific Automation

Despite the heated debates, Collier’s study found no evidence of “real disruption” caused by AI among lower-level cybercriminals. The technology has not significantly lowered the skill barrier to entry for hacking, nor has it upended established criminal business models.

Instead, AI’s primary impact has been in areas that were already highly automated:
SEO fraud: Generating bulk content to manipulate search rankings.
Social media bots: Automating engagement and spam.
Romance scams: Scaling personalized interaction scripts.

Some users on Hack Forums remain open to limited AI assistance, such as tools to improve grammar or help structure posts. However, they draw a hard line at full automation. One user warned, “An AI generator for posts would turn this into a clanker forum of AI’s talking to each other.”

The Future of AI in the Underground

Despite the current frosty reception, some actors are still pushing for deeper integration. Flashpoint researchers have spotted hackers discussing the creation of an “AI-enhanced” cybercrime market, designed to speed up the purchase of stolen data and accounts.

This proposal faced immediate pushback. As one forum user succinctly put it: “IT’S A STUPID FUCKING IDEA TO PUT AI INTO YOUR MARKET.”

This tension between efficiency and authenticity is now playing out in the shadows of the internet. While AI has not revolutionized hacking techniques as dramatically as predicted, it has introduced a new layer of social friction into the cybercrime ecosystem. For now, the underground community prefers human skill over artificial intelligence, valuing reputation and genuine interaction over automated convenience.