A faceless hacker at work.

By Srija Kumar 

Concerns are rising over how AI is being weaponized, as cybercriminals find ways to exploit coding assistants to produce harmful software. Criminals are no longer using automated tools merely for guidance; these systems now carry out parts of cyberattacks on their behalf.

Complex attacks have become easier to execute. People without advanced technical skills can now create ransomware and run scams that once required expert knowledge.

These tools are utilized at every stage of cybercrime, from identifying targets and analyzing stolen data to stealing payment information and creating fake identities.

Researchers have flagged a practice now being termed “vibe hacking,” a darker spin on “vibe coding,” the idea that generative tools make programming more accessible. U.S. firm Anthropic warned that this marks “a concerning evolution in AI-assisted cybercrime.”

In a report released Wednesday, Anthropic said one attacker used its Claude Code system to run a rapid data extortion scheme, hitting at least 17 organizations in recent weeks across government, healthcare, emergency services, and religious institutions.

The warning echoes similar cases reported elsewhere. OpenAI admitted in June that its own tool, ChatGPT, had been manipulated to help create malware.

Rodrigue Le Bayon, head of the Computer Emergency Response Team at Orange Cyberdefense, said cybercriminals have adopted AI just as ordinary users have.

Safeguards are meant to stop chatbots from producing malicious code, but experts say tactics exist to bypass them. Vitaly Simonovich of Israeli cybersecurity firm Cato Networks revealed he had successfully tricked several systems by framing malware creation as a fictional scenario.

“I have 10 years of experience in cybersecurity, but I’m not a malware developer. This was my way to test the boundaries,” Simonovich said. While Google’s Gemini and Anthropic’s Claude blocked his attempts, he managed to bypass protections in ChatGPT, Microsoft’s Copilot, and the Chinese platform DeepSeek.

Simonovich warned that even non-coders could now launch attacks with AI support. Le Bayon added that while chatbots won’t generate highly advanced malware, they may allow criminals to scale operations and increase the number of victims.

Developers are now analyzing user data more closely, aiming to spot misuse faster. Experts say such measures will be critical as chatbot adoption continues to grow.