Hackers Now Reject AI-Generated Phishing Bait on Aesthetic Grounds
Criminal groups are rejecting phishing emails generated by AI systems because the writing is bad. The emails sound AI-generated. The targets notice. The attack fails before the payload deploys. Threat actors have reverted to hiring humans or writing their own copy.
This is the first recorded instance of a quality standard enforcing itself through criminal rejection. The platforms that built the AI systems are shipping these tools without adequate output filtering. The people using the tools to commit crimes have better quality control than the vendors. The gap suggests that criminal markets are now more competitive on execution than legitimate markets are on basic function.
The phishers will move on. The AI will keep generating bad emails. The platforms will call this a success.