- AI agents are skyrocketing in popularity – and websites are accommodating them
- This means they're forced to also accommodate 'bad bots'
- Websites should tighten security to protect themselves and customers
AI comes in many forms, and the one dominating the tech world right now is the AI agent. Agents are evolving fast, often outpacing the security measures put in place to control them – but that's only one side of the story, because security teams aren't just dealing with rogue and legitimate agents posing security risks, but with fake agents too.
New research from Radware reveals that these malicious bots disguise themselves as real AI chatbots in agent mode, like ChatGPT, Claude, and Gemini – all 'good bots' that, crucially, require POST request permissions for transactional features such as booking hotels, purchasing tickets, and completing transactions – all central to their advertised use.
Legitimate agents can interact with page elements like account dashboards, login portals, and checkout processes – which means websites now have to allow POST requests from AI bots in order to accommodate them.
Only read, never write
The issue here is that a fundamental assumption in cybersecurity used to be that 'good bots only read, never write'. Abandoning that assumption weakens security for site owners, because malicious actors can spoof legitimate agents much more easily when both need the same write permissions on a website.
Legitimate AI agent traffic is surging, making it all the more likely that these fraudulent bots will slip through undetected. Most exposed are, of course, the high-risk industries: finance, ecommerce, healthcare, and the ticketing and travel companies AI agents are specifically designed to use.
Chatbots all use different identification and verification methods, making it even more difficult for security teams to detect malicious traffic – and easier for threat actors, who will simply impersonate the agent with the weakest verification standard.
Researchers recommend adopting a zero-trust policy for state-changing requests, for example by implementing AI-resistant challenges such as advanced CAPTCHAs. They also recommend treating all user agents as untrustworthy by default, and adopting robust DNS and IP-based checks to ensure incoming IP addresses match a bot's claimed identity.
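For illustration, here's a minimal Python sketch of one such DNS and IP-based check: forward-confirmed reverse DNS, where the connecting IP must resolve to a hostname under a domain associated with the claimed agent, and that hostname must resolve back to the same IP. The domain suffixes and function name are illustrative assumptions, not values published by Radware or the AI vendors.

```python
import socket

# Illustrative allow-list of reverse-DNS suffixes for claimed agent traffic.
# These suffixes are assumptions for the sketch, not official published values.
KNOWN_AGENT_DOMAINS = (".openai.com", ".anthropic.com", ".google.com")

def verify_agent_ip(client_ip: str) -> bool:
    """Forward-confirmed reverse DNS: the connecting IP must resolve to a
    hostname under a known agent domain, and that hostname must resolve
    back to the same IP before the bot's claimed identity is trusted."""
    try:
        hostname, _, _ = socket.gethostbyaddr(client_ip)  # reverse lookup
    except OSError:
        return False
    if not hostname.endswith(KNOWN_AGENT_DOMAINS):
        return False
    try:
        _, _, forward_ips = socket.gethostbyname_ex(hostname)  # forward lookup
    except OSError:
        return False
    return client_ip in forward_ips  # identity only accepted if it round-trips
```

A check along these lines defeats simple User-Agent spoofing, since a fake agent's request arrives from an IP address that doesn't round-trip to the claimed vendor's infrastructure.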