A new tool aims to help AI chatbots generate more human-sounding text, with the help of Wikipedia's guide to detecting AI, as reported by Ars Technica. Developer Siqi Chen says he created the tool, called Humanizer, by feeding Anthropic's Claude the list of tells that Wikipedia's volunteer editors put together as part of an initiative to combat "poorly written AI-generated content."
Wikipedia's guide contains a list of signs that text may be AI-generated, including vague attributions, promotional language like describing something as "breathtaking," and collaborative phrases, such as "I hope this helps!" Humanizer, a custom skill for Claude Code, is meant to help the AI assistant avoid detection by removing these "signs of AI-generated writing from text, making it sound more natural and human," according to its GitHub page.
The GitHub page provides some examples of how Humanizer might help Claude address some of these tells, including by changing a sentence that described a location as "nestled within the breathtaking region" to "a town in the Gonder region," as well as replacing a vague attribution, like "Experts believe it plays a vital role," with "according to a 2019 survey by…" Chen says the tool will "automatically push updates" whenever Wikipedia's AI-detection guide is updated.
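To make the idea concrete, here is a minimal, purely illustrative sketch of a rule-based pass over text of the kind described above. This is an assumption, not Humanizer's actual implementation (the real skill works by instructing Claude, not by running substitution code); the patterns and replacements below are made up for demonstration.

```python
import re

# Hypothetical patterns modeled on the tells Wikipedia's guide describes:
# promotional framing, collaborative filler, and vague attributions.
AI_TELLS = [
    # Promotional framing such as "nestled within the breathtaking ..."
    (re.compile(r"\bnestled within the breathtaking\b", re.IGNORECASE), "in the"),
    # Collaborative filler phrases such as "I hope this helps!"
    (re.compile(r"\s*I hope this helps!?", re.IGNORECASE), ""),
    # Vague attributions like "Experts believe" get flagged for a real source
    (re.compile(r"\bExperts believe\b", re.IGNORECASE), "[citation needed]"),
]

def humanize(text: str) -> str:
    """Apply each pattern's replacement in order and return the result."""
    for pattern, replacement in AI_TELLS:
        text = pattern.sub(replacement, text)
    return text

print(humanize("The town is nestled within the breathtaking Gonder region."))
# → The town is in the Gonder region.
```

In practice a prompt-level skill like Humanizer can rewrite far more flexibly than fixed regexes, since the model can rephrase rather than just substitute, but the input/output behavior sketched here mirrors the before/after examples on the GitHub page.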
It's only a matter of time before the AI companies themselves start steering their chatbots away from some of these tells, too, as OpenAI has already addressed ChatGPT's overuse of em dashes, which has become a hallmark of AI content.