Oops, ChatGPT can be hacked to offer illicit AI-generated content including malware

This dark version of ChatGPT can craft phishing emails or write code for malware.

Remember a while ago when OpenAI CEO Sam Altman said that the misuse of artificial intelligence could be “lights out for all?” That statement looks less unreasonable now that hackers are selling tools to get past ChatGPT’s restrictions and make it generate malicious content.

Check Point reports (via Ars Technica) that cybercriminals have found a fairly easy way to bypass ChatGPT’s content moderation barriers and make a quick buck doing so. For less than $6, you can have ChatGPT generate malicious code or a ton of persuasive copy for phishing emails.

These hackers did it by using OpenAI’s API to create special bots in the popular messaging app Telegram that can access a restriction-free version of ChatGPT through the app. Cybercriminals are charging customers as little as $5.50 for every 100 queries and showing potential customers examples of the harmful things that can be done with it.

Other hackers found a way to get past ChatGPT’s protections by creating a special script (again using OpenAI’s API) that was made public on GitHub. This dark version of ChatGPT can generate a template for a phishing email impersonating a business or your bank, complete with instructions on where best to place the phishing link in the email.

Even scarier is that you can use the chatbot to create malware code or improve existing malware simply by asking it to. Check Point had written before about how easy it is for those without any coding experience to generate some fairly nasty malware, especially on early versions of ChatGPT, whose restrictions against creating malicious content have only tightened since then.

(Image credit: Checkpoint)

OpenAI’s ChatGPT technology will be featured in the upcoming Microsoft Bing search engine update, which will include an AI chat that provides more robust and easy-to-digest answers to open-ended questions. This, too, comes with its own set of issues regarding the use of copyrighted materials.

We’ve written before about how easy it’s been to abuse AI tools, like using voice cloning to make celebrity soundalikes say awful things. So, it was only a matter of time before some bad actors found a way to make it easier to do bad things. You know, aside from making AI do their homework.