Friday, April 26, 2024

Cybercriminals Turn to Telegram Bots to Bypass ChatGPT Restrictions: Check Point Research


Check Point Research (CPR) sees cybercriminals in underground forums using Telegram bots to bypass ChatGPT restrictions. The bots utilize OpenAI's API to enable the creation of malicious emails or code. Bot makers currently grant up to 20 free queries, then charge $5.50 for every 100 queries. CPR warns of continued efforts by cybercriminals to circumvent ChatGPT's restrictions in order to use OpenAI's models at scale for malicious purposes.

  • CPR shares examples of advertisements for Telegram bots
  • CPR shares an example of a phishing email created via a Telegram bot without any limitations
  • CPR shares an example of malware code created via a Telegram bot without any limitations

Check Point Research (CPR) sees cybercriminals using Telegram bots and scripts to bypass ChatGPT restrictions.

Telegram ChatGPT Bot-as-a-Service: CPR found advertisements for Telegram bots in underground forums. The bots utilize OpenAI's API to enable a threat actor to create malicious emails or code. The creators of the bots grant up to 20 free queries, then charge $5.50 for every 100 queries.

Scripts to Bypass ChatGPT Restrictions: CPR also sees cybercriminals creating basic scripts that use OpenAI's API to bypass its anti-abuse restrictions.
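The mechanism CPR describes rests on the fact that external applications talk to OpenAI's API endpoints directly, outside the ChatGPT web interface and its stricter content filters. A minimal sketch of such a direct API call is below; the endpoint and model name reflect the completions API of that era and are illustrative assumptions, and the example prompt is deliberately benign:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build a direct request to OpenAI's completions endpoint.

    A standalone script or Telegram bot calls this endpoint itself,
    bypassing the ChatGPT web UI entirely -- the pattern the article
    describes. The model name is an illustrative assumption.
    """
    payload = {
        "model": "text-davinci-003",
        "prompt": prompt,
        "max_tokens": 256,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Actually sending the request requires a valid API key:
# resp = urllib.request.urlopen(
#     build_completion_request("Write a haiku about spring", "sk-...")
# )
# print(json.loads(resp.read())["choices"][0]["text"])
```

Wrapping a call like this in a Telegram bot is what turns it into the "Bot-as-a-Service" offering advertised in the forums: the bot relays a user's prompt to the API and returns the raw completion.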


Sergey Shykevich, Threat Group Manager at Check Point Software, says, "As part of its content policy, OpenAI created barriers and restrictions to stop malicious content creation on its platform. However, we're seeing cybercriminals work their way around ChatGPT's restrictions, and there's active chatter in the underground forums disclosing how to use the OpenAI API to bypass ChatGPT's barriers and limitations. This is mostly done by creating Telegram bots that use the API, and these bots are advertised in hacking forums to increase their exposure. The current version of OpenAI's API is used by external applications and has very few anti-abuse measures in place. As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on its user interface. Right now, we're seeing continuous efforts by cybercriminals to find ways around ChatGPT restrictions."


If you have an interesting article, report, or case study to share, please get in touch with us at editors@roymediative.com / roy@roymediative.com, 9811346846 / 9625243429.
