WormGPT: Beware! This AI tool makes it easier for cybercriminals to hack, Details

WormGPT: The success of ChatGPT has fuelled a wave of generative Artificial Intelligence (AI) tools that can produce fresh text, images, and other material. Concerns have been raised that these tools can generate false statements that are hard to spot because the systems handle the syntax of human language so fluently, and that they can be misused to churn out bogus content and fuel cybercrime. Those worries have now materialised in WormGPT, a ChatGPT substitute that cybercriminals are using to carry out sophisticated phishing attacks.

WormGPT

The creator of WormGPT is allegedly selling access to the programme on a well-known hacker forum, according to email security service SlashNext, which tested the chatbot. In a blog post, the company said that “we see that malicious actors are now creating their own custom modules similar to ChatGPT, but easier to use for nefarious purposes.”

The hacker appears to have first advertised the chatbot in March before making it public this month. Unlike ChatGPT or Google’s Bard, WormGPT has no safeguards to stop it from responding to harmful queries.

How can cybercriminals use it?

Users of WormGPT are able to engage in a variety of illicit activities. The tool can produce persuasive, sophisticated emails for phishing or business email compromise (BEC) attacks, and it can write malware in the Python programming language. This means fraudsters can generate convincing fake emails to lure targets into phishing attacks. WormGPT is essentially ChatGPT with no ethical restrictions.

