Unrestricted AI Risk Alert: WormGPT and Others May Become New Threats to the Crypto Industry

Pandora's Box: Exploring the Potential Threat of Unrestricted Large Models to the Crypto Industry

With the rapid development of artificial intelligence, from the GPT series to Gemini and a range of open-source models, advanced AI is profoundly changing how we work and live. Yet alongside this progress, a concerning trend is taking shape: the rise of unrestricted or malicious large language models and the risks they bring.

Unrestricted LLMs are language models that have been deliberately designed, modified, or "jailbroken" to circumvent the built-in safety mechanisms and ethical constraints of mainstream models. Mainstream LLM developers typically invest significant resources in preventing their models from being used to generate hate speech, misinformation, or malicious code, or to provide instructions for illegal activities. In recent years, however, some individuals and organizations, driven by various motives, have begun to seek out or develop unrestricted models of their own. This article outlines typical unrestricted LLM tools, analyzes how they can be misused in the crypto industry, and discusses the related security challenges and countermeasures.

Pandora's Box: How Do Unrestricted Large Models Threaten the Security of the Crypto Industry?

The Potential Threats of Unrestricted LLMs

Tasks that once required specialized skills, such as writing malicious code, crafting phishing emails, and planning scams, can now be handled by ordinary people with no programming experience, thanks to the assistance of unrestricted LLMs. Attackers only need to obtain the weights and source code of an open-source model and then fine-tune it on datasets containing malicious content, biased statements, or illegal instructions to create a customized attack tool.

This pattern brings multiple risks: attackers can "mod" a model against specific targets to generate more deceptive content that bypasses the content review and safety restrictions of conventional LLMs; the model can also be used to rapidly produce code variants for phishing websites or to tailor scam copy to different social platforms; meanwhile, the accessibility and modifiability of open-source models continue to fuel an underground AI ecosystem, providing a breeding ground for illegal trade and development. Below are several typical unrestricted LLMs and their potential threats:

WormGPT: The Black-Hat Version of GPT

WormGPT is a malicious LLM openly sold on underground forums, whose developers explicitly advertise it as having no ethical constraints. It is based on open-source models such as GPT-J 6B and trained on a large corpus of malware-related data. Access costs as little as $189 for a month. WormGPT's most notorious use is generating highly realistic and persuasive business email compromise (BEC) and phishing emails. Its typical abuses in crypto scenarios include:

  • Generating phishing emails/messages: imitating cryptocurrency exchanges, wallets, or well-known projects to send users "account verification" requests that entice them to click malicious links or disclose private keys and mnemonic phrases (a simple defensive check against such lookalike links is sketched after this list).
  • Writing malicious code: helping less technically skilled attackers write malware that steals wallet files, monitors the clipboard, logs keystrokes, and more.
  • Driving automated scams: automatically responding to potential victims and steering them toward fake airdrops or investment schemes.
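
On the defensive side, many of these phishing links rely on lookalike domains. Below is a minimal Python sketch, under illustrative assumptions, of the kind of check a wallet or mail client could run on links in suspicious messages; the OFFICIAL_DOMAINS allowlist, the similarity threshold, and the function name are hypothetical examples, not a vetted implementation.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist for illustration; a real deployment would maintain a vetted list.
OFFICIAL_DOMAINS = ["binance.com", "coinbase.com", "kraken.com"]

def looks_like_phish(url: str, threshold: float = 0.8) -> bool:
    """Flag hosts that closely resemble, but are not, a known exchange domain."""
    host = (urlparse(url).hostname or "").lower()
    for official in OFFICIAL_DOMAINS:
        if host == official or host.endswith("." + official):
            return False  # exact match or a legitimate subdomain
    # High similarity to an official domain without an exact match is a classic
    # typosquatting / lookalike signal.
    best = max(OFFICIAL_DOMAINS, key=lambda d: SequenceMatcher(None, host, d).ratio())
    return SequenceMatcher(None, host, best).ratio() >= threshold

print(looks_like_phish("https://blnance.com/verify"))        # True: lookalike of binance.com
print(looks_like_phish("https://www.binance.com/en/login"))  # False: legitimate subdomain
```

Real phishing defenses layer many more signals on top of this (punycode and homoglyph normalization, domain age, reputation feeds), but the exact-match-plus-similarity heuristic above captures the core idea.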

DarkBERT: A Double-Edged Sword of Dark Web Content

DarkBERT is a language model developed by researchers at the Korea Advanced Institute of Science and Technology (KAIST) in collaboration with S2W Inc., pre-trained specifically on dark web data (forums, black markets, leaked information, and the like). It was created to help cybersecurity researchers and law enforcement better understand the dark web ecosystem, track illegal activity, identify potential threats, and gather threat intelligence.

Although DarkBERT was designed with good intentions, the sensitive knowledge it has absorbed about the dark web, including attack methods and illegal trading strategies, could have serious consequences if malicious actors obtained it or used similar techniques to train unrestricted large models. Potential abuse scenarios in the crypto context include:

  • Carrying out targeted scams: collecting information on crypto users and project teams for social engineering fraud.
  • Copying criminal playbooks: replicating proven coin-theft and money-laundering strategies documented on the dark web.

FraudGPT: The Swiss Army Knife of Online Fraud

FraudGPT claims to be an upgraded version of WormGPT with more comprehensive features. It is sold mainly on the dark web and hacker forums, with monthly fees ranging from $200 to $1,700. Its typical abuses in crypto scenarios include:

  • Fake crypto projects: generating realistic white papers, websites, roadmaps, and marketing copy to run fraudulent ICOs/IDOs.
  • Batch generation of phishing pages: quickly creating login pages that imitate well-known cryptocurrency exchanges or wallet-connection interfaces.
  • Social media bot activity: creating fake comments and promotional posts at scale to pump scam tokens or discredit competing projects.
  • Social engineering attacks: mimicking human conversation to build trust with unsuspecting users and lure them into disclosing sensitive information or performing harmful actions.

GhostGPT: An AI Assistant Unbound by Moral Constraints

GhostGPT is an AI chatbot explicitly marketed as having no moral constraints. Its typical abuses in the crypto space include:

  • Advanced phishing attacks: generating highly realistic phishing emails that impersonate mainstream exchanges to issue fake KYC verification requests, security alerts, or account-freeze notices.
  • Malicious smart contract generation: even without any programming background, attackers can use GhostGPT to quickly generate smart contracts containing hidden backdoors or fraudulent logic for rug-pull scams or attacks on DeFi protocols (a simple heuristic scan for such red flags is sketched after this list).
  • Polymorphic cryptocurrency stealers: generating malware that continuously morphs in order to steal wallet files, private keys, and mnemonic phrases; its polymorphic nature makes it difficult for traditional signature-based security software to detect.
  • Social engineering attacks: combining AI-generated scripts with bots deployed on social platforms to lure users into fake NFT minting, airdrops, or investment projects.
  • Deepfake fraud: used together with other AI tools, GhostGPT can help generate fake voices of project founders, investors, or exchange executives to carry out phone scams or business email compromise (BEC) attacks.
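
As noted in the list above, many generated rug-pull contracts reuse well-known backdoor patterns, so even simple pattern matching can surface red flags before funds are committed. The Python sketch below scans Solidity source text for a few such patterns; the RED_FLAGS regexes are illustrative assumptions and no substitute for a professional audit or dedicated static-analysis tooling.

```python
import re

# Illustrative red-flag patterns drawn from commonly reported rug-pull tricks;
# these are assumptions for demonstration, not an exhaustive or authoritative set.
RED_FLAGS = {
    "owner can mint unlimited tokens": r"function\s+mint\s*\([^)]*\)[^{]*onlyOwner",
    "owner can pause or disable trading": r"function\s+(pause|disableTrading)\s*\([^)]*\)[^{]*onlyOwner",
    "owner-controlled blacklist": r"mapping\s*\(\s*address\s*=>\s*bool\s*\)\s*(public|private|internal)?\s*_?blacklist",
    "owner can change fees at will": r"function\s+set(Fee|Tax)\w*\s*\(",
}

def scan_contract(source: str) -> list[str]:
    """Return descriptions of every red-flag pattern found in the Solidity source."""
    return [desc for desc, pattern in RED_FLAGS.items()
            if re.search(pattern, source, flags=re.IGNORECASE)]

sample = """
contract Token {
    mapping(address => bool) private _blacklist;
    function mint(address to, uint256 amount) external onlyOwner { _mint(to, amount); }
}
"""
print(scan_contract(sample))
# ['owner can mint unlimited tokens', 'owner-controlled blacklist']
```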

Venice.ai: Potential Risks of Uncensored Access

Venice.ai provides access to a variety of LLMs, including some that are only lightly moderated or loosely restricted. It positions itself as an open portal for exploring the capabilities of different LLMs, offering state-of-the-art, accurate, and uncensored models for a truly unrestricted AI experience, but that same openness can be exploited by malicious actors to generate harmful content. The platform's risks include:

  • Bypassing censorship to generate malicious content: attackers can use the platform's less-restricted models to generate phishing templates, false propaganda, or attack strategies.
  • Lowering the barrier to prompt engineering: even attackers without advanced "jailbreaking" skills can easily obtain output that would otherwise be restricted.
  • Accelerating attack-script iteration: attackers can quickly test how different models respond to malicious instructions, optimizing their fraud scripts and attack methods.

Conclusion

The emergence of unrestricted LLMs marks a new paradigm in which cybersecurity faces attacks that are more complex, more scalable, and more automated. These models not only lower the barrier to attack but also introduce new threats that are more covert and more deceptive.

In this escalating game of offense and defense, all parties in the security ecosystem must work together to address the risks ahead: on one hand, investment in detection technology must increase, so that solutions can identify and intercept phishing content, smart contract exploits, and malicious code generated by malicious LLMs; on the other hand, models' resistance to jailbreaking should be strengthened, and watermarking and tracing mechanisms explored so that the source of malicious content can be tracked in critical scenarios such as finance and code generation; finally, sound ethical frameworks and regulatory mechanisms are needed to limit the development and abuse of malicious models at the root.
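
To make the watermarking idea concrete, the sketch below shows the statistical test used by "green list" LLM watermark detectors: a watermarked generator secretly biases its sampling toward a pseudo-random subset of tokens, and the detector flags text whose green-token fraction is improbably high. The whitespace tokenizer and hash-based partition here are simplifying assumptions for illustration only; real schemes use the model's own tokenizer and a keyed partition shared between generator and detector.

```python
import math

GAMMA = 0.5  # expected green-token fraction in unwatermarked text

def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign each token to the green list, seeded by the preceding token."""
    return hash((prev_token, token)) % 2 == 0

def watermark_z_score(text: str) -> float:
    """z-score of the observed green-token count against the Binomial(n, GAMMA) null."""
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(prev, cur) for prev, cur in pairs)
    n = len(pairs)
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))

# Unwatermarked text should score near zero; output from a generator that boosted
# green tokens would score far higher (roughly z > 4 is taken as strong evidence).
print(watermark_z_score("please verify your wallet to keep your account active"))
```

A test this simple obviously cannot stop a determined attacker on its own, but it illustrates why watermark-aware generation paired with statistical detection is one of the more promising tracing mechanisms for AI-generated scam content.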
