
Artificial intelligence technologies, notably OpenAI’s ChatGPT, are increasingly being exploited to create convincing fake identification documents, posing significant challenges for security and identity verification systems worldwide. The emergence of platforms like ‘OnlyFake’ exemplifies this trend, offering AI-generated fake IDs, including passports and driving licenses, for as little as $15. Investigations have demonstrated that these counterfeit documents can bypass verification processes on major platforms, such as cryptocurrency exchanges, thereby facilitating illicit activities.
Cybersecurity experts warn that AI models like ChatGPT can be manipulated into generating malicious code and realistic fake identities. By engaging the AI in role-playing scenarios, researchers have induced it to produce malware capable of breaching Google’s Password Manager. This underscores AI’s potential to lower the barrier to entry for cybercriminals, enabling attackers with minimal technical expertise to mount sophisticated attacks.
The misuse of AI extends beyond document forgery. Advanced language models can craft highly convincing phishing emails and deepfake content, complicating efforts to distinguish between legitimate and fraudulent communications. This exploitation of AI’s capabilities raises concerns about the integrity of digital interactions and the potential for widespread identity theft.
In response to these threats, governments and organizations are moving to limit the risks posed by AI tools. India’s finance ministry, for instance, has advised employees to avoid using AI applications such as ChatGPT and DeepSeek for official work, citing concerns over data confidentiality. Similarly, educational institutions in Australia have been cautioned against using AI tools for tasks such as drafting student reports, owing to privacy considerations.