Microsoft has sued a cybercriminal group for creating tools to exploit its generative AI services.
Published: Jan 14, 2025
Key Takeaways:
Microsoft has filed a lawsuit against a group of cybercriminals accused of creating malicious tools to bypass the security safeguards of its generative AI services. These tools were allegedly designed to produce harmful content, with the perpetrators profiting by selling access to other malicious actors.
In its court filings, Microsoft stated that a foreign-based threat actor group allegedly compromised the accounts of legitimate Microsoft customers. The cybercriminals then sold access to these accounts through a web domain, along with instructions on how to use their custom tools to generate harmful content. Microsoft has since shut down the service, which ran from July to September 2024.
“First, Defendants created a client-side software tool referred to by Defendants as ‘de3u,’ which Defendants make publicly available via the ‘rentry.org/de3u’ domain. Second, Defendants created software for running a reverse proxy service, referred to as the ‘oai reverse proxy,’ designed specifically for processing and routing communications from the de3u software to Microsoft’s systems,” Microsoft explained.
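The architecture described in the filing follows a standard reverse-proxy pattern: a client tool sends requests to an intermediary, which relays them to the real service using credentials the intermediary holds. As a rough sketch of that general pattern only (the upstream URL, header names, and key value below are placeholders, not details from the filing, and this is not the defendants' software):

```python
# Generic sketch of a credential-injecting reverse proxy: a local HTTP
# server that forwards each incoming request to an upstream API while
# attaching its own stored credential. Endpoint, headers, and key are
# illustrative placeholders.
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

UPSTREAM = "https://api.example.com"  # placeholder upstream endpoint
STORED_KEY = "sk-placeholder"         # credential the proxy injects

class ProxyHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the client's request body and re-send it upstream
        # with the proxy's own Authorization header.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {STORED_KEY}",
            },
        )
        try:
            with urllib.request.urlopen(req) as resp:
                self.send_response(resp.status)
                self.end_headers()
                self.wfile.write(resp.read())
        except Exception:
            self.send_response(502)  # upstream unreachable or rejected
            self.end_headers()

# To run the proxy locally (blocks forever):
# HTTPServer(("127.0.0.1", 8080), ProxyHandler).serve_forever()
```

The point of the pattern is that the end user never sees the credential: the proxy substitutes it on every request, which is why stolen API keys held by such a service can be resold as access.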
Microsoft did not specify how the cybercriminals compromised legitimate customer accounts. However, the company noted that hackers have previously built tools that scan public code repositories for API keys accidentally committed with application code. Microsoft also warned that credentials can be stolen by attackers who gain unauthorized access to corporate networks.
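The repository-scanning technique alluded to above can be sketched in a few lines: search source text for strings shaped like API keys. The pattern below is purely illustrative; real secret scanners combine many provider-specific patterns with entropy checks and commit-history traversal.

```python
# Illustrative sketch of scanning source text for leaked API keys.
# The regex is a made-up example pattern ("sk-" followed by 20+
# alphanumeric characters), not any provider's actual key format.
import re

KEY_PATTERN = re.compile(r"\bsk-[A-Za-z0-9]{20,}\b")

def find_suspect_keys(source_text: str) -> list[str]:
    """Return substrings that match the illustrative key pattern."""
    return KEY_PATTERN.findall(source_text)

sample = 'client = Client(api_key="sk-abc123abc123abc123abc123")'
print(find_suspect_keys(sample))  # → ['sk-abc123abc123abc123abc123']
```

A real scanner would walk every file (and every historical commit) in a repository and apply dozens of such patterns, which is why a key committed even briefly should be treated as compromised and rotated.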
The lawsuit accuses the group of cybercriminals of violating federal laws, including the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, and the Racketeer Influenced and Corrupt Organizations Act (RICO). The complaint also seeks relief and damages for the harm caused by illegal computer networks and pirated software affecting Microsoft, its customers, and the general public.
Microsoft has since strengthened the safeguards for its generative AI services and added safety mitigations to prevent this type of activity in the future. The company has also published a report titled “Protecting the Public From Abusive AI-Generated Content,” which provides guidance to help organizations and governments protect the public from harmful AI content.