Four cyber attackers in China have been arrested for developing ransomware with ChatGPT's assistance, after demanding a ransom of 20,000 Tether, a cryptocurrency stablecoin. ChatGPT sits in a legal grey area in China, where authorities seek to restrict foreign generative AI products. Legal cases involving generative AI have increased, raising compliance concerns for domestic companies accessing OpenAI's services.
AT A GLANCE
- Cyber Attackers Arrested in China: Four individuals in China have been apprehended for creating ransomware using ChatGPT.
- Unidentified Company Targeted: The attack was reported by a company in Hangzhou, Zhejiang province, whose systems were blocked by ransomware. The hackers demanded 20,000 Tether, a cryptocurrency, for restoring access.
- Arrests and Admissions: Two suspects were arrested in Beijing, and two in Inner Mongolia in late November. They admitted to developing ransomware, optimizing it with ChatGPT, conducting vulnerability scans, infiltrating systems, implanting ransomware, and engaging in extortion.
- ChatGPT’s Legal Status in China: The report did not specify if the use of ChatGPT was part of the charges. The chatbot exists in a legal grey area in China, as the government aims to restrict access to foreign generative artificial intelligence products.
- OpenAI’s Chatbot Restrictions: OpenAI launched ChatGPT at the end of 2022, sparking widespread interest in China. However, OpenAI blocks internet protocol addresses in China and Hong Kong, as well as in sanctioned markets such as North Korea and Iran.
- VPN Usage to Bypass Restrictions: Despite restrictions, Chinese users employ virtual private networks (VPNs) and phone numbers from supported regions to access ChatGPT and similar products.
- Compliance Risks for Domestic Companies: Chinese companies face “compliance risks” when building or using VPNs to access OpenAI’s services, as noted in a report by law firm King & Wood Mallesons. Legal cases involving generative AI have risen due to the technology’s popularity.
In a groundbreaking development, Chinese authorities have apprehended four cyber attackers responsible for developing ransomware, marking the country’s first known case involving the illicit use of ChatGPT, a popular chatbot not officially available locally. The perpetrators, arrested in late November, used ChatGPT to help write and optimize their ransomware programs.
Incident Report, Arrests and Confessions
The incident came to light when an undisclosed company in Hangzhou, the capital of Zhejiang province, reported a ransomware attack. According to a Thursday report by the state-run Xinhua News Agency, the hackers had targeted the company’s systems, demanding a ransom of 20,000 Tether, a cryptocurrency stablecoin pegged to the US dollar, for the restoration of access.
Law enforcement acted swiftly, apprehending two suspects in Beijing and two others in Inner Mongolia. The suspects reportedly confessed to various cybercrime activities, including writing ransomware code, optimizing programs with ChatGPT’s assistance, conducting vulnerability scans, infiltrating systems, implanting ransomware, and extorting victims. The report did not specify whether the use of ChatGPT was a charge in itself.
ChatGPT in Legal Grey Area
ChatGPT operates in a legal grey area in China due to the government’s efforts to restrict access to foreign generative artificial intelligence products. OpenAI, the creator of ChatGPT, restricts access by blocking internet protocol addresses in China, Hong Kong, and sanctioned markets such as North Korea and Iran. Some users circumvent these restrictions using virtual private networks (VPNs) and phone numbers from supported regions.
The use of VPNs to access OpenAI services, including ChatGPT, poses compliance risks for domestic companies in China. A report by law firm King & Wood Mallesons highlights the potential legal challenges faced by companies involved in building or renting VPNs to access OpenAI’s products.
Rise in Legal Cases Involving Generative AI
Legal cases related to generative AI have surged in China, reflecting the growing popularity and misuse of such technologies. In February, Beijing police issued a warning about the potential for ChatGPT to “commit crimes and spread rumors.” Subsequently, in May, a man in northwestern Gansu province was detained for using ChatGPT to generate fake news about a train crash. In August, Hong Kong police arrested six individuals involved in a fraud syndicate that utilized deepfake technology for loan scams.
Intellectual Property Concerns and Lawsuits
In a separate but related development, concerns about mass intellectual property infringement have surfaced. The New York Times recently filed a lawsuit against OpenAI and Microsoft, the primary backer of the AI firm, alleging that their powerful models, including ChatGPT, used millions of articles for training without permission. This case is expected to have significant legal implications and will be closely monitored.
The arrests of cyber attackers in China who used ChatGPT for ransomware development underscore the challenges posed by the misuse of advanced AI technologies. The continuing rise in legal cases involving generative AI raises important questions about the ethical and legal boundaries of AI usage.
The New York Times’ lawsuit against OpenAI and Microsoft, filed earlier last week, further emphasizes the need for clear regulations and guidelines to address intellectual property concerns in the rapidly evolving field of generative artificial intelligence.
Source(s): SCMP
The information above is curated from reliable sources, modified for clarity. Slash Insider is not responsible for its completeness or accuracy. Please refer to the original source for the full article. Views expressed are solely those of the original authors and not necessarily of Slash Insider. We strive to deliver reliable articles but encourage readers to verify details independently.