Chinese universities connected to known APTs are conducting AI/ML cybersecurity research
At least six major Chinese universities with previous connections to government-backed hacking groups have been conducting research on the intersection of cybersecurity and machine learning.
In a paper titled “Academics, AI, and APTs,” the Center for Security and Emerging Technology (CSET) at Georgetown University warns that the research conducted in these Chinese universities today could soon be integrated into the techniques used by Chinese state-sponsored hacking groups (APTs).
These universities have past connections to Chinese hacking groups, which have often recruited operators from their staff and student bodies, an arrangement the CSET team sees as an informal partnership.
“These partnerships, themselves a case study in military-civil fusion, allow state-sponsored hackers to quickly move research from the lab to the field,” said Dakota Cary, the author of the CSET research.
The six universities studied in their report include Hainan University (海南大学), Southeast University (东南大学), Shanghai Jiao Tong University (上海交通大学), Xidian University (西安电子科技大学), Zhejiang University (浙江大学), and the Harbin Institute of Technology (哈尔滨工业大学).
Hainan University
Current AI/ML research: A paper on using ensemble learning, a machine learning technique that combines multiple models, to create an early warning system for distributed denial-of-service (DDoS) attacks. Other work could not be identified because the university password-protects the websites of its cybersecurity school and related research institutions.
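The ensemble idea behind such an early-warning system can be sketched in a few lines. The detectors, features, and thresholds below are illustrative assumptions, not details from the paper; the point is only the combination step, where several weak signals vote on whether a traffic window looks like a DDoS attack.

```python
from collections import Counter
import math

# Illustrative base detectors; the thresholds and features are
# assumptions for this sketch, not values from the CSET report.

def rate_detector(window, max_pkts=1000):
    # Flag windows whose raw packet count exceeds a threshold.
    return len(window) > max_pkts

def entropy_detector(window, min_entropy=2.0):
    # Low source-address entropy suggests a few hosts flooding the target.
    counts = Counter(pkt["src"] for pkt in window)
    total = len(window)
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy < min_entropy

def syn_detector(window, max_ratio=0.8):
    # A high share of SYN packets is consistent with a SYN flood.
    syns = sum(1 for pkt in window if pkt["flags"] == "SYN")
    return syns / len(window) > max_ratio

def ensemble_warn(window):
    # The "ensemble" step: majority vote over the base detectors.
    votes = (rate_detector(window), entropy_detector(window), syn_detector(window))
    return sum(votes) >= 2
```

A real ensemble-learning system would train both the base learners and the combination rule from labeled traffic; the hard-coded majority vote stands in for that here.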
Southeast University
Connections to Deep Panda: A university professor and a known security firm contracting for the Chinese Ministry of State Security held a hacking competition in 2014 that gave out points for hacking real targets in the US. Malware used in that competition was also used in the Anthem breach, a hack carried out by the Deep Panda APT. The university also received repeated funding from Chinese government information warfare projects. Southeast University also often connects students to the security services via job postings and research positions, which it then lists as achievements on its site.
Current AI/ML research: AI/ML research currently conducted at the university focuses on defensive technologies. The professor who organized the 2014 hacking competition is still active and is researching how to use machine learning for anomaly detection, a technique that looks for unusual patterns in network behavior. He is also the recipient of funding from three secretive funding programs for information security research. Other professors are also conducting their own research on the use of AI systems in cybersecurity, according to their university pages.
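Anomaly detection of the kind described here can be illustrated with the simplest possible statistical baseline: flag any point in a traffic series that sits far from its recent history. The rolling z-score detector below is a generic sketch (window size and threshold are arbitrary choices), not the professor's method; production systems replace it with learned models.

```python
import statistics

def zscore_anomalies(series, window=20, threshold=3.0):
    # Flag points that deviate from the trailing window's mean by more
    # than `threshold` standard deviations -- a crude anomaly detector.
    flagged = []
    for i in range(window, len(series)):
        history = series[i - window:i]
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1e-9  # avoid division by zero
        if abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged
```

Fed a series of, say, requests-per-second counts, this returns the indices of sudden spikes or drops, the "unusual patterns" that ML-based detectors look for with far richer features.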
Shanghai Jiao Tong University
Connections to APT1: News reports linked university employees to hacks carried out by APT1 (aka Chinese military unit 61398). University staff also published research articles together with APT1 members. SJTU’s School of Information Security Engineering was also co-located on a military base, and APT1 hackers used their SJTU email addresses to register hacking infrastructure.
Current AI/ML research: A large amount of cybersecurity research is conducted at SJTU, which hosts many research groups. Most of the AI/ML research covers defensive uses of ML/AI in cybersecurity, such as identifying malicious URLs, inspecting web traffic to identify botnets, attributing certain types of DDoS attacks, and a litany of specialized intrusion detection systems. However, some research targets offensive uses of ML/AI, such as using ML to detect software vulnerabilities, detect Tor traffic, and improve the accuracy of password-guessing attacks.
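Of the offensive directions listed, ML-assisted password guessing is the easiest to show at toy scale. The character-bigram Markov model below is my own minimal sketch, not SJTU's method; published guessers use far stronger models (PCFGs, neural networks), but the principle is the same: learn structure from leaked passwords, then rank candidate guesses by how password-like they are.

```python
import math
from collections import Counter, defaultdict

def train_bigram_model(passwords):
    # Count character-to-character transitions (with start/end markers)
    # over a corpus of leaked passwords.
    counts = defaultdict(Counter)
    vocab = {"^", "$"}
    for pw in passwords:
        chars = ["^"] + list(pw) + ["$"]
        vocab.update(chars)
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return counts, len(vocab)

def score(counts, vocab_size, candidate):
    # Log-probability of a candidate under the bigram model, with Laplace
    # smoothing so unseen transitions get a small nonzero probability.
    # Higher scores mean "more like the training passwords", so an attacker
    # would try candidates in descending score order.
    logp = 0.0
    chars = ["^"] + list(candidate) + ["$"]
    for a, b in zip(chars, chars[1:]):
        total = sum(counts[a].values())
        logp += math.log((counts[a][b] + 1) / (total + vocab_size))
    return logp
```

Trained on a handful of breached passwords, the model prefers human-looking strings over random ones, which is exactly the property that lets ML-ranked guessing outperform brute force.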
Further, AI-related articles authored by SJTU professors also appear in an MSS periodical.
“The publication of the article by the MSS 13th Bureau demonstrates the service’s interest in such capabilities and illustrates the deference it pays to the research and analysis of SJTU faculty,” CSET researchers said.
Xidian University
Connections to APT3: Guangdong ITSEC, a division of the MSS 13th Bureau and the managing organization for APT3, started working with Xidian University in 2017 to offer a jointly administered graduate program under the Network and Information Security School. In this program, Xidian University students are allegedly paired with MSS employees for hands-on training.
Current AI/ML research: Two Xidian University professors claim an affiliation with China’s security services and are conducting research on AI and cybersecurity. In their most recent research, they focused on using AI/ML for data mining, vulnerability discovery and exploitation, and automated patching of software vulnerabilities. The conclusion of this work was that AI/ML is best suited for identifying software vulnerabilities, and one of the professors even submitted 20 vulnerabilities to China’s National Vulnerability Database.
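The report does not spell out the professors' vulnerability-discovery methods, so the sketch below shows only the non-ML baseline such work builds on: a mutation fuzzer that randomly perturbs an input until a target crashes. ML-guided fuzzing replaces the blind mutation with a learned model of which inputs are promising. The parser, seed, and planted bug here are entirely invented for illustration.

```python
import random

def toy_parser(data: bytes):
    # Deliberately buggy "parser": crashes on one malformed header shape.
    if len(data) >= 4 and data[:2] == b"MZ" and data[2] > 0x7F:
        raise ValueError("malformed header")
    return "ok"

def mutate(seed: bytes) -> bytes:
    # Flip one random byte of the seed input.
    data = bytearray(seed)
    i = random.randrange(len(data))
    data[i] = random.randrange(256)
    return bytes(data)

def fuzz(seed: bytes, iterations=10_000):
    # Blindly mutate the seed until the target raises -- the "crash"
    # a fuzzer treats as evidence of a potential vulnerability.
    random.seed(0)  # deterministic for the example
    for _ in range(iterations):
        candidate = mutate(seed)
        try:
            toy_parser(candidate)
        except ValueError:
            return candidate  # crashing input found
    return None
```

Where this loop guesses mutations uniformly at random, ML-based approaches learn which byte positions and values are most likely to reach new code paths or trigger faults.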
Other professors without public connections to China’s security services are also conducting AI/ML and cybersecurity research, but it’s mostly focused on defensive uses, such as detecting malware and beating other ML-based systems.
Zhejiang University
Connections to the MSS: The university was never directly tied to a specific hacking campaign, but many Chinese military hackers have been recruited from it, and it hosts “a highly-respected, internationally renowned school for cybersecurity studies.” According to the CSET research team, an interesting detail is that university students take AI/ML classes but also courses on intelligence.
“Understanding the intelligence cycle and writing intelligence products is likely to be of little relevance to employees outside the national security sector,” the CSET team said.
Current AI/ML research: The university appears to have a focus on researching how to attack and defend other AI/ML systems and algorithms — such as data poisoning attacks or ways to backdoor training models.
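Data poisoning, one of the attack classes named above, can be demonstrated at toy scale: an attacker who can inject mislabeled points into a training set shifts the model's decision for a chosen input. The nearest-centroid classifier and the 2-D geometry below are invented for illustration; real poisoning research targets deep models and training pipelines.

```python
def centroid(points):
    # Mean of a list of 2-D feature vectors.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def predict(train, x):
    # Nearest-centroid classifier: pick the label whose class centroid
    # lies closest to x.
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    centroids = {label: centroid(pts) for label, pts in train.items()}
    return min(centroids, key=lambda label: dist2(centroids[label], x))

clean = {
    "benign":    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    "malicious": [(5.0, 5.0), (6.0, 5.0), (5.0, 6.0)],
}
target = (4.0, 4.0)  # sits near the "malicious" cluster

# The poisoning step: inject copies of the target, mislabeled "benign",
# dragging that class's centroid toward the point the attacker wants
# misclassified.
poisoned = {
    "benign": clean["benign"] + [target] * 6,
    "malicious": clean["malicious"],
}
```

The same idea, planted training data that bends a model's behavior, underlies the backdoor attacks the Zhejiang researchers study, where a hidden trigger pattern rather than a single point is baked into the model during training.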
Harbin Institute of Technology
Connections to APT1: The Mandiant APT1 report [PDF] named Harbin Institute of Technology (HIT) as a recruitment center for Chinese cyber operators in 2013, although the university was not directly involved in any attacks. However, HIT is authorized to work on top-secret government projects, and its cybersecurity school touts working on nine government-funded research projects. Archived web pages also confirmed that former HIT employees went to work for the Chinese army’s electronic intelligence department, members of which were later charged with the Equifax hack in the US.
Current AI/ML research: Research on AI/ML at HIT focuses mostly on AI use in the medical field (49 of 51 papers). The two papers published on ML and cybersecurity rehashed recurrent themes in the field, such as detecting and classifying software bugs and using AI for software behavioral analysis.
CSET researchers argue that this scarcity of published AI/ML cybersecurity work could be explained by HIT’s clearance to work on top-secret projects, which would prevent it from publishing its research.
CSET said there are indicators that “HIT is performing, but not publishing, research on AI and cybersecurity,” such as mentions in government reports or the university’s descriptions for various labs.
All in all, the CSET report, available for download here, concludes that these “close relationships between universities and the state shortens the path to operationalizing new techniques and provides the security services quick access to talented researchers.”
But whether we will see AI/ML techniques abused in real-world attacks remains unclear. Even if Chinese APTs ended up deploying AI/ML techniques to improve their attacks, these would be near-impossible to detect, Recorded Future CTO and co-founder Staffan Truvé told The Record this week.
“The exception being if we could see some activity where, for example, speed or volume of activities would indicate it was done by machines rather than humans,” Truvé said.
“But, on the other hand, it would be near to impossible to distinguish an AI-based algorithm from a ‘normal’ one!”
Truvé also argues that the moment when AI/ML algorithms will be deployed in real-world attacks is slowly approaching.
“I’d say that in general, I’m 100% certain the bad guys will start using AI once they run out of steam with conventional methods. Right now, they are probably still doing well enough using traditional methods, but as defenders improve their tools and processes, that will, at some point, push the bad guys into using ‘next generation technology’.”
The CSET team argues that by analyzing the current areas of research, foreign analysts and decision-makers can infer how China’s government hacking crews might rely on AI in the future, an understanding that could help today’s defenders prepare for future operational developments and their potential security impacts.