Microsoft (MSFT) and OpenAI released a report on Wednesday saying that hacking groups from China, Iran, North Korea, and Russia are increasingly exploring the use of AI large language models (LLMs) to improve their chances of successfully launching cyberattacks.
According to the report, the state-affiliated groups are using AI to understand everything from satellite technology to how to develop malicious code that can evade detection by cybersecurity software.
"Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent," the companies said in the report.
Microsoft and OpenAI listed five groups as using large language models in conjunction with their hacking efforts: Russia's Forest Blizzard, also known as Strontium; North Korea's Emerald Sleet, also known as Thallium; Iran's Crimson Sandstorm, also known as Curium; and China's Charcoal Typhoon, known as Chromium, and Salmon Typhoon, known as Sodium.
In the case of the Russian hackers, Microsoft and OpenAI say the group has used LLMs to research satellite capabilities and radar technologies, as well as to get assistance with scripting tasks and file manipulation.
North Korea's Emerald Sleet has used the technology to better understand public software vulnerabilities, to assist with scripting tasks, to improve social engineering for phishing and spear-phishing email campaigns, and to learn more about organizations, such as think tanks, that deal with North Korea's nuclear weapons program. Iran's Crimson Sandstorm also used the technology for spear-phishing campaigns, for developing code, and in attempts to evade antivirus programs.
As for China's Charcoal Typhoon and Salmon Typhoon, Microsoft says the groups have used LLMs for an array of reasons ranging from translations and streamlining cyber tasks to detecting coding errors and developing potentially malicious code.
The companies said they have disabled the accounts and assets of each of the groups, and added that they have not identified any "significant attacks" employing the LLMs they monitor.
It only makes sense that hackers would turn to AI and LLMs when mounting cyberattacks. Attackers are always looking for ways to improve their chances of penetrating victims' networks, and AI is simply one more tool for doing so.
Microsoft itself has faced its share of attacks, including one it reported in January. In that intrusion, Russia's Midnight Blizzard, also known as Nobelium, gained access to accounts associated with the software company's senior leadership team and its cybersecurity and legal departments, and stole emails and documents.