Microsoft and OpenAI warn about nation-state hackers weaponizing AI for cyberattacks


Nation-state actors associated with Russia, North Korea, Iran, and China are experimenting with artificial intelligence (AI) and large language models (LLMs) to supplement their ongoing cyberattack operations.

The findings come from a report that Microsoft published in collaboration with OpenAI, in which the two companies said they disrupted the efforts of five state-affiliated actors that used their AI services to conduct malicious cyber activities, terminating the actors' assets and accounts.

“Language support is a natural feature of LLMs and is attractive to threat actors with a continued focus on social engineering and other techniques that rely on false, deceptive communications tailored to their targets’ jobs, professional networks and other relationships,” Microsoft said in a report shared with The Hacker News.

While no significant or novel attacks using LLMs have been detected to date, the adversaries’ exploration of AI technologies has spanned several stages of the attack chain, including reconnaissance, coding assistance, and malware development.

“These actors generally attempted to use OpenAI services to query open-source information, translate it, find coding errors, and perform basic coding tasks,” the AI company said.

For example, the Russian nation-state group tracked as Forest Blizzard (also known as APT28) is said to have used OpenAI's offerings to conduct open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Some of the other notable hacking crews are listed below:

  • Emerald Sleet (aka Kimsuky), a North Korean threat actor that has used LLMs to identify experts, think tanks, and organizations focused on defense issues in the Asia-Pacific region, understand publicly known vulnerabilities, assist with basic scripting tasks, and prepare content that could be used in phishing campaigns.
  • Crimson Sandstorm (aka Imperial Kitten), an Iranian threat actor that has used LLMs to create code snippets related to app and web development, generate phishing emails, and explore common ways malware can evade detection.
  • Charcoal Typhoon (aka Aquatic Panda), a Chinese threat actor that has used LLMs to research various companies and vulnerabilities, generate scripts, create content likely to be used in phishing campaigns, and identify techniques for post-compromise behavior.
  • Salmon Typhoon (aka Maverick Panda), a Chinese threat actor that has used LLMs to translate technical documents, retrieve publicly available information on multiple intelligence agencies and regional threat actors, fix coding errors, and research stealth tactics to evade detection.

Microsoft said it is also formulating a set of principles to mitigate the risks posed by the malicious use of AI tools and APIs by advanced persistent threats (APTs), advanced persistent manipulators (APMs), and cybercriminal syndicates, and to devise effective guardrails and safety mechanisms around its models.

“These principles include identification of and action against malicious threat actors’ use, notification to other AI service providers, collaboration with other stakeholders, and transparency,” Redmond said.
