NIST warns of security and privacy risks resulting from rapid implementation of AI systems


The US National Institute of Standards and Technology (NIST) draws attention to the privacy and security challenges arising from the increased deployment of artificial intelligence (AI) systems in recent years.

“These security and privacy challenges include the potential for hostile manipulation of training data, hostile exploitation of model vulnerabilities to negatively impact AI system performance, and even malicious manipulation, modification, or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary business data,” NIST said.

As AI systems are rapidly integrated into online services, driven in part by the rise of generative AI systems such as OpenAI’s ChatGPT and Google Bard, the models that power these technologies face a number of threats at various stages of the machine learning operations pipeline.

These include corrupted training data, security flaws in software components, data and model poisoning, supply chain weaknesses, and privacy breaches arising from prompt injection attacks.
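
As a rough illustration of the data-poisoning risk mentioned above (this sketch is not from the NIST report; the dataset, model, and 20% poisoning rate are arbitrary assumptions), the snippet below flips a fraction of training labels in a toy scikit-learn classifier and compares its test accuracy against a clean baseline.

```python
# Minimal label-flipping poisoning sketch (illustrative only, not from the NIST report).
# Assumes numpy and scikit-learn are installed; dataset, model, and poison rate are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy binary classification data standing in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    """Train on the given training labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return model.score(X_test, y_test)

# Clean baseline.
clean_acc = train_and_score(y_train)

# Poison 20% of the training labels by flipping them (a crude poisoning attack).
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]
poisoned_acc = train_and_score(y_poisoned)

print(f"clean accuracy:    {clean_acc:.3f}")
print(f"poisoned accuracy: {poisoned_acc:.3f}")
```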

“For the most part, software developers need more people to use their product so it can get better with exposure,” said NIST computer scientist Apostol Vassilev. “But there’s no guarantee that the exposure will be good. A chatbot can spit out bad or toxic information when prompted with carefully crafted language.”

The attacks, which can have significant consequences for availability, integrity and privacy, are broadly classified as follows:

  • Evasion attacks, which aim to generate adversarial output after a model has been deployed
  • Poisoning attacks, which target the training phase of the algorithm by introducing corrupted data
  • Privacy attacks, which aim to collect sensitive information about the system or data it is trained on by asking questions that bypass existing guardrails
  • Abuse attacks, which aim to compromise legitimate sources of information, such as a web page containing incorrect pieces of information, in order to repurpose the system’s intended use
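
To make the evasion category above concrete, the following sketch (not drawn from the NIST report; the toy model, random input, and epsilon value are assumptions for illustration) applies the fast gradient sign method (FGSM) to a small PyTorch classifier, nudging each input feature in the direction that increases the loss.

```python
# Minimal FGSM evasion sketch against a toy PyTorch model (illustrative only).
# The model, input, and epsilon are arbitrary placeholders, not anything from NIST.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; a real evasion attack would target a deployed, trained model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # benign input
y = torch.tensor([1])                        # its true label
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM: step each feature in the sign of the loss gradient; on a trained model
# this small perturbation is often enough to change the prediction.
epsilon = 0.25
x_adv = (x + epsilon * x.grad.sign()).detach()

with torch.no_grad():
    print("original prediction:   ", model(x).argmax(dim=1).item())
    print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```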

Such attacks, according to NIST, can be carried out by threat actors with full knowledge of the AI system (white-box), minimal knowledge (black-box), or a partial understanding of some of its aspects (gray-box).

The agency further noted the lack of robust mitigation measures to counter these risks, and urged the broader technology community to “come up with better defenses.”

The development comes more than a month after Britain, the US and international partners from sixteen other countries released guidelines for developing secure artificial intelligence (AI) systems.

“Despite the significant progress that AI and machine learning have made, these technologies are vulnerable to attacks that can cause spectacular failures with serious consequences,” Vassilev said. “There are theoretical problems with securing AI algorithms that simply haven’t been solved yet. If anyone says otherwise, they’re selling snake oil.”

 
