Google Open Sources Magika: AI-Powered File Identification Tool

Google has announced that it is open-sourcing Magika, an artificial intelligence (AI)-powered file type identification tool that allows defenders to accurately detect binary and text file types.

“Magika outperforms conventional file identification methods, providing an overall accuracy improvement of 30% and up to 95% higher precision on traditionally difficult-to-identify but potentially problematic content such as VBA, JavaScript, and PowerShell,” the company said.

The software uses a “custom, highly optimized deep learning model” that enables accurate identification of file types within milliseconds. Magika implements its inference functions using the Open Neural Network Exchange (ONNX).
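For a sense of how the open-sourced tool is used, the following is a minimal sketch based on the Python package Google published alongside the announcement; the identify_bytes call and the ct_label/score result fields mirror the snippet in Google's announcement, but exact names may vary between releases.

from magika import Magika

# Load Magika's bundled deep learning model; inference runs locally via ONNX.
m = Magika()

# Identify the content type of an in-memory byte buffer.
result = m.identify_bytes(b"function greet(name) { console.log('Hello, ' + name); }")
print(result.output.ct_label)  # predicted file type label, e.g. "javascript"
print(result.output.score)     # model confidence for that prediction

# Files on disk can be identified similarly with m.identify_path(<path>).

Installing the package also provides a magika command-line tool for scanning individual files or whole directories in bulk.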

Google says it uses Magika extensively internally to improve user security by sending Gmail, Drive, and Safe Browsing files to the appropriate security and content policy scanners.

In November 2023, the tech giant unveiled RETVec (short for Resilient and Efficient Text Vectorizer), a multilingual text processing model to detect potentially harmful content such as spam and malicious emails in Gmail.

Amid an ongoing debate over the risks of the rapidly developing technology and its misuse by nation-state actors associated with Russia, China, Iran, and North Korea to fuel their hacking efforts, Google said that deploying AI at scale could strengthen digital security and “tilt the cybersecurity balance from attackers to defenders.”

The company also emphasized the need for a balanced regulatory approach to the use and adoption of AI, in order to avoid a future where attackers can innovate but defenders are held back due to AI governance choices.

“AI enables security professionals and defenders to scale their work in threat detection, malware analysis, vulnerability detection, vulnerability fixing, and incident response,” the tech giant’s Phil Venables and Royal Hansen noted. “AI offers the best opportunity to upend the Defender’s Dilemma and tilt the scales of cyberspace to give defenders a decisive advantage over attackers.”

Concerns have also been raised about generative AI models’ use of web data for training purposes, which may include personal data.

“If you don’t know what your model will be used for, how can you ensure that downstream use respects data protection and people’s rights and freedoms?” the UK Information Commissioner’s Office (ICO) pointed out last month.

Furthermore, new research has shown that large language models can function as “sleeper agents” that appear harmless but can be programmed to exhibit deceptive or malicious behavior when specific criteria are met or special instructions are provided.

“Such backdoor behavior can be made persistent so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it),” researchers at AI startup Anthropic said in the study.



