Researchers warn of ‘poisoned’ AI chaos - and how to prevent it
Though advances in artificial intelligence (AI) have the potential to change our lives for the better, the technology's hunger for data can also wreak havoc, as it creates an ideal opening for 'poisoned' data to creep in.
Indeed, AI has opened an entirely new avenue for cyber attackers: by sneaking small doses of false or misleading information into valuable AI training sets, they can sabotage previously reliable models, per a report published by Tech Xplore on April 24.
The implications could be devastating. 'Poisoned' AI could, for instance, cause a self-driving vehicle to ignore red lights or trigger widespread electrical grid disruptions and outages, not to mention the potential ramifications in medicine, which increasingly relies on AI.
To address this problem, a team of cybersecurity researchers at Florida International University (FIU) has combined two emerging technologies – federated learning and blockchain – to train AI more securely, successfully detecting and removing 'poisoned' data before it can compromise training datasets.
According to Hadi Amini, lead researcher and FIU assistant professor in the Knight Foundation School of Computing and Information Sciences:
“We’ve built a method that can have many applications for critical infrastructure resilience, transportation cybersecurity, health care, and more.”
Federated learning and blockchain versus ‘poisoned’ AI
Specifically, the scientists first deployed federated learning, in which a mini version of the model trains directly on each user's device and shares only its updates (never any personal data) with the global model on a company's server. This setup, however, remains vulnerable to data-poisoning attacks.
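The on-device training and update-averaging flow described above can be sketched in a few lines. This is a toy illustration of the general federated-averaging idea, not FIU's actual system; the function names and the simplistic "training" step are assumptions for demonstration only.

```python
# Toy sketch of federated learning: each client trains on its own private
# data and shares only a model update with the server, never the data itself.

def local_update(global_weights, local_data, lr=0.1):
    """Illustrative 'training' step: nudge weights toward the local data mean."""
    target = sum(local_data) / len(local_data)
    return [w + lr * (target - w) for w in global_weights]

def fed_avg(client_updates):
    """Server-side aggregation: simple element-wise average of client updates."""
    n = len(client_updates)
    return [sum(u[i] for u in client_updates) / n
            for i in range(len(client_updates[0]))]

global_w = [0.0, 0.0]
# Each client's data stays on its device; only updates leave.
clients = [[1.0, 1.2, 0.8], [0.9, 1.1], [1.05, 0.95]]
updates = [local_update(global_w, data) for data in clients]
global_w = fed_avg(updates)
print(global_w)
```

The key property is that `fed_avg` only ever sees the update lists, so raw user data never reaches the server – which is also exactly why a dishonest client can slip a poisoned update into the average.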
In the words of Ervin Moore, a Ph.D. candidate in Amini’s lab and lead author of the study:
“Verifying whether a user’s data is honest or dishonest before it gets to the model is a challenge for federated learning. (…) So, we started thinking about blockchain to mitigate this flaw.”
They therefore brought in blockchain, in which data is stored in blocks linked chronologically into a shared chain. Each block carries its own fingerprint as well as the fingerprint of the previous block, which makes the chain virtually tamper-proof.
For their model, the researchers compared block updates, calculated whether outlier updates were potentially poisonous, recorded those suspect updates, and then discarded them from network aggregation. They are now working closely with collaborators from the National Center for Transportation Cybersecurity and Resiliency to bring quantum encryption into the mix.
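The compare-flag-discard pipeline above can be sketched as an outlier screen that runs before aggregation. The median-distance statistic and threshold below are illustrative assumptions, not the study's exact method.

```python
# Hedged sketch of pre-aggregation screening: flag updates that sit far
# from the median of all client updates, log them, and exclude them.
import statistics

def screen_updates(updates, threshold=3.0):
    """Split updates into (kept, flagged) by distance from the median."""
    med = statistics.median(updates)
    # Median absolute deviation as a robust spread estimate (assumed here).
    spread = statistics.median(abs(u - med) for u in updates) or 1e-9
    kept, flagged = [], []
    for u in updates:
        (flagged if abs(u - med) / spread > threshold else kept).append(u)
    return kept, flagged

updates = [0.11, 0.09, 0.10, 0.12, 5.0]   # the last update looks poisoned
kept, flagged = screen_updates(updates)
print(flagged)                             # [5.0] — recorded, then discarded
aggregated = sum(kept) / len(kept)         # only honest updates are averaged
print(aggregated)
```

Using the median rather than the mean matters here: a single extreme poisoned value can drag the mean arbitrarily far, but barely moves the median.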
As Amini, who also leads FIU’s team of cybersecurity and AI experts investigating secure AI for connected and autonomous transportation systems, pointed out, their ultimate goal is to “ensure the safety and security of America’s transportation infrastructure while harnessing the power of advanced AI to enhance transportation systems.”
Meanwhile, AI technology continues to push limits: scientists have just devised a generative AI model that can independently and accurately create new custom fragrances based only on user-defined descriptors such as 'floral,' 'woody,' and 'citrus.'