
A way to make deep neural networks immune to hacking?

The code mimics the mechanisms by which the immune system learns to identify antigens, instead learning to characterize the dubious inputs to the machine learning algorithm. Image credit: Ren Wang



The adaptive immune system serves as a template for defending neural nets from confusion-sowing attacks

If a sticker on a banana can make it show up as a toaster, how might strategic vandalism warp how an autonomous vehicle perceives a stop sign? Now an immune-inspired defense system for neural networks, designed by engineers, biologists and mathematicians at the University of Michigan, can ward off such attacks.

Deep neural networks are a subset of machine learning algorithms used for a wide variety of classification problems. These include image identification and machine vision (used by autonomous vehicles and other robots), natural language processing, language translation and fraud detection. However, it is possible for a nefarious person or group to adjust the input slightly and send the algorithm down the wrong train of thought, so to speak. To protect algorithms against such attacks, the Michigan team developed the Robust Adversarial Immune-inspired Learning System.
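To make the threat concrete, here is a minimal sketch of such an input manipulation on a toy linear classifier (an illustration of the general idea, not the team's code). The weights, input, and budget are all invented for the example; the nudge follows the gradient's sign, in the style of the fast gradient sign method.

```python
import numpy as np

# Toy linear classifier: the predicted class is the sign of w . x.
w = np.array([1.0, -2.0, 0.5])        # classifier weights
x = np.array([0.2, 0.1, 0.3])         # clean input; w @ x = 0.15 > 0

def predict(weights, inp):
    return 1 if weights @ inp > 0 else -1

# An attacker who knows the gradient (just w, for a linear score) nudges
# every feature against the score by a small budget epsilon.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

print(predict(w, x))      # 1: the clean input is classified positive
print(predict(w, x_adv))  # -1: a barely visible change flips the class
```

No single feature moves by more than 0.1, yet the prediction flips; against a deep network the same trick is done by differentiating through the whole model.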

“RAILS represents the very first approach to adversarial learning that is modeled after the adaptive immune system, which operates differently than the innate immune system,” said Alfred Hero, the John H. Holland Distinguished University Professor, who co-led the work published in IEEE Access.

While the innate immune system mounts a general attack on pathogens, the mammalian adaptive immune system can generate new cells designed to defend against specific pathogens. It turns out that deep neural networks, already inspired by the brain’s system of information processing, can take advantage of this biological process, too.

“The immune system is built for surprises,” said Indika Rajapakse, associate professor of computational medicine and bioinformatics and co-leader of the study. “It has an amazing design and will always find a solution.”

RAILS works by mimicking the natural defenses of the immune system to identify and ultimately neutralize suspicious inputs to the neural network. To begin developing it, the biological team studied how the adaptive immune system of mice responded to an antigen. The experiment used tissues from genetically modified mice that express fluorescent markers on their B cells.

The team created a model of the immune system by culturing cells from the spleen together with those of bone marrow, representing a headquarters and garrison of the immune system. This system enabled the biological team to track the development of B cells, which starts as a trial-and-error approach to designing a receptor that binds to the antigen. Once the B cells converge on a solution, they produce both plasma B cells, which capture any antigens present, and memory B cells, which prepare for the next attack.
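The trial-and-error process described above is known as clonal selection: candidate receptors are mutated, and the tightest binders are kept and cloned. A toy sketch of that loop, with receptors as bit strings and binding affinity as a simple match count (an illustration of the biological idea, not the RAILS implementation):

```python
import random

random.seed(0)
ANTIGEN = [1, 0, 1, 1, 0, 0, 1, 0]    # the target pattern to bind

def affinity(receptor):
    # Number of matching positions: higher means tighter binding.
    return sum(r == a for r, a in zip(receptor, ANTIGEN))

def mutate(receptor, rate=0.2):
    # Flip each bit with a small probability (somatic hypermutation).
    return [1 - b if random.random() < rate else b for b in receptor]

# Start from a random population of candidate receptors.
population = [[random.randint(0, 1) for _ in ANTIGEN] for _ in range(20)]

for generation in range(30):
    population.sort(key=affinity, reverse=True)
    survivors = population[:5]                            # select best binders
    clones = [mutate(s) for s in survivors for _ in range(3)]
    population = survivors + clones                       # clonal expansion

best = max(population, key=affinity)
print(affinity(best), "/", len(ANTIGEN))  # converges toward a perfect match
```

Because the best binders survive each round unmutated, affinity can only improve over generations, mirroring the convergence the biologists observed before plasma and memory B cells are produced.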

Stephen Lindsly, a doctoral student in bioinformatics at the time, analyzed the data generated in Rajapakse’s lab and acted as a translator between the biologists and engineers. Hero’s team then modeled that biological process on computers, blending the biological mechanisms into the code. They tested the RAILS defenses with adversarial inputs, then compared the learning curve of the B cells attacking antigens with that of the algorithm learning to exclude the bad inputs.

“We weren’t sure that we had really captured the biological process until we compared the learning curves of RAILS to those extracted from the experiments,” Hero said. “They were exactly the same.”

Not only was RAILS an effective biomimic, it also outperformed two of the most common machine learning methods used to combat adversarial attacks: Robust Deep k-Nearest Neighbor and convolutional neural networks.
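The Deep k-Nearest Neighbor baseline mentioned above rests on a simple idea: classify an input by majority vote among its nearest training examples and treat disagreement among those neighbors as a warning sign, since adversarial inputs tend to land where labels are mixed. A simplified sketch of that voting scheme on raw features (an assumed simplification for illustration; the real defense operates on a deep network's internal representations):

```python
import numpy as np

def knn_with_credibility(train_x, train_y, query, k=5):
    # Majority vote among the k nearest training points, plus a
    # "credibility" score: 1.0 means the neighbors are unanimous.
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    winner = int(labels[np.argmax(counts)])
    credibility = counts.max() / k
    return winner, credibility

# Two well-separated synthetic clusters stand in for training data.
rng = np.random.default_rng(1)
train_x = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),   # class 0
                     rng.normal(+2.0, 1.0, size=(50, 2))])  # class 1
train_y = np.array([0] * 50 + [1] * 50)

label1, cred1 = knn_with_credibility(train_x, train_y, np.array([2.1, 1.9]))
label2, cred2 = knn_with_credibility(train_x, train_y, np.array([0.0, 0.0]))
print(label1, cred1)   # deep inside class 1: unanimous neighbors
print(label2, cred2)   # near the boundary: credibility may drop
```

RAILS goes further than this static vote: instead of only consulting fixed neighbors, it generates new candidate "defender" examples, immune-style, in response to each suspicious input.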

“One very promising part of this work is that our general framework can defend against different types of attacks,” said Ren Wang, a research fellow in electrical and computer engineering, who was primarily responsible for the development and implementation of the software.

The researchers used image identification as the test case, evaluating RAILS against eight types of adversarial attacks across several datasets. It showed improvement in every case, including against the most damaging type, known as a Projected Gradient Descent attack. In addition, RAILS improved the overall accuracy. For instance, it helped correctly identify adversarially altered images of a chicken and an ostrich, which the network had misread as a cat and a horse, as two birds.
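Projected Gradient Descent is essentially the one-step gradient trick repeated: many small steps against the model's score, each followed by a projection that keeps the total perturbation inside a small allowed region around the original input. A toy sketch on a linear score (illustrative only; real PGD attacks differentiate through a deep network, and the numbers here are invented):

```python
import numpy as np

w = np.array([1.0, -2.0, 0.5])          # toy linear model: score = w . x
x_clean = np.array([0.2, 0.1, 0.3])     # w @ x_clean = 0.15 (positive class)
epsilon, step, n_steps = 0.1, 0.03, 10  # budget, step size, iterations

x_adv = x_clean.copy()
for _ in range(n_steps):
    grad = w                            # d(w.x)/dx for a linear score
    x_adv = x_adv - step * np.sign(grad)                 # descend the score
    # Project back into the L-infinity ball of radius epsilon around x_clean.
    x_adv = x_clean + np.clip(x_adv - x_clean, -epsilon, epsilon)

print(w @ x_clean)  # 0.15: classified positive
print(w @ x_adv)    # negative after the attack, with every feature
                    # changed by at most epsilon
```

The projection step is what makes the attack hard to defend against: the final input stays within the same tiny budget as a one-step attack but has been optimized far more thoroughly.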

“This is an amazing example of using mathematics to understand this beautiful dynamical system,” Rajapakse said. “We may be able to take what we learned from RAILS and help reprogram the immune system to work more quickly.”

Future efforts from Hero’s team will focus on reducing the response time from milliseconds to microseconds.


Original Article: Immune to hacks: Inoculating deep neural networks to thwart attacks

More from: University of Michigan 


