
This “Self-Aware” Algorithm Can Prevent Sophisticated Cyberattacks


Cyberattacks have become so complex and sophisticated in recent years that conventional security tools such as antivirus software and firewalls are no longer effective on their own.

Since most software is written by humans, it tends to be flawed. Attackers exploit even the most minute security flaw in a system to gain access to data or even take over the system completely. The discovery of zero-day vulnerabilities and security flaws has become increasingly common. Even if a system seems bulletproof for now, it is only a matter of time before someone finds a loophole to breach its security.

Protecting against these attacks requires constant monitoring and has become a real headache for system administrators. A team of researchers at Purdue University has therefore developed a new security system that relies entirely on artificial intelligence (AI). In a new paper, the researchers describe a computer model that makes "cyber-physical" systems self-aware and able to self-heal.

The system sends one-time signals to every connected component, turning each one into an active monitor on the lookout for potential intrusions. The algorithm is so "self-aware" that even if an attacker works from a perfect copy of the model itself, the system can detect the falsified data and prevent the attack.
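The paper's actual implementation is not reproduced here. As a rough, hypothetical illustration of the idea that a one-time signal lets a receiving component spot falsified data, the Python sketch below attaches a one-time signature to each reading and verifies it downstream; the function names and the use of HMAC are assumptions for illustration, not the researchers' covert cognizance technique.

```python
import hmac, hashlib, os

# Hypothetical illustration, not the Purdue implementation: each outgoing
# reading carries a one-time signature derived from a shared secret and a
# fresh nonce, so the receiving component can spot falsified or replayed data.

SECRET = os.urandom(32)  # provisioned out-of-band, never shared with attackers

def sign_reading(value: float, nonce: bytes) -> bytes:
    """Attach a one-time signature to a sensor/control value before sending it."""
    msg = nonce + repr(value).encode()
    return hmac.new(SECRET, msg, hashlib.sha256).digest()

def verify_reading(value: float, nonce: bytes, tag: bytes) -> bool:
    """A downstream component checks the signature before trusting the value."""
    expected = hmac.new(SECRET, nonce + repr(value).encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Sender side
nonce = os.urandom(16)          # one-time: a new nonce for every reading
value = 42.7                    # e.g. a coolant temperature
tag = sign_reading(value, nonce)

# Receiver side: tampered data fails verification even if the attacker
# perfectly replicates the plant model, because they lack the secret.
assert verify_reading(value, nonce, tag)
assert not verify_reading(value + 0.5, nonce, tag)
```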

“We call it covert cognizance,” said Hany Abdel-Khalik, lead author of the paper and a research professor at Purdue University’s Center for Education and Research in Information Assurance and Security. “Imagine having a bunch of bees hovering around you. Once you move a little bit, the whole network of bees responds, so it has that butterfly effect. Here, if someone sticks their finger in the data, the whole system will know that there was an intrusion, and it will be able to correct the modified data.”

According to the researchers, any defence system is only as strong as an attacker's ignorance of the model behind it; if the attacker knows the defence model well enough, they can theoretically breach it.

“When you have components that are loosely coupled with each other, the system really isn’t aware of the other components or even of itself,” said Arvind Sundaram, a graduate student in nuclear engineering at Purdue. “It just responds to its inputs. When you’re making it self-aware, you build an anomaly detection model within itself. If something is wrong, it needs to not just detect that, but also operate in a way that doesn’t respect the malicious input that’s come in.”
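To make Sundaram's point concrete, here is a minimal, hypothetical sketch (not the Purdue model) of a component that carries its own anomaly-detection model: it predicts its next value from toy first-order dynamics, flags measurements whose residual is implausibly large, and substitutes its own prediction rather than "respecting" the malicious input. The class name, dynamics, noise level, and threshold are all assumptions for illustration.

```python
# Illustrative sketch only: a self-aware component rejects inputs that
# disagree too strongly with its internal prediction of its own state.

class SelfAwareComponent:
    def __init__(self, decay: float = 0.95, threshold: float = 3.0):
        self.state = 0.0            # last trusted estimate of the quantity
        self.decay = decay          # toy first-order dynamics: x_next ~= decay * x
        self.threshold = threshold  # residuals above threshold * noise_std count as intrusions
        self.noise_std = 0.1        # expected sensor noise level

    def step(self, measurement: float) -> float:
        predicted = self.decay * self.state
        residual = abs(measurement - predicted)
        if residual > self.threshold * self.noise_std:
            # Anomaly: ignore the malicious input and correct it with the prediction.
            corrected = predicted
        else:
            corrected = measurement
        self.state = corrected
        return corrected

comp = SelfAwareComponent()
comp.state = 10.0
print(comp.step(9.6))    # plausible reading: accepted as-is
print(comp.step(25.0))   # falsified reading: rejected, replaced by the prediction
```

In this toy version the component never blindly trusts its inputs; the anomaly model lives inside the component itself, which is the "self-aware" behaviour the quote describes.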

Cover Image: Shutterstock
