CMSWire.com September 14, 2020 – Looking at What AI Can Bring to Your Organization’s Cybersecurity Strategy

Data Drives Cyber Responses

However, an AI system is only as good as the data being fed into it. Just as a child taught bad behaviors will carry them into adulthood, an AI system fed intentionally malicious or inaccurate information will learn to behave the way the attacker wants it to, not the way it was designed to behave, said Steve Tcherchian, chief information security officer at XYPRO and a regular contributor to and presenter at EC-Council, a cybersecurity technical certification body.
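The idea that tainted training data steers a model's behavior can be sketched with a toy example. The code below is purely illustrative and not from the article: a minimal nearest-centroid classifier is trained twice, once on clean sensor readings and once on the same data plus attacker-injected points with flipped labels, and the poisoned model starts misclassifying readings the clean model handled correctly.

```python
# Illustrative sketch of training-data poisoning (all data here is hypothetical).
from statistics import mean

def train(samples):
    """Compute one centroid per label from (value, label) pairs."""
    by_label = {}
    for value, label in samples:
        by_label.setdefault(label, []).append(value)
    return {label: mean(values) for label, values in by_label.items()}

def predict(centroids, value):
    """Assign the label of the nearest centroid."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# Clean training data: "low" readings near 1.0, "high" readings near 9.0.
clean = [(v, "low") for v in (0.8, 1.0, 1.2)] + \
        [(v, "high") for v in (8.8, 9.0, 9.2)]

# Attacker injects high-valued readings mislabeled as "low",
# dragging the "low" centroid toward the "high" region.
poisoned = clean + [(v, "low") for v in (8.5, 9.5, 10.0, 10.5)]

clean_model = train(clean)
bad_model = train(poisoned)

print(predict(clean_model, 7.0))  # clean model: "high"
print(predict(bad_model, 7.0))    # poisoned model: "low"
```

The model itself is unchanged; only the data differs, which is exactly why attackers target the inputs rather than the algorithm.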

In less critical cases, this is a mild annoyance. Tcherchian cites smart homes as an example: these devices learn our habits and adjust themselves based on those inputs. “My Roomba has mapped my house based on the house and furniture layout,” he said. “If my daughter were to place random objects in its path and do this on a routine basis, the Roomba would eventually learn to avoid the area where it encountered an obstacle. That means that area would not be swept.”

In more extreme circumstances, manipulating AI input can be dangerous. Planes have used autopilot for years, and autopilot is getting smarter as AI technology advances, but flaws still exist because it depends on its inputs: one faulty input or sensor can have irrecoverable effects. If an attacker could get his hands on the inputs an AI system relies on to make decisions, the effects could be severe, especially considering that AI is being woven into our lives more and more without us even knowing. On a large scale, this could be very damaging.

To read the full article visit cmswire.com.