Here’s how neuroscience can protect AI from cyberattacks


Deep learning has come a long way since the days it could only recognize hand-written characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars. Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as we humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err. These situations, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial…

This story continues at The Next Web
