Naked Science Forum
On the Lighter Side => New Theories => Topic started by: smart on 29/03/2018 18:42:18
-
Is it possible to "reverse engineer" artificial intelligence systems in order to support the democratization of public research in neuroscience?
What do you think?
tk
-
I'm not exactly sure what you're asking, but any computer system can be hacked in principle.
-
Is it possible to "reverse engineer" artificial intelligence systems
Yes, some of the training material used by AI researchers is available on the web.
- You could feed that into your own Neural Network and come up with your own AI.
- You would effectively be recreating what the researchers did in creating their AI.
Or you could do what these researchers probably did, and trawl the internet for training material.
- For a self-driving car, you can get more training material by mounting the appropriate sensors on cars driven by human drivers.
But once you have a working AI, you have to test it under a wide variety of conditions.
- For a self-driving car, you can put it on real roads, with a human safety driver, ready to jump in if things get out of control.
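The "feed training material into your own Neural Network" step can be sketched as a toy example. Here a tiny two-layer network is trained from scratch in plain NumPy on the XOR truth table, which stands in for real downloaded training data; the architecture and hyperparameters are illustrative, not from any particular research project:

```python
import numpy as np

# Toy stand-in for downloaded "training material": the XOR truth table.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, out = forward(X)
loss_before = float(np.mean((out - y) ** 2))

lr = 0.5
for _ in range(5000):
    h, out = forward(X)
    # Backpropagation for the logistic (cross-entropy) loss
    d_out = out - y
    d_h = (d_out @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(axis=0)

_, out = forward(X)
loss_after = float(np.mean((out - y) ** 2))
print(loss_before, "->", loss_after)  # training error drops
```

A real system would use a framework such as TensorFlow or PyTorch and vastly more data, but the workflow - gather public training material, train your own network, compare behaviour - is the same.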
-
Suppose I want to demonstrate security vulnerabilities in artificial intelligence systems. Is the best way to penetrate the system through the front door or the back door?
Specifically, let's say a remote attacker can send mangled commands to a host device in order to compromise the system and gain access to internal/private data. What would be the best defense or counter-measure to mitigate this threat?
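One standard counter-measure to mangled-command attacks is strict input validation at the boundary: enforce a size limit, parse with a strict format, and whitelist command names before anything reaches internal logic. A minimal sketch (the command vocabulary and message schema here are hypothetical, just to illustrate the pattern):

```python
import json

# Hypothetical command vocabulary for the host device.
ALLOWED_COMMANDS = {"status", "read_sensor", "set_mode"}
ALLOWED_MODES = {"idle", "active"}

def validate(raw: bytes, max_len: int = 1024) -> dict:
    """Reject oversized, malformed, or unexpected input at the boundary,
    before it can reach any internal logic."""
    if len(raw) > max_len:
        raise ValueError("oversized message")
    try:
        msg = json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError) as exc:
        raise ValueError("not valid UTF-8 JSON") from exc
    if not isinstance(msg, dict):
        raise ValueError("not a JSON object")
    if msg.get("cmd") not in ALLOWED_COMMANDS:
        raise ValueError("unknown command")
    if msg.get("cmd") == "set_mode" and msg.get("mode") not in ALLOWED_MODES:
        raise ValueError("bad parameter")
    return msg

print(validate(b'{"cmd": "status"}'))        # well-formed: accepted
try:
    validate(b'\xff\xfe{"cmd": "rm -rf"}')   # mangled bytes: rejected
except ValueError as e:
    print("rejected:", e)
```

Whitelisting (accept only what you expect) is generally safer than blacklisting (reject what you know is bad), and should be combined with defense in depth: least-privilege access to the private data, authentication of the sender, and logging of rejected messages.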
-
Suppose I want to demonstrate security vulnerabilities in artificial intelligence systems.
It's fairly easy - these researchers have automated the demonstration of security vulnerabilities in Artificial Neural Networks.
The examples are amazing, and begin at about 40 seconds.
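The automated demonstrations described are presumably "adversarial examples": tiny, deliberately chosen perturbations of an input that flip a trained network's prediction. A minimal sketch of the idea on a toy linear classifier, in the spirit of the fast-gradient-sign method (all data and parameters here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two easily separable 2-D classes (toy stand-in for real data).
X = np.vstack([rng.normal(-2.0, 0.5, (100, 2)),
               rng.normal(+2.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Train a logistic-regression classifier by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = p - y
    w -= 0.1 * (X.T @ g) / len(y)
    b -= 0.1 * g.mean()

x = X[100]                # a class-1 point, classified correctly
logit = x @ w + b         # positive => predicted class 1

# Adversarial step: move against the gradient of the score, with a
# step size just large enough to cross the decision boundary.
eps = logit / np.abs(w).sum() + 0.1
x_adv = x - eps * np.sign(w)

print(x @ w + b > 0, x_adv @ w + b > 0)  # prints: True False
```

For a deep network the gradient is taken through the whole model rather than read off the weights, but the vulnerability is the same: a small, targeted nudge to the input changes the output, which is exactly what makes these attacks a security concern for deployed AI.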