Neural networks: a computing model inspired by the human brain
Neural networks are computer systems loosely modeled on the inner workings of the brain. They underlie many AI-enabled technologies, such as facial recognition and vehicle routing, and are used to recognize patterns and objects in images, audio, video and other data.
In the brain's biological neural network, billions of neurons communicate via electrical signals. In a machine's artificial neural network, layered banks of math operations, called neurons, communicate via numbers, which they pass to one another as the inputs to weighted equations.
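To make that concrete, here is a minimal sketch of what a single artificial neuron computes: multiply each input by a weight, add the results, and squash the total through an activation function. The inputs, weights, and bias below are arbitrary illustrative values, not numbers from any real network.

```python
import numpy as np

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum passed through an activation.

    The sigmoid squashes the sum into a number between 0 and 1, which
    the neuron then shares with neurons in the next layer.
    """
    weighted_sum = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-weighted_sum))  # sigmoid activation

# Illustrative values only: three input signals and three learned weights.
inputs = np.array([0.5, 0.1, 0.9])
weights = np.array([0.4, -0.6, 0.2])
print(neuron(inputs, weights, bias=0.1))  # a single number, passed onward
```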
In an AI's neural net, neurons may fire in response to certain aspects of an image. In some networks, data flows in one direction only, moving from general, low-resolution patterns toward detailed, filtered representations of objects; these are called feedforward networks. In others, such as recurrent networks, the network sends data back and forth.
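A feedforward pass can be sketched in a few lines: each layer's output simply becomes the next layer's input, so data moves in one direction from the first layer to the last. The layer sizes and random weights below are placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights for a small feedforward net: 8 inputs -> 16 -> 8 -> 2.
layers = [
    rng.standard_normal((8, 16)),  # first layer weights
    rng.standard_normal((16, 8)),  # hidden layer weights
    rng.standard_normal((8, 2)),   # output layer weights
]

def forward(x, layers):
    """Push data through the network in one direction, layer by layer."""
    for w in layers:
        x = np.maximum(0.0, x @ w)  # ReLU: a neuron "fires" only on positive sums
    return x

print(forward(rng.standard_normal(8), layers))
```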
One of the more fascinating aspects of neural networks is that it is not always clear how they make decisions. How do they differentiate between a dog and a cat? Why do they make the moves they do in chess, Go, and StarCraft? Each network is different, and it is not always obvious what goes on inside its operations.
Some researchers have tried to reverse engineer neural networks to learn how they reach their conclusions. To do that, developers at Google and OpenAI began pulling partially processed data from the middle layers of neural networks and rendering it as images, to visualize what the AI sees.
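In spirit, that data-pulling step is simple: run the forward pass, but keep a copy of each layer's intermediate output instead of discarding it. The sketch below extends the feedforward example above with untrained placeholder weights; it is not Google's or OpenAI's actual tooling, which goes on to optimize or render these arrays into images.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder weights; a real visualization would use a trained image model.
layers = [
    rng.standard_normal((64, 32)),
    rng.standard_normal((32, 16)),
    rng.standard_normal((16, 4)),
]

def forward_with_snapshots(x, layers):
    """Run the forward pass, saving each layer's intermediate activations."""
    snapshots = []
    for w in layers:
        x = np.maximum(0.0, x @ w)
        snapshots.append(x.copy())  # the partially processed data a visualizer renders
    return x, snapshots

_, snaps = forward_with_snapshots(rng.standard_normal(64), layers)
for i, s in enumerate(snaps):
    print(f"layer {i}: {s.shape} activations")  # early layers hold coarser patterns
```

Published visualizations start from intermediate arrays like these and typically search for input images that excite them most strongly.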
Results have been surprising. If the neural network is learning to identify a dog, for example, a visualization pulled from early in the process may show translucent, geometric shapes on top of a given picture. That's the AI looking for general, low-resolution patterns. Data pulled later, as the neural net applies greater detail to an image, often produces hallucinogenic pictures covered in objects that humans can recognize. An animal-like shape with several snouts, for example, may eventually become a dog in the machine's eye.
As neural networks surpass humans at pattern and object recognition, at games, and in other domains, it will become increasingly worrisome that we don't understand how they make choices.
Jason Yosinski, who works at Uber AI Labs, told the New York Times that machine decisions may only become harder to understand:
To a certain extent, as these networks get more complicated, it is going to be fundamentally difficult to understand why they make decisions... It is kind of like trying to understand why humans make decisions.