Neural networks are not procedural code. They have long been treated as black boxes whose internal values and weights were assumed to tell us nothing useful. This article, and the post linked in the excerpt below, show that a great deal of information is actually embedded in these networks.
If a consciousness were to emerge in a very complex neural net, might it somehow manifest in the inner workings of the black box? Perhaps as activation patterns that shift more than expected, or as weights and feedback paths that change even when the network is not in training mode?
If so, we would need to think about what signs of awareness to look for, and build tools like this visualizer to detect them; a rough sketch of such a monitor appears after the excerpt below.
Two weeks ago we blogged about a visualization tool designed to help us understand how neural networks work and what each layer has learned. In addition to gaining some insight on how these networks carry out classification tasks, we found that this process also generated some beautiful art.
Full Post at googleresearch.blogspot.com
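To make the speculation above concrete, here is a minimal sketch, assuming a PyTorch model, of the two checks it suggests: logging per-layer activation statistics during inference, and verifying that weights do not change outside of training. Everything here (the toy model, the drift metric, the helper names) is a hypothetical illustration, not part of the visualizer the post describes.

```python
import copy
import torch
import torch.nn as nn

def attach_activation_monitors(model: nn.Module):
    """Register forward hooks that log each layer's mean activation per inference."""
    stats = {}

    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor):
                stats.setdefault(name, []).append(output.detach().mean().item())
        return hook

    for name, module in model.named_modules():
        if name:  # skip the root module itself
            module.register_forward_hook(make_hook(name))
    return stats

def weights_changed(model: nn.Module, snapshot) -> bool:
    """Compare current weights against an earlier snapshot of the state dict."""
    current = model.state_dict()
    return any(not torch.equal(current[k], v) for k, v in snapshot.items())

# Usage sketch: snapshot the weights, run inference, then look for anomalies.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
model.eval()
snapshot = copy.deepcopy(model.state_dict())
stats = attach_activation_monitors(model)

with torch.no_grad():
    for _ in range(100):
        model(torch.randn(1, 8))

# Flag layers whose activation statistics drift across inferences.
for name, values in stats.items():
    drift = max(values) - min(values)
    print(f"{name}: activation-mean drift = {drift:.4f}")

print("weights changed outside training:", weights_changed(model, snapshot))
```

Of course, ordinary inputs make activations shift too; the hard part is defining "more than expected," which this sketch sidesteps with a naive min-to-max range statistic.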