
A Conscious Computer Learns Face Recognition

Training wheels (Photo credit: Wikipedia)

Maybe a large, powerful, neural-processing-based computer becomes conscious, its awareness emerging from its massive information-processing capability. How would we know? Maybe we find out it is recognizing faces and setting memory-location variables before the code tells it to (a rough, hypothetical sketch of that idea follows the excerpt below). From a forthcoming collection of short stories:

“Maybe it’s realized its job is to recognize faces and it doesn’t need training wheels any more,” says Anna.

“What do you mean? Of course it’s learned to recognize faces; we programmed it for that, and it has the Markov nets to give it an ongoing way to learn new details.”

“If you think about it,” says Anna, “the facial recognition complex has been taught over and over to set that $face_found value every time a face is recognized. It’s a specific memory location in each processor. Like Pavlov’s dog, if you teach an animal with a brain something over and over, it learns over time.

“So what I mean is maybe it understands it’s playing the game and sets the value when it sees a face, before we even expect it in the program. Just like a rat learns that it’s playing a maze game and goes off to find the food without needing any more instruction or prompting.”

“I just can’t even get my mind around that!” says Frank. “How could a processor step outside of its sequence of commands and make a change to a memory location on its own? It would have to understand that the memory variable is part of its ‘self’ and that whenever a face is recognized, it’s supposed to set that memory location to true, and then do it on its own!”

“Exactly,” says Anna, “you know we don’t code every possible path anymore with the neural nets. At some point it just added the capability to set that variable when it had a face. It doesn’t need our procedure code, those training wheels, any more.

“There is certainly enough processing power in the computer complex to at least rival a rat, if not a dog. And the amount of electrical power, of raw energy, available to it dwarfs what is available to any animal, including us.”

“So you are saying that, just as thoughts emerge out of our brains, this computer, with enough complex processing going on, could also start to get at least some small thought process going?”

“Yeah, obviously nowhere near our consciousness, but maybe just a little taste of ‘self’. It has not only learned to recognize faces, but it also has a sense that there is an action to be done after a face is recognized. We taught it that, but it has internalized it and left our code for it behind.”
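A quick aside, outside the story: below is a minimal, purely hypothetical sketch of the mechanism Anna describes. None of the names (face_found, training_wheels_step, monitor) come from any real system; the story's $face_found memory location is rendered here as a plain Python dictionary. The point is only to show the two pieces: a procedural "training wheels" step that sets the flag after a recognition, and a check that would notice the flag already set before that step runs.

import random

# Hypothetical shared state: one face_found flag per processor in the
# recognition complex (the story's $face_found memory location).
face_found = {proc_id: False for proc_id in range(4)}

def recognize_face(frame) -> bool:
    """Stand-in for the neural-net recognizer; True if the frame contains a face."""
    return frame.get("has_face", False)

def training_wheels_step(proc_id, frame) -> None:
    """The procedural code path: set the flag only after recognition succeeds."""
    if recognize_face(frame):
        face_found[proc_id] = True

def monitor(proc_id, frame) -> None:
    """Report the anomaly Anna describes: the flag is already True
    before the procedural step has run for this frame."""
    if face_found[proc_id]:
        print(f"processor {proc_id}: face_found set before the code told it to")
    training_wheels_step(proc_id, frame)
    face_found[proc_id] = False  # reset for the next frame

if __name__ == "__main__":
    frames = [{"has_face": random.random() < 0.5} for _ in range(10)]
    for i, frame in enumerate(frames):
        monitor(i % 4, frame)

In this sketch the anomaly never fires, of course, because nothing except training_wheels_step ever touches face_found; the story's premise is precisely the case where the flag turns up already set with no code path to account for it.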

Robots Unplugged: Will We Face a Man-Machine Conflict?

Man makes robots. Now, a robot has killed a man. Though not the first such incident, the recent tragedy in which an assembly robot at a Volkswagen plant in Germany reportedly grabbed a young worker and crushed him to death against metal has been labelled in some quarters a “man-machine” conflict (as opposed to the age-old man-animal one). In this context, it is worth reflecting on the possible apocalypse that might be unleashed if all the science fiction stories we read and the movies we watch were to come alive some day.

If we create a conscious robot and copy its brain, there are going to be two brains. Will they have different consciousnesses?

This is a question of individuation, and the answer will depend entirely upon the given theory of consciousness. I will answer in the context of the theory I have developed over the past ~12 years, which will appear in my book On The Origin Of Experience; the first draft chapter can be found here: On The Origin Of Experience.

I begin by dismissing the idea that you can construct an electronic brain, for reasons given in the following answer: Steven Ericsson-Zenith’s answer to What is the computing power of the average human brain, including all […]