This is an interesting approach to exploring consciousness. Reproducing a specific test lets them look at the mechanisms or processes that produce a self-aware response. It will be interesting to see how specific they were in coding the responses generated. Is there awareness code that looks at responses and generates an ‘I’ view, or is that somehow a spontaneous, uncoded step through some sort of neural network or other non-procedural code?
If consciousness is really a purely materialistic phenomenon, then maybe a set of awareness and sentience modules that handle various general situations is not far from where we as humans have evolved. One way to look at our consciousness is as a set of circuits that provide awareness and thinking that ‘feels’ like us. But just as with the robot, those circuits would not be a separate conscious ‘feeling’ but a simulation, code.
A question is whether trying to engineer a consciousness that looks human would miss an emergent consciousness that develops from the machine itself. If that possibility is to be explored, you would need to take a different approach: trying to uncover a possible native consciousness.
In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.
They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.
Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says.
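Stripped to its logical core, the robot’s inference is quite simple. Here is a minimal sketch of it in Python; the function and variable names are mine, purely illustrative, and not the RPI lab’s actual implementation:

```python
# A minimal sketch of the "dumbing pill" inference -- illustrative only,
# not the actual code running on the robots.

def dumbing_pill_reasoning(can_still_speak: bool) -> str:
    """Model one robot's reasoning in the three-robot puzzle."""
    # Step 1: nothing the robot was told lets it deduce the answer,
    # so it attempts to announce its ignorance.
    utterance = "I don't know"

    # Step 2: it listens for its own voice. Only the un-silenced robot
    # hears anything.
    heard_own_voice = can_still_speak

    # Step 3: hearing itself is new evidence -- it cannot have been silenced.
    if heard_own_voice:
        return "Sorry, I know now! I was not given a dumbing pill."
    return utterance


print(dumbing_pill_reasoning(can_still_speak=True))   # the speaking robot updates its belief
print(dumbing_pill_reasoning(can_still_speak=False))  # a silenced robot learns nothing new
```

The interesting question the article raises isn’t this bit of logic itself, but whether the robots arrive at it through explicit code like this or through something less hand-written.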
Neural networks are not procedural code. They’ve been treated as a black box, as if we couldn’t really learn anything from looking at the internal values or weights. This article, and the post linked in the excerpt below, show that there is a lot of information embedded in these networks.
If a consciousness were to emerge in a very complex neural net, might it somehow manifest in the inner workings of the black box? Maybe patterns of activation that shift more than expected? Maybe a changing of weights and feedback even when not in training mode?
It seems we would need to think about what signs of awareness to look for, and build tools such as this visualizer to detect them.
Two weeks ago we blogged about a visualization tool designed to help us understand how neural networks work and what each layer has learned. In addition to gaining some insight on how these networks carry out classification tasks, we found that this process also generated some beautiful art.
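To make the kind of monitoring speculated about above a little more concrete, here is a minimal sketch using PyTorch forward hooks. It records a per-layer activation statistic on each pass and checks whether any weights drift while the network is not in training mode. The toy model, the choice of statistic and the names are all placeholders, assumptions of mine rather than anything from the visualization post:

```python
# A minimal sketch of activation/weight monitoring -- not a detector of
# "awareness", just the kind of instrumentation the idea would require.
import torch
import torch.nn as nn

model = nn.Sequential(          # stand-in for a much larger network
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()                    # inference mode: weights should never change

activation_stats = {}

def make_hook(name):
    def hook(module, inputs, output):
        # Record a simple per-layer statistic; unexpected shifts in these
        # over time would be the sort of anomaly worth flagging.
        activation_stats[name] = output.detach().mean().item()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

# Snapshot the weights, run an inference pass, then verify nothing moved.
before = {n: p.detach().clone() for n, p in model.named_parameters()}
with torch.no_grad():
    model(torch.randn(1, 64))

drift = max((p.detach() - before[n]).abs().max().item()
            for n, p in model.named_parameters())

print(activation_stats)   # per-layer activation means for this pass
print(drift)              # should be 0.0 -- weights changing outside of
                          # training would be the real surprise
```

Nothing here would find a consciousness, of course; it just shows that the black box is perfectly inspectable once you decide what to watch.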
How could a computer become conscious, and what might that consciousness be like? A computer programmer might envision one way it could happen, and what it might feel like to be a conscious computer. From a forthcoming set of short stories:
Ed couldn’t get that image of the locked-in computer out of his head. What if the computer he was working on did have some awareness, but had no way to share it? And what exactly might it be aware of? It would seem it would have to be some kind of awareness of its continuous, obsessive processing of the program or functions it’s performing.
When I’m working on some difficult code, I’m fully involved. My thoughts are spinning around, holding together all the parts of the game code that could be impacting the module I’m working on. As I write, I’m making sure that not only is a specific action happening, but that all the related visuals, audio and follow-on actions are correctly teed up.
These obsessive, focused thoughts would seem to be close to what it might be like to be a computer. As humans we can bounce around to seemingly endless topics and obsessions. But a computer is constrained to a fairly narrow range based on its program. It sure would feel obsessive playing the same game over and over for hours.
But when I think about it, I’m not aware of the millions of neurons in my head firing and doing their little programs. It’s at a much higher level; it’s about actions or plans or fears that I have as a person. Those higher-level thoughts just seem to emerge from all of that brain activity. Sort of like a thundercloud pops up from a bunch of colliding hot and cold air.
Could an awareness just emerge from the incessant processing of bits and instructions? And what would be the ‘me’ of a computer? Not all those lines of code, the bits and bytes of its processing. Those are more like our neurons, just a huge amount of data crunching.
It must be something higher, maybe the power flowing through it? Could it be a more conceptual level, like the whole scene it may show over and over? Or how the inputs from the gamer are impacting the changing scenes and actions of a battle in the game?
Maybe it doesn’t think of itself as running a program. Maybe it feels that it is thinking, living the program. Sort of like those obsessive thoughts I have, it feels that the program is its own thoughts? It doesn’t feel the execution of instructions, it’s just aware of the activity of calculation, of manipulating the screen and responding to the VR consoles attached to it?
My own obsessive thoughts sometimes seem like programs, they repeat images and ideas over and over. They respond a little to a new memory or thought, almost like a new input from the VR console, changing their texture slightly. But yes, very much like a program in my brain that keeps me focused on them over and over. And then maybe I just replace that program with a new one and start thinking about where to get dinner.
So would its awareness feel like mine? A sort of detached viewing of these thoughts playing on some inner TV? Or maybe more immediate, like the immersive, immediate feeling of pain, or focusing on writing or reading, or being in the zone in a sport? That seems more like it, a feeling of seeing repetitive patterns, the flow of VR inputs and the display changes in response. Just being in the moment with the activity going on.
So maybe its awareness could sense that some displays seemed off, that they didn’t react to an input from the VR the way it had seen in other patterns? Maybe a feeling that this was new, a change, when it noticed what we might call a bug? Could he somehow tap into that, use the awareness to help him fix the game? Could he somehow unlock that awareness if it existed?
Maybe a large, powerful, neural-processing-based computer becomes conscious, emerging from its massive information processing capability. How would we know? Maybe we find out it is recognizing faces and setting memory location variables before the code tells it to. From a forthcoming collection of short stories:
“Maybe it’s realized its job is to recognize faces and it doesn’t need training wheels any more,” says Anna.
“What do you mean? Of course it’s learned to recognize faces, we programmed it for that and it has the Markov nets to give it an ongoing way to learn new details.”
“If you think about it,” says Anna, “the facial recognition complex has been taught over and over to set that $face_found value every time a face is recognized. It’s a specific memory location in each processor. Like Pavlov’s dog, if you teach an animal with a brain something over and over, it learns over time.
So what I mean is maybe it understands it’s playing the game and sets the value when it sees a face, before we even expect it in the program. Just like a rat learns that it’s playing a maze game and goes off to find the food without needing any more instruction or prompting.”
“I just can’t even get my mind around that!” says Frank. “How could a processor step outside of its sequence of commands and make a change to a memory location on its own? It would have to understand that the memory variable is part of its ‘self’ and that whenever a face is recognized, it’s supposed to set that memory location to true – and then do it on its own!”
“Exactly,” says Anna, “you know we don’t code every possible path anymore with the neural nets. At some point it just added the capability to set that variable when it had a face. It doesn’t need our procedural code, those training wheels, any more.
There is certainly enough processing power in the computer complex to at least rival a rat, if not a dog. And the amount of electrical power, of raw energy, available to it, dwarfs any animal, including us.”
“So you are saying that like we have thoughts that emerge out of our brains, this computer, with enough complex processing going on, could also start to get at least some small thought process going?”
“Yeah, obviously nowhere near our consciousness, but maybe just a little taste of ‘self’. It has not only learned to recognize faces, but has a sense that it knows there is an action to be done after a face is recognized. We taught it that but it has internalized it, and left our code for it behind.”
Man makes robots. Now, a robot has killed a man. Though not the first such incident, the recent tragedy, in which an assembly robot at a Volkswagen plant in Germany reportedly grabbed a young worker and crushed him to death against metal, has been labelled in some quarters as a “man-machine” conflict (as opposed to the age-old man-animal one). In this context, it is pertinent to reflect on the idea of a possible apocalypse that may be unleashed, if all the science fiction stories we read and movies we watch are to come alive some day.
This is a question of individuation, and the answer will depend entirely upon the given theory of consciousness. I will answer in the context of the theory I have developed over the past ~12 years, which is to appear in my book On The Origin Of Experience; the first draft chapter can be found here: On The Origin Of Experience.
I begin by dismissing the idea that you can construct an electronic brain, for reasons given in the following answer: Steven Ericsson-Zenith’s answer to What is the computing power of the average human brain, including all […]
For William Hurt it isn’t a question of whether someone will invent a robot with feelings; it’s a matter of when.
“I think I may be pretty good at saying if this and this is true then this and this are true,” says Hurt. “So from the moment I took a look at it, it was all absolutely inevitable.”