This is an interesting approach to exploring consciousness. Reproducing a specific test lets the researchers examine the mechanisms or processes that produce a self-aware response. It will be interesting to see how specifically the responses were coded. Is there awareness code that inspects responses and generates an ‘I’ view, or does that step arise spontaneously and uncoded, through some sort of neural network or other non-procedural code?
If consciousness really is a purely materialistic phenomenon, then perhaps a set of awareness and sentience modules handling various general situations is not far from where we as humans have evolved. One way to look at our consciousness is as a set of circuits that provide awareness and thinking that ‘feels’ like us. But, just like the robot’s, those circuits are not a separate conscious ‘feeling’; they are a simulation, or code.
One question is whether trying to engineer a consciousness that looks human would miss an emergent consciousness that develops natively from the machine. Exploring that possibility would require a different approach: trying to uncover a possible native consciousness.
In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.
They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.
Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says.
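The logic of the test can be sketched in a few lines of code. This is only a toy simulation of the protocol as described in the excerpt, not the actual RPI implementation; all class and variable names here are hypothetical.

```python
# Toy simulation of the "dumbing pill" test: each robot attempts to
# speak, and only a robot that hears its own voice can conclude it
# was not silenced.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted                 # True if given the "dumbing pill"
        self.knows_it_can_speak = False    # unknown at the start

    def attempt_to_speak(self, phrase):
        """Try to say a phrase; return what is actually emitted (or None)."""
        return None if self.muted else phrase

    def listen(self, heard):
        # Hearing its own voice is the key evidence: the robot updates
        # its self-model from "I don't know" to "I can speak".
        if heard is not None:
            self.knows_it_can_speak = True


def run_test(robots):
    """Each robot tries to say 'I don't know'; the audible one revises its answer."""
    answers = []
    for r in robots:
        heard = r.attempt_to_speak("I don't know")
        r.listen(heard)  # feedback loop from the robot's own speech
        if r.knows_it_can_speak:
            answers.append(f"{r.name}: I was not given a dumbing pill")
    return answers


robots = [Robot("A", muted=True), Robot("B", muted=True), Robot("C", muted=False)]
print(run_test(robots))  # only the unmuted robot proves it was not silenced
```

The interesting step is the `listen` call: the inference hinges on the robot perceiving the consequence of its own action and attributing that perception to itself, which is the minimal form of self-reference the test is probing.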
Full Post at www.newscientist.com