The Singularity, Virtual Immortality and the Trouble with Consciousness (Op-Ed)

The author summarizes the explanations of consciousness into five possibilities.

It is an open question whether, post-singularity, superstrong AI without inner awareness would be in all respects just as powerful as superstrong AI with inner awareness, and in no respects deficient. In other words, are there kinds of cognition that, in principle or of necessity, require true consciousness? For assessing the AI singularity, the question of consciousness is profound.

The soul of the human machine

It's easy to say that man is not like a machine. It's very hard to say what that might mean for consciousness.

Machines possess no capacity to will, create, or want. From inside the computational framework, powers like these can only be bracketed or dismissed. If widely accepted, the moral and political implications of such dismissals would be grave. What becomes of democracy, individual liberty, and the right to pursue happiness if computer-man has no capacity for free choice and is algorithm-driven?

A great little story by Terry Bisson. It so accurately mimics the conversations we have about whether machines can be sentient, and really makes you think about the conceit that it takes 'meat', or brain tissue, to harbor consciousness!

I’m honored that this often shows up on the internet. Here’s the correct version, as published in Omni, 1990.

“They’re made out of meat.”


“Meat. They’re made out of meat.”

Tech 2015: Deep Learning And Machine Intelligence Will Eat The World

The article has a great graphic on all of the startup and existing-company activity in machine learning and data.

Shivon Zilis, an investor at BloombergBETA in San Francisco, put together the graphic below to show what she calls the Machine Intelligence Landscape. The fund specifically focuses on “companies that change the world of work,” so these sorts of automation are a large area of concern. Zilis explains, “I created this landscape to start to put startups into context. I’m a thesis-oriented investor and it’s much easier to identify crowded areas and see white space once the landscape has some sort of taxonomy.”

Full Post at

Artificially intelligent robots don’t need to be conscious to turn against us

This is a very interesting interview with Stuart Russell. He gives a nice overview of AI and its major areas of research, with their emphasis on solving problems rather than producing consciousness in machines.

I think the quotes below about what we know about consciousness and how it might look in machines are great.

SR: The biggest obstacle is we have absolutely no idea how the brain produces consciousness. It’s not even clear that if we did accidentally produce a sentient machine, we would even know it.

I used to say that if you gave me a trillion dollars to build a sentient or conscious machine I would give it back. I could not honestly say I knew how it works. When I read philosophy or neuroscience papers about consciousness, I don’t get the sense we’re any closer to understanding it than we were 50 years ago.

There is no scientific theory that could lead us from a detailed map of every single neuron in someone’s brain to telling us how that physical system would generate a conscious experience. We don’t even have the beginnings of a theory whose conclusion would be “such a system is conscious.”

The secret of consciousness, with Daniel C. Dennett

I think Dennett has the best view of consciousness as an emergent property. His emphasis is on a gradient of consciousness, one that can occur in even the smallest single-celled creature.

Dennett believes that there’s every degree of sensitivity and reactivity right down to bacteria. “This idea that there’s this salient marvellous property that you either have or you don’t, that’s the mistake. Bacteria are remarkably adroit, sensitive and self-protective and every cell in our bodies is like a bacterium in this way.” He says that if people knew more about what single celled organisms can do they would realise that they are all conscious.

“What do you think consciousness is? As we build up in complexity from bacteria through to starfish to birds and mammals and us it seems to me the most important threshold is actually us, that we have the bigger and more impressive bag of tricks than any other species. But that doesn’t mean that we have this utterly different phenomenon that happens in our heads and it doesn’t happen in any other heads.”

Robot homes in on consciousness by passing self-awareness test

This is an interesting approach to exploring consciousness. Reproducing a specific test lets the researchers examine the mechanisms or processes that produce a self-aware response. It will be interesting to see how specifically the responses were coded: is there awareness code that inspects responses and generates an 'I' view, or does that step arise spontaneously, uncoded, through some sort of neural network or non-procedural code?

If consciousness is really a purely materialistic phenomenon, then maybe a set of awareness and sentience modules that handle various general situations is not far from what we as humans have evolved. One way to look at our consciousness is as a set of circuits that provide awareness and thinking that 'feels' like us. But just as with the robot, they would not be a separate conscious 'feeling' but a simulation, or code.

A question is whether trying to engineer a consciousness that looks human would miss an emergent consciousness that develops from the machine itself. To explore that possibility, you would need to take a different approach and try to uncover a possible native consciousness.

In a robotics lab on the eastern bank of the Hudson River, New York, three small humanoid robots have a conundrum to solve.

They are told that two of them have been given a “dumbing pill” that stops them talking. In reality the push of a button has silenced them, but none of them knows which one is still able to speak. That’s what they have to work out.

Unable to solve the problem, the robots all attempt to say “I don’t know”. But only one of them makes any noise. Hearing its own robotic voice, it understands that it cannot have been silenced. “Sorry, I know now! I was able to prove that I was not given a dumbing pill,” it says.
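The inference in the test above can be sketched as a small knowledge-update loop: every robot attempts to answer aloud, and only a robot that hears its own voice can conclude it was not silenced. This is a hypothetical sketch of the logic, not the actual code running on the robots:

```python
# Sketch of the "dumbing pill" self-awareness test described above.
# All names and structure here are hypothetical illustrations.

def run_test(silenced):
    """silenced: set of robot ids that were muted by the button press."""
    robots = {0, 1, 2}
    answers = {}
    for r in robots:
        # Every robot tries to say "I don't know" aloud;
        # muted robots produce no sound.
        spoke = r not in silenced
        if spoke:
            # Hearing its own voice, the robot can now prove
            # it was not given the dumbing pill.
            answers[r] = "Sorry, I know now! I was not given a dumbing pill."
        else:
            # The robot heard nothing from itself and remains uncertain.
            answers[r] = None
    return answers

results = run_test(silenced={1, 2})
```

The key design point is that the update comes from perception, not from privileged access to the experiment's state: the robot only learns it can speak by observing the consequence of its own attempt.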

Full Post at

Robots Unplugged: Will We Face a Man-Machine Conflict?

Man makes robots. Now, a robot has killed a man. Though not the first such incident, the tragedy in which an assembly robot at a Volkswagen plant in Germany reportedly grabbed a young worker and crushed him to death against metal has recently been labelled in some quarters a "man-machine" conflict (as opposed to the age-old man-animal one). In this context, it is pertinent to reflect on the idea of a possible apocalypse that may be unleashed if all the science-fiction stories we read and movies we watch come alive some day.

If we create a conscious robot and copy its brain, there are going to be two brains. Will they have different consciousnesses?

This is a question of individuation, and the answer will depend entirely upon the given theory of consciousness. I will answer in the context of the theory I have developed over the past ~12 years, to appear in my book On The Origin Of Experience, the first draft chapter of which can be found here: On The Origin Of Experience.

I begin by dismissing the idea that you can construct an electronic brain, for reasons given in the following answer: Steven Ericsson-Zenith's answer to What is the computing power of the average human brain, including all […]