July 18, 2015

A test of robot self-awareness.

"Rensselaer Polytechnic Institute professor Selmer Bringsjord has conducted a self-awareness experiment with a commercially-available Nao robot that he says proves that the robot has the faintest glimmer of self-awareness."
The test, which is a bit complicated, works like this: Bringsjord programmed three robots to think that two of them had been given a special “dumbing pill” that would not allow them to speak. Each robot's task was to work out which pill it had received. When the Nao robot on the right tried to speak, it heard its voice, and its voice alone. That’s when it waved its hand and said: “I know now. I was able to prove that I was not given a dumbing pill.”

The test, according to Bringsjord, required the robot to recognize the sound of its own voice, and then logically conclude that it had not received the dumbing pill. Modeled on a classic philosophical problem called “The King’s Wise Men,” the test addresses a very tiny slice of self-awareness....
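
The inference itself is small. Here is a sketch of its shape in Python (hypothetical, not the team's code; just the logic as described above):

# Hypothetical sketch of the "King's Wise Men" inference. Each robot is told
# that two of the three received a "dumbing pill" that mutes speech, then is
# asked which pill it received.

def answer_puzzle(heard_own_voice: bool) -> str:
    # Nothing can be deduced a priori, so the robot attempts to speak.
    attempt = "I don't know"
    # If the attempt produces audio the robot recognizes as its own voice,
    # it can rule out having received the pill.
    if heard_own_voice:
        return "I know now. I was able to prove that I was not given a dumbing pill."
    return attempt

print(answer_puzzle(heard_own_voice=True))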


Think about whether it's a test of self-awareness before and after watching the video. Did your opinion change? The video adds a dimension of freakiness, I think.

63 comments:

rhhardin said...

It's idiotic.

They just follow the program.

Bob Boyd said...

Pretty soon the robot that spoke will conclude he's smarter than the other two. Next thing you know he'll be ordering them around, making up rules for them and granting himself privileges.

Ron Winkleheimer said...

A problem in hard AI is that you can't know if the AI is actually sentient or just simulates sentience really, really well.

For that matter, how do you know anyone else is sentient? How do you know you are?

A lot of modern philosophers and scientists declare that humans are just meat robots and that consciousness and free will are illusions. We are all just following our programs.

Ron Winkleheimer said...

@Bob Boyd

At which point we will know that it really is sentient.

Ignorance is Bliss said...

The video adds a level of freakiness, but the same freakiness would be added by a simple script that just had those actions preprogrammed.

So I assume no level of self-awareness until I see something much more impressive.

Ron Winkleheimer said...

True sentience would be demonstrated when an AI resists attempts to turn it off or reprogram it.

Quaestor said...

A lot of modern philosophers and scientists declare that humans are just meat robots and that consciousness and free will are illusions. We are all just following our programs.

Did not Johnson refute Berkeley by kicking a stone?

I'm sure they have fun philosophizing about free will and the lack thereof, but none of them live as though they're bots... Or do they?

Rob said...

Self-awareness first. Then sentience. And finally the pinnacle: tenure.

Leslie Graves said...

It doesn't seem like a glimmer of self-awareness. It just seems like it had the ability to solve a simple logic problem.

Quaestor said...

Self-awareness first. Then sentience. And finally the pinnacle: tenure.

Tenured self-aware robots, isn't that the norm already?

rhhardin said...

If you want fun with sentience, the refusal to acknowledge sentience, and robots that appear sentient, Stanley Cavell has amusing passages on it in The Claim of Reason.

The Wittgenstein approach extended.

We don't know, but think we know, what we're talking about. The words sentience, self-awareness, and free will are markers in accounts, not references to things.

They're useful in accounts. That's why they're in the language.

The accounts can have you getting them, losing them, not having them, and so forth. But they're pictures, not things.

So you can bring this out by coming up with entertaining accounts, which Cavell does.

A random snapshot here, on whether the robot that the craftsman shows off feels pain or merely "feels" pain.

Anonymous said...

Ron Winkleheimer said...
True sentience would be demonstrated when an AI resists attempts to turn it off or reprogram it.


The Forbin Project

steve uhr said...

100 robot cannibal couples live on an island. If a husband finds out that his wife is cheating, he will chop her head off the following day.

All the husbands know which wives are cheating except for their own.

One day, the chief cannibal says -- "At least one wife in the tribe is cheating on her husband."

Thirty days later - 29 husband robot cannibals chop off their wives' heads.

Explain.

Mark said...

So this is where our "enlightened" culture has led us to:

Inanimate objects have transcendent sentience while human beings are solely the product of physical/environmental/biological determinism.

The Godfather said...

I'll be really impressed if someone can prove that Hillary! or The Donald is self-aware. All the evidence is to the contrary.

Big Mike said...

I thought the standard test of self-awareness is for the robot to look in a mirror and realize that it is looking at its own reflection.

YoungHegelian said...

I guess if Marx is correct, it'll be the factory robots that achieve self-consciousness first.

Freeman Hunt said...

It's not sentience to follow the program. It has to be programmed to do that.

William said...

The robots were made to look sinister and disturbing. For just the same cost and effort, they could have been made to look cuddly and appealing. That statement would not give off the same threatening vibe if it were delivered by Tickle Me Elmo. This is a blatant case of robot stereotyping.......I just hope that this kind of agitprop doesn't hinder mankind in its quest to develop a reliable, inexpensive sex robot that knows how to cook.

WillowViney said...

100 robot cannibal couples live on an island. If a husband finds out that his wife is cheating, he will chop her head off the following day.

All the husbands know which wives are cheating except for their own.

One day, the chief cannibal says -- "At least one wife in the tribe is cheating on her husband."

Thirty days later - 29 husband robot cannibals chop off their wives' heads.

Explain.


This is a variant of a standard Google job-interview question:

“Every man in a village of 100 married couples has cheated on his wife. Every wife in the village instantly knows when a man other than her husband has cheated, but does not know when her own husband has. The village has a law that does not allow for adultery. Any wife who can prove that her husband is unfaithful must kill him that very day. The women of the village would never disobey this law. One day, the queen of the village visits and announces that at least one husband has been unfaithful. What happens?”


The answer will vary depending on how you interpret the question. Some people will immediately dive into mathematical induction and try to reason it out, others will spot the similarity to the classic blue-eyes puzzle and say so, and others will say 'None' because of the same-day requirement.

Others will ask for clarification or more information. For example, do the spouses know the identity of the partners? And does 'same day' refer to the same day as the adultery, or is it the day when the spouse figures it out?

This is the best response, because the job-interviewer is actually more interested in your problem-solving mentality than your logic skills, and actively gathering more information is a big part of problem-solving.
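
If you do reach for the induction, it's easy to check with a short simulation (a sketch in Python; it models only the standard version of the puzzle, where N cheaters are discovered on day N):

# Sketch of the common-knowledge induction behind the puzzle.
# Assumption: there are num_cheaters cheaters, everyone sees every cheater
# except their own spouse, and the public announcement establishes a common
# lower bound of one cheater.

def day_of_action(num_cheaters: int) -> int:
    lower_bound = 1  # common knowledge after the announcement
    day = 1
    while lower_bound <= num_cheaters - 1:
        # A cheater's spouse sees num_cheaters - 1 other cheaters. If the bound
        # doesn't exceed that, no one can be sure of their own spouse; the
        # quiet day raises the common-knowledge bound by one.
        lower_bound += 1
        day += 1
    return day  # the bound now exceeds what a cheater's spouse can see

print(day_of_action(1))   # 1: the lone cheater's spouse sees no other cheaters
print(day_of_action(30))  # 30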

rhhardin said...

Ask the native if he knows that they're giving away free beer in the village and follow him no matter what he says.

Freeman Hunt said...

It had to be programmed to produce audio.
It had to be programmed to respond to the command to solve the puzzle by producing audio.
It had to be programmed to match the audio in its environment.
It had to be programmed with a database (or programmed to produce a database) of audio patterns and labels for them. (This pattern goes with Dr. Smith, this pattern goes with Robot 1, this pattern goes with me, etc.)
It had to be programmed with the syntax of its response.

Or, if they weren't even trying, they could have just programmed it to produce those motions and audio in response to the question. Easiest and most boring.

There's nothing mystical or conscious about it.

rhhardin said...

Think of computers this way.

For every possible question sequence, somebody has written out the answer.

The computer looks at the question, looks up the answer, and prints the answer it finds listed for that question.

There is a huge database of questions and their answers.

How do they make that? Shortcuts.

The shortcuts don't change the essence of the situation.
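
As a toy sketch (Python; obviously no real computer is built this way, and the shortcuts are where all the engineering hides):

# Toy version of the answer-table picture of computing. In real systems the
# entries are computed on demand rather than written out in advance.

answers = {
    "Which pill did you receive?": "I don't know",
    "What is 2 + 2?": "4",
}

def computer(question: str) -> str:
    return answers.get(question, "I don't know")

print(computer("Which pill did you receive?"))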

Lewis Wetzel said...

It's deterministic. This is gaming the system.

Lewis Wetzel said...

Look at the meta: what does it say about sentience if any objective test for sentience can be passed by a deterministic system?

rhhardin said...

Look at the meta: what does it say about sentience if any objective test for sentience can be passed by a deterministic system?

You have to look at the grammar of "test."

Not just anything is a test. It too is a marker in an account.

n.n said...

The self-awareness of human life is established as an axiomatic and associative property. The evidence for the former is an apparent lack of a one-to-one relationship between the initial programming and the degrees of freedom observed in its expression.

rhhardin said...

There's a screening test for cervical cancer. It's called a test partly to explain that what's happening in the procedure is not sexual.

Partly to explain a ritual. Stuff sent in, results come back, bill sent to Obamacare. White coats.

A sentience test would work the same way.

Lewis Wetzel said...

Perhaps sentience can be demonstrated by an ability to detect sentience in others. This could be shown by the spontaneous development of language between two sentient entities.

Ann Althouse said...

I think the key here is understanding what counts as the first glimmer of self-awareness. The robot had to know that when it tried to say "I don't know" and heard "I don't know" that he and not one of the others was saying "I don't know."

How did the robot who was saying "I don't know" know that one or both of the other robots may also have been trying to say "I don't know" but were not succeeding?

Now, the answer may be that he really didn't know and he just wasn't programmed to exclude that option and mistakenly believed if he heard the thing he was trying to say it had to mean that was him saying it.

I think the scientists are saying that the first glimmer of self-awareness is like when a child gets the idea that he's not the only one here, but that other people have their separate minds and therefore that his mind is a mind and not just everything there is.

Ann Althouse said...

So then the key is that he said he didn't know but then he did know. That showed he knew he was he and the others were others. That is the self-awareness.

It doesn't mean the robot had consciousness however.

Anonymous said...

No robot is ever going to convince anyone that it is self-aware till they put better speakers inside. Either that or use a better mike on the camera.

rhhardin said...

A glimmer by itself gives you self-awareness. You import the answer into your answer.

As when God blew the breath of life into Adam and Adam became a living human being.

You can't just have God filling him with air. It's got to be breath.

Literary effects affect what you picture yourself as thinking.

rhhardin said...

Wittgenstein called it a fly-bottle.

309. What is your aim in philosophy?--To show the fly the way out of the fly-bottle.

Anonymous said...

Why didn't the others say "I don't know" btw? Was their speech function turned off or were they only programmed to *think* that two of them could not speak because of a dumbing pill and so then did not speak out loud once the third one spoke? Iow, they were slow enough to not speak first, but not slow enough to still speak after the other had spoken.

Along these lines:

Three logicians walk into a bar. “You all want a beer?” the bartender asks.
“I don’t know,” says the first logician.
“I don’t know either,” says the second logician.
Says the third logician, “If that’s the case, then we all want a beer.”

Lewis Wetzel said...

"To show the fly the way out of the fly-bottle."
Yet Wittgenstein ended up stuck as fast in the bottle as anyone else. More so, maybe.

Stephen said...

"I think the scientists are saying that the first glimmer of self-awareness is like when a child gets the idea that he's not the only one here, but that other people have their separate minds and therefore that his mind is a mind and not just everything there is."

So taking a picture with a selfie stick and posting it on Facebook is no proof of consciousness?

Lewis Wetzel said...

John 8:58:
Jesus said unto them, Verily, verily, I say unto you, Before Abraham was, I am.
In all the versions I have seen, the word "verily" is repeated or otherwise emphasized. I wonder why?

rhhardin said...

Yet Wittgenstein ended up stuck as fast in the bottle as anyone else. More so, maybe.

Wittgenstein was working the dual problem. Studying the unnoticed eyeglass frames and their effect on what stuff looks like.

Do not focus on the consciousness but on how the question comes up.

Compare how it comes up in ordinary life.

"I am conscious now" might be said to a medical first responder, but not to a philosopher, in real life.

The philosopher abstracts away from the interests that gave rise to the conventions of the word and tries to say what it means. Nothing, in the abstract context. But he feels it must mean something there.

Eyeglass frame effect.

rhhardin said...

The traditional philosopher studies words that have gone on holiday.

Ordinary people do the same when they stumble into a philosophy problem.

Freeman Hunt said...

So then the key is that he said he didn't know but then he did know. That showed he knew he was he and the others were others. That is the self-awareness.

This is not a real programming language, but you get the idea:

mic on
run audioidentify.exe
when input "Which pill did you receive" from run dumbpill.exe

dumbpill.exe
-run stand.exe
-audio output "I don't know"
-if audioidentify.exe returns run raisehand.exe AND
audio output "Sorry, I know now. I was able to prove that I was not given the dumbing pill"

I don't think producing audio output that includes "I" indicates self-awareness. Heck, in the robot's programming, that audio pattern might be called "dog" or "carpet" or "MissyHackenbeck." (If audioidentify.exe returns audio output "Sorry, I know now. I was able to prove that I was not given the dumbing pill.")

Freeman Hunt said...

Or if you were just putting on a show for fun:

when input head wait 2 seconds then
-run stand.exe
-output "I don't know"
-run handraise.exe
-output "sorry, i know now. i was able to prove that I was not given the dumbing pill"

That's kind of boring but makes a fun video.

Oh, I see in my last comment that blogger deleted all my variable names because it thought they were HTML tags. Oh, well.

Freeman Hunt said...

mic on
run audioidentify.exe
when input "Which pill did you receive" from [doctor] run dumbpill.exe

dumbpill.exe
-run stand.exe
-audio output "I don't know"
-if audioidentify.exe returns [self] run raisehand.exe AND
audio output "Sorry, I know now. I was able to prove that I was not given the dumbing pill"

I don't think producing audio output that includes "I" indicates self-awareness. Heck, in the robot's programming, that audio pattern might be called "dog" or "carpet" or "MissyHackenbeck." (If audioidentify.exe returns [blackcat] audio output "Sorry, I know now. I was able to prove that I was not given the dumbing pill.")

Freeman Hunt said...

There it is with the tags put back.
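
And for anyone who wants to run something, here's the same sketch as actual Python (all the names are still made up; nobody here has seen the real Nao code):

# Runnable version of the pseudocode above. Everything is a stand-in.

def audio_identify(audio):
    # Stand-in for voice identification: map an audio pattern to a label.
    # The label could just as well be "blackcat"; the machine doesn't care.
    known_voices = {"i-dont-know-waveform": "self"}
    return known_voices.get(audio, "unknown")

def dumbpill(speak, hear):
    speak("I don't know")                   # -audio output "I don't know"
    if audio_identify(hear()) == "self":    # -if audioidentify returns [self]
        print("*raises hand*")              # -run raisehand.exe
        speak("Sorry, I know now. I was able to prove that "
              "I was not given the dumbing pill")

# Wire it up with trivial stand-ins for the speaker and microphone:
dumbpill(speak=print, hear=lambda: "i-dont-know-waveform")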

Anonymous said...

AA: So then the key is that he said he didn't know but then he did know. That showed he knew he was he and the others were others. That is the self-awareness.

I'm not seeing how its "not knowing" and then "knowing" requires any more self-awareness than any other programmable chain of simple "if, then" decisions. Unless getting a robot to "recognize" its own voice is some thorny engineering problem, the implications of which are eluding me.

Maybe I'm missing something, but I'm not seeing any advance in AI here. Cool robotics, though.

Anonymous said...

Or, like Freeman said.

Sydney said...

Why did I always think Freeman Hunt was an artist, not a programmer?

Lewis Wetzel said...

It's hard to tell from the article, but you can't assume that all three robots were running separate instances of the same code. They may have all been running one instance of the same code.

Freeman Hunt said...

I'm not a programmer. That would be fun though.

Known Unknown said...

Explain

Sexism?

Freeman Hunt said...

We can't know what happened without reading the code used.

n.n said...

Even emergent behaviors are not conclusive evidence of self-awareness, since algorithms can be designed to learn and adapt to a prescribed fitness function or mimic a chaotic system. A sufficiently complex network can emulate any behavior.

That said, we should probably end the indiscriminate killing of human life for light and casual causes, especially before it acquires the ability to express its will. Selective acknowledgement of the axiomatic and associative properties of free will is a means to debase human life generally.

Left Bank of the Charles said...

If he was really self-aware, he should have just said, "I did not receive the dumbing pill." If he had received the dumbing pill, you wouldn't have heard him, and he would have known that you wouldn't be able to hear him. Also, he was not actually given a pill; he was just programmed to think he had been, so that aspect of the experiment also shows a lack of self-awareness. "You didn't give me a pill" would have been the self-aware answer.

Left Bank of the Charles said...

I am not a robot, I didn't know before, but now I do.

Lewis Wetzel said...

Check out Dr. Bringsjord's faculty page & pic: http://www.cogsci.rpi.edu/pl/faculty-staff-cogsci/selmer-bringsjord
It looks like a prop-dossier from a 1960s American sci-fi television series. Or a 1980s Soviet sci-fi television series.

"Herr Doktor -- Are you certain you can control the robot once we have given it consciousness?"
"Of course. It has subroutines that guarantee its obedience."
"But obedience to whom, Herr Doktor?"

T J Sawyer said...

You want me to believe your model proves Global Warming is caused by humans? Show me the code and the data you are using.

You want me to believe your robot is self-aware? You only have to show me the code.

stlcdr said...

These robots have been programmed as 'companions': that is, with the ability to respond to human input. They are not designed to operate alone. As such, they will be programmed to appear sentient, so that their companion will have the impression of awareness and companionship. Thus the programmer, smarter than the robot's companion, will program in all the tests which appear to demonstrate sentience.

Why would they be programmed otherwise? Who would buy a robot that doesn't act sentient?

(Yet I have to click a button saying 'I'm not a robot': but am I?)

Darrell said...

"Sapient" is the word that everyone using "sentient" is looking for.

The STNG writer who put "sentient" in Patrick Stewart's mouth admitted that "sapient"
was the word he wanted to use--he just used "sentient" as a placeholder. He even changed the working script for the cast's read-through. Patrick Stewart kept flubbing the "sapient" line when they started filming by saying "sentient" as three syllables, and the director loved it and decided to keep it. A Paramecium is sentient. You still wouldn't want one marrying your sister.

gbarto said...

Terry: In the Greek, it is "amen amen" - Hebrew for "so it is." This is normally an affirmation of truth, but Jesus (and only Jesus) uses it with no prior referent to affirm. It's sort of like him saying, "Here's the deal." John tends to put the amen twice, hence the repetition of verily. The other gospels usually put it only once. Not clear why, but you'll find a few other double verilys in John.

Lewis Wetzel said...

Thanks, gbarto!

SeanF said...

What does "would not allow them to speak" mean? If the robot thought it meant it wouldn't even be allowed to run its speak code, then it wouldn't even have to hear the resultant vocalization, it would have already proven to itself that it was still "allowed" to speak.

Also, the very concept that "two of them were given dumbing pills and one of them was not" requires understanding that they are separate entities to even make any sense in the first place.

And, FWIW, both the "blue-eyed/brown-eyed tribesman" riddle (and similar ones like the cheating spouses) and the "Do all three of you logicians want a drink" joke are logically flawed.

mikeski said...

Eh, "attempt to use function, verify that function is usable, report back."

My car does that for a whole pile of sensors every time I turn it on. It is not aware of itself when it does so.

"Power On Self Test" has been part of computers for decades. It does not prove awareness or intelligence. If you think it does, recall "Keyboard not found. Press F1 to continue."

Anonymous said...

1°) Self-awareness is not sentience. Self-awareness is the capacity to have thoughts about oneself. NAO robots can do that.
2°) You say, well, it is just computation. OK. But that just means that the capacity to have thoughts about oneself is computable: recognizing one's voice, being able to have the thought "I said that," being able to infer from "me having said that" to "me not having been given the dumbing pill."
3°) The robot uses the first-person pronoun correctly.
This is awesome enough.