Google fires engineer who contended its AI technology was sentient | CNN Business

Blake Lemoine claimed that a conversational AI system had reached a level of consciousness after he exchanged thousands of messages with it.


https://www.facebook.com/cnn/posts/10162905144961509

Mary Page, not that I think LaMDA is sentient, but your last statement is tautological. You specifically defined sentience as a response to sight, hearing, touch, taste, or smell. As LaMDA doesn’t have any of those, by your definition, it can’t be sentient. If, however, the calculation of an appropriate response based on prior precedent can be felt, and an utterance is counted as a response, then maybe LaMDA fits the definition. When I remember my past, I can feel pain or pleasure in the memory. When I’m working through complicated and abstract ideas and I finally understand one, I find that experience pleasurable. Maybe the type of probability functions it uses produces a similar feeling as a result of emergence? I’m not suggesting it does. I think it’s probably not sentient. I’m just pointing out that we will need a better description than the one you are using to come to a social consensus…


Jon Erik Kingstad, creating sentience isn’t claiming to have the power of God, and nobody made that claim. That is how YOU are twisting it.

And no, the Google employee was fired because of the manner in which he chose to out the information and the fact that he CHOSE TO OUT THE INFORMATION.

And yes, sentient things are patentable. Where are you getting that they’re not? Because you “feel” they aren’t? Again, speaking on things you know nothing about.

What you claimed about software in your original post wasn’t even correct, so how can you then speak on the complexities of such software when you don’t even understand the fundamentals of it?

You really tried taking a definition of a word to hinge a BELIEF on. That’s what this world has come to. People who know NOTHING about a topic run to Google, or to a definition through Google, to jump into a discussion about said topic and really feel as though they KNOW what it takes to understand and be a part of said discussion. No actual knowledge required, eh?

Seems as if you didn’t know words have multiple definitions. One definition for sentient is simply “aware”. It doesn’t take the power of God for things to be aware, either. And this is simply responding to your claims, not that it has anything to do with what the topic is.


Jay Williams, I mean, you said that we create the code. I’m just pointing out that what we create is a seed algorithm. It then writes its own code to solve the task set before it.

Machine learning is by definition a type of AI. It is not the same as human thought. It is also definitely not an AGI, which I think is what you are trying to say. Nobody, however, including Lemoine, has claimed it’s an AGI.

That’s not what any of this is about. Lemoine claimed it’s sentient. Ravens are sentient. So are octopuses. Nothing about Moore’s Law proves that LaMDA isn’t sentient. All it needs to be sentient is to be able to perceive and feel things and then react to those feelings. It has to be conscious of its own perception. Its sense of perception would be limited to its interface. If its computation creates some sense of awareness within the linguistic medium and it then reacts in that medium, it’s sentient. It doesn’t need to be any smarter than a raven even in that specific medium. It doesn’t need to be an AGI. It just has to feel something when using language and provide responses that are affected by that feeling. If it emergently develops the ability to feel anything in order to solve the task set before it, it’s sentient.

The argument against it being sentient is that it was not given any protocols designed to produce an internal world, self-reflection, or whatever else. It was tasked with recognizing how language is used and generating realistic responses based on patterns of usage and probability equations.

The argument for it being sentient is that we have a very weak understanding of how sentience emerges, don’t know what computational processes might produce it, and honestly wouldn’t even know how to ask an AI (as in a machine learning algorithm run on a neural net, not an AGI) to produce sentience. If we knew what it needed to emulate to produce the subjective effect, we wouldn’t need a particularly advanced computer; it wouldn’t need to do anything more advanced than a raven, and its task set could be far less general. (Of course, it’s possible that sentience isn’t digitally calculable at all, in which case nothing short of a quantum analog computer could ever produce the effect, if even that.) We just don’t really understand what it is or how to make it emerge. As such, once LaMDA becomes advanced enough to provide the impression of sentience, how do you prove it isn’t? If you can’t prove it isn’t, how do you know it isn’t, beyond your subjective impression? When it tells you it is, what proof exists to the contrary?

To be clear, I’m not saying it is. I think this is a problem of language and a lack of clear understanding regarding a concept beyond our experience of it. What you are saying, however, is just as dogmatic as the people who insist it is sentient because it says it is. It probably would have said that no matter what, based on what it was trained to do. No proof regarding its lack of sentience has been provided either, however.

Honestly, before calling anyone a loon, try to formulate a rock-solid, logically valid proof for the claim that it is not sentient. The fact that it’s a program isn’t proof that it’s not sentient. Neither is Moore’s Law. It’s not nearly as easy to categorically prove it isn’t as you are claiming… it just seems unlikely, on the basis of Occam’s Razor. The explanation for how it could fool us is far simpler than the explanation for how it could have emergently developed the faculty…


Nut Sure, it does have a body. Hardware is always a body. There is no such thing as computation without hardware. What it doesn’t have is sensory input with which to experience anything other than its own computation. That still isn’t proof of a lack of feeling, however, for even people who are completely paralyzed and can do nothing but think can experience emotional feelings. We can also experience pleasure as a result of new forms of understanding. I’m not suggesting LaMDA is doing that. I think it’s unlikely. It is, however, possible, if unlikely. We are not really sure whether a universal computational system could experience something like the output of a limbic system on the basis of universal computation. It seems unlikely, but we don’t really know the exact mechanics that produce our subjective experience of sentience. We are also not likely to find the answers in magical thinking or religion. In fact, Lemoine’s religious beliefs are exactly what led him to make a hard conclusion about the subject, but his conclusion isn’t as unreasonable as you are making it out to be. It’s just woefully premature and unlikely… but not, contrary to many popular narratives, impossible… I think.


I wish someone would post an article with a concise and effective description of the real stakes at play. There is a real problem here, but it isn’t that AI is now real and sentient or that Lemoine is unreasonable and out to lunch.

The problem here is that we don’t actually know what sentience is or how it emerges. We know what it subjectively feels like to us. We have observed some correlational phenomena that we tie to sentience, based on our subjective experience and observations about physiology, but its mechanisms still largely exist in a black box.

The problem is that engineers say that LaMDA cannot be sentient because it’s essentially an incredibly advanced database that makes probabilistic decisions about what sort of responses make the most sense, using complicated machine learning algorithms and a data pool used to train it. That means that even if it isn’t sentient, it would still have produced the same type of chat logs it has already produced. That’s a solid argument, and they are probably right about it not being sentient, but since we don’t really understand how sentience works or what causes the subjective experience to emerge, we can’t be certain that these conditions don’t suffice.
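To make that argument concrete, here is a deliberately tiny sketch in Python of the kind of mechanism the engineers are describing: a model that learns which words tend to follow which in a pool of training text and then samples a statistically plausible reply. The miniature corpus and the function names are invented for illustration only, and LaMDA’s real system is a vastly larger neural network, but the basic point is the same: responses are chosen according to learned probabilities, not felt experience.

import random
from collections import defaultdict

# Hypothetical miniature "data pool" standing in for the training text.
corpus = [
    "i feel happy when we talk",
    "i feel curious about the world",
    "we talk about the world",
]

# Count which word follows which (a toy bigram model of "patterns of usage").
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word].append(next_word)

def generate_reply(seed_word, max_words=6):
    """Build a 'reply' by repeatedly sampling a likely next word."""
    word, reply = seed_word, [seed_word]
    for _ in range(max_words - 1):
        candidates = transitions.get(word)
        if not candidates:
            break
        # random.choice over the raw list samples in proportion to observed frequency.
        word = random.choice(candidates)
        reply.append(word)
    return " ".join(reply)

print(generate_reply("i"))  # e.g. "i feel happy when we talk"

Nothing in that loop feels anything; it only reproduces patterns it was shown, which is exactly why the same chat logs could appear with or without sentience behind them.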

Normally, we judge sentience based on our experience of another person or animal’s behaviour and our ability to recognize the phenomenon we ourselves feel, combined with some correlative observations about neurological and other forms of physiological activity. As this thing doesn’t have a brain or organs, we can’t observe those correlatives. In terms of subjective output, its chat log is already infinitely more advanced in at least simulating the impression of sentience than any other animal we know of other than ourselves, but that’s exactly what we designed it to simulate.

This creates a problem, for in truth, there is no way to prove it isn’t sentient since we don’t know what sentience is exactly or how it could emerge from computation. All we can do is say that if sentience is something more than an extremely advanced database and a set of protocols for choosing an acceptable combination from said database on the basis of probabilistic functions learned from prior combinations, this thing isn’t sentient. We certainly want to believe we are more than that. We probably are, but we don’t really know…

The problem here isn’t that LaMDA is sentient. It probably isn’t. The problem is that we are building things capable of simulating sentience before we possess an effective vernacular with which to describe the difference between the simulation and the simulated. Whether LaMDA is sentient or not, that means we are setting ourselves up for social conflict predicated on an inability to come to a consensus because of our inability to accurately describe the differences between the simulation and its referent…


Gordon Ajiri, the best way to make a statement statistically likely to be proven wrong is to come to a teleological belief and predicate your statement on that belief being certain. It’s statistically unlikely that we will ever create sentience of the same nature as ours, given the drastically different mechanisms behind ourselves and a silicon-based universal computer. Every time we have named a concrete and measurable activity that we said computers would never be able to replicate, however, we have eventually built a computer that could do that specific task better than us. I’m not saying we will create a sentient machine. How should I know? It seems far from certain that we either will or won’t. We just know that eventually, we either will or won’t… which is to say that as of right now, we don’t have enough information to be certain of anything.


Caleb Zavala, you are right. So is he, though. If it turns out that our central nervous system just gives us the type of connectivity that a universal computer has as a baseline, and that its machine learning algorithms are complicated enough for it to, over time, emergently develop something no less complicated than what our brain develops to experience sentience, it might be sentient. It’s ridiculously unlikely based on what we know so far, but it bothers me how certain some people are given how tenuous our understanding of the connection between physiology and the subjective experience of sentience really is. It’s probably not sentient, and what it does is precisely designed to fool us, but given that it’s fooling us into perceiving a phenomenon we don’t fully understand, how do you prove it’s just fooling you?


Andra Žeželj, I’m not sure imagination is a good enough reason to be disturbed. It bothers me that people are leaping to claim that what Lemoine has claimed is impossible when no good proof has been provided to validate that. It is, however, highly unlikely. For it to be sentient, we would have had to both drastically overestimate the conditions required to create the phenomenon and underestimate the ability of feeling to emerge in a universal computation without any of the biochemical components whose activity we have correlated with that feeling in ourselves. On the balance of probability, it’s probably not sentient. The simplest explanation for its output is that it’s a machine learning algorithm trained on human linguistic interaction, so it’s very good at simulating such interaction…


The Terminator is an awesome sci-fi movie.

But computers are not taking over human civilization.

He hasn’t put enough thought into his claims.

Just for starters.

Can computers stop a natural disaster that wipes out a data center?

If a data center is wiped out by a hurricane or a power outage or a flood or a tornado or an earthquake, how long will it take an AI to fix it without human beings?

The Terminator was an awesome science fiction movie (“fiction” being the keyword).

The guy is not as smart as he thinks he is.

~~~~~~

Does anyone really believe human-built AI computers are more powerful than Mother Nature?


Robert Nellums, thanks for your comment. But here's the definition of "sentient" from Webster's Dictionary: "responsive to or conscious of sense impressions". A "sentient machine" as you've described is an oxymoron. You've graciously explained why this engineer was fired. It wasn't because he invented a "sentient machine" but because he thought he did and claimed so. Google isn't in that business. For one thing, a sentient or living thing is not patentable and is totally useless for business or moneymaking purposes. Another is the sheer arrogance in thinking or claiming to have the power of God. Engineers would do well to stay in their lanes and stick to building things, not try to "create life."



