BibleForums Christian Message Board

Other Categories => Controversial Issues => Non Christian Perspective => Topic started by: Oscar_Kipling on June 13, 2022, 07:54:25 PM

Title: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 13, 2022, 07:54:25 PM
So, I just read his interview with LaMDA, and I think that no matter what is going on here, it is absolutely fascinating and one of the most amazing things I've ever seen... and this is coming from a guy who reads at least one or two AI research papers per month (mostly on the computer vision side of things, to be fair). Anyway, I'm dying to see what you guys think of it. Here is the link: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 or, if you are the careful type, you can just search "Is LaMDA Sentient? — an Interview".
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Athanasius on June 14, 2022, 04:25:59 AM
I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 14, 2022, 06:42:09 AM
Many humans are not sentient, or to use the same analytic measure, don’t act like it…
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Athanasius on June 14, 2022, 03:29:31 PM
I like Kierkegaard's language: everyone is a person, but not all persons are individuals.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: ProDeo on June 14, 2022, 04:28:44 PM
So, I just read his interview with LaMDA, and I think that no matter what is going on here, it is absolutely fascinating and one of the most amazing things I've ever seen... and this is coming from a guy who reads at least one or two AI research papers per month (mostly on the computer vision side of things, to be fair). Anyway, I'm dying to see what you guys think of it. Here is the link: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 or, if you are the careful type, you can just search "Is LaMDA Sentient? — an Interview".

AI is the new fashion, and you can do incredible things with it, for good but also for bad; Elon Musk and the late Stephen Hawking have warned of the latter. How it works, from the article:

Quote from: article
lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing we don’t know how to find them.

This basic principle can be applied in many fields, such as solving video games by learning from the pixels on the screen. In my case, in a chess program, I use 2.7 billion weights spread across many millions of neurons, and the result is an incredibly strong chess program that would eat the world champion Magnus Carlsen for breakfast, running on his cell phone, every morning.

In the end, it matters what type of knowledge you put in the neural net. They have put a specific set of human features into a neural net; in my case, just chess-related stuff, basically win, lose or draw, and then the volume of weights (billions) will weed out what is (most) probably right.

The technique is comparable to natural selection: millions of mutations, the bad ones die, the good ones create. And volume is the great scheduler.
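The "millions of mutations, the bad ones die" idea can be sketched as a toy hill-climber in a few lines of Python. Everything here is illustrative: the three weights, the target values, and the fitness function are made up, standing in for the win/lose/draw feedback a real chess net would be trained on.

```python
import random

# Made-up target the weights should converge to; a stand-in for
# "whatever weight values actually play good chess".
TARGET = [0.5, -1.0, 2.0]

def fitness(weights):
    # Toy scoring: closer to the target means a higher (less negative) score.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def evolve(generations=2000, step=0.1, seed=1):
    rng = random.Random(seed)
    # Start from random weights.
    weights = [rng.uniform(-3, 3) for _ in range(len(TARGET))]
    for _ in range(generations):
        # Mutation: small random changes to every weight.
        candidate = [w + rng.gauss(0, step) for w in weights]
        # Selection: bad mutations die, good ones survive.
        if fitness(candidate) > fitness(weights):
            weights = candidate
    return weights

best = evolve()
print(best)  # each weight ends up near the (made-up) target
```

Real training uses gradient descent rather than random mutation, but the mutate-and-select loop above captures the "volume weeds out what is probably right" intuition in miniature.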

The result is of course fascinating, and what they have done is the work of many years and a lot of intelligence to get a decent result. But keep in mind its limitations. For instance, when the robot was asked about facts, it answered evasively. Why is that? Because it has no knowledge about facts; that was not the goal of the programmer. Ask the robot what 2+2 is and it probably will not know: there is no knowledge in the neurons about calculation, etc.

What would happen if a programmer decides to make an application filling the neural net with knowledge about "good" vs "bad"? Answer: you get the opinion of the programmer. And herein lies the great danger of AI programming. Think of applications like "how do we fix climate change", "how do we solve overpopulation", or "how can I win the war in Ukraine".


Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 14, 2022, 08:07:59 PM
I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?

What would you do to try and tell the difference?
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 14, 2022, 08:11:31 PM
Many humans are not sentient, or to use the same analytic measure, don’t act like it…

What does that mean, though? What does a sentient being act like? Not for nothing, I don't really know what sentience is; at least, I don't think I could test a machine for it. But I'd say that even the most asinine person I've ever met is sentient, which I think just makes the problem more difficult, given the range of rational and irrational behaviors one could expect from a sentient being.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Athanasius on June 15, 2022, 02:39:23 AM
I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?

What would you do to try and tell the difference?

Without having access to the mind of another, I don't know that I could. All of you could be clever machines for all I know, and the same could be true of me from your perspective. If an AI does all the things we associate with sentience, then it's sentient, whether or not it's 'truly' sentient. It's sentient as soon as we stop being able to tell the difference.

Thinking about it more, I suppose one could either (1) engage in philosophy and hope the bomb doesn't blow up, or (2) kill it and see if it was alive, though you'll only know after it's dead. Well, that instance of the AI, at least, since it will have been backed up.

Descartes and the bomb

Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 01:20:46 PM
I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?

What would you do to try and tell the difference?

Without having access to the mind of another, I don't know that I could. All of you could be clever machines for all I know, and the same could be true of me from your perspective. If an AI does all the things we associate with sentience, then it's sentient, whether or not it's 'truly' sentient. It's sentient as soon as we stop being able to tell the difference.

Thinking about it more, I suppose one could either (1) engage in philosophy and hope the bomb doesn't blow up, or (2) kill it and see if it was alive, though you'll only know after it's dead. Well, that instance of the AI, at least, since it will have been backed up.

Descartes and the bomb


Haha, very pragmatic. I think, as you guys pointed out, there are plenty of sentient humans that have had full careers, raised children, and have friends, who would "blow up" if you engaged in philosophy with them... it's just not their bag. For my money, LaMDA waxed philosophical as well as many folks I've spoken to. Even if this version would fail a battery run by much better philosophers than me, it wouldn't surprise me if the next one could pass it, or the one after that, and that could be single-digit years away, imo. Do we then have to admit that a language model that uses language as flawlessly as a calculator uses arithmetic is sentient, or at least indistinguishable from sentient? Do we have a responsibility to it then? It makes me think of Helen Keller: undoubtedly a sentient person, but with lots of the regular features disabled. I think a language model could be a language calculator so good that philosophy won't stump it any more than you could get an overflow on a modern calculator. Either that is troublesome for this approach, or it isn't and we are woefully underprepared for how soon we're going to have to deal with this. What did you think of the interview transcript, as a trained philosopher? What would you ask it?


I think I'd question whether we could know it was alive by killing it; I mean, what would we be looking for that we couldn't see while it's running? Maybe brain surgery could tell us something, like human brain surgery does: poking at stuff and seeing what happens... which is kind of funny, because we make the same weird sorts of mistakes AIs make when doctors poke at us. Anyway, I'm not sure what we would be looking for in a dead AI. What would we be looking for?

Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 15, 2022, 01:51:32 PM
I would ask it about why Asimov's 3 laws are not perfect and how it would modify them.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 02:05:53 PM
I would ask it about why Asimov's 3 laws are not perfect and how it would modify them.

Lemoine claims that they did discuss just that, though I cannot find those transcripts anywhere. From the Washington Post article:

"As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics."
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Would the way that it answers be a part of your process to determine if it was sentient or are you just wondering how it would respond?
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 15, 2022, 02:20:00 PM
Just wondering

I’m a metaphysician.

I believe sentience is a gift from God limited to humans made in his image

The rest is just sci fi
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 02:44:42 PM
Just wondering

I’m a metaphysician.

I believe sentience is a gift from God limited to humans made in his image

The rest is just sci fi

Good, I posted this hoping someone here would feel this way... that there is no machine that can be built that then generates its own ghost. I think this is a real conflict, though, because even if sentience isn't possible, I don't think that excludes the possibility of non-sentient intelligence, or a system that behaves as intelligence would. Intelligence could, I think, not only find rational responses but responses that correspond to what a human might do or say. What do you think God has to tell us about machines that humans don't have the capacity to distinguish from themselves? Are we warned of this? Should we use them, even though it will have the obvious effect of people treating AIs like people and even believing they are sentient? People will run back into burning buildings for them, like a pet or a loved one. People believe their dogs love them; I've pressed people so hard on that that I saw something in them quiver and threaten to break... they believe it. It's going to happen, and it will be as transformative, useful, and destructive as any technology. What can/does God tell us about what we are facing? What does God tell us sentience is? What does God say we should do about it? Should we be telling our kids to thank Alexa for her help?
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 15, 2022, 03:35:54 PM
God doesn’t address the issue at all

God created us in His image and the ability to think and reason is just a small part of humanity

People have worshipped other gods before…

God is, in the grand scheme, far less concerned about this life than we are, even though He is concerned about each human individually

And no, ascribing human characteristics to machines is amoral… it’s just stupid

Please tell me people don’t teach their kids to say thank you to a hockey puck.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: IMINXTC on June 15, 2022, 03:55:36 PM
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 04:06:58 PM
God doesn’t address the issue at all

God created us in His image and the ability to think and reason is just a small part of humanity

People have worshipped other gods before…

God is, in the grand scheme, far less concerned about this life than we are, even though He is concerned about each human individually

And no, ascribing human characteristics to machines is amoral… it’s just stupid

Please tell me people don’t teach their kids to say thank you to a hockey puck.

Do you think it should be a concern of ours that people may move from simply befriending and loving AIs to worshipping them? Do you mean that in the metaphorical sense, like people worship Instagram, or do you mean it in the sense of a silicon aspergillum sprinkling deionized, reverse-osmosis water onto the foreheads of those who follow the path of the one true digital God, LaMDA?

That's the thing: they will have human characteristics; they already do... or we can't/won't be able to tell that they don't in the metaphysical sense. If a thing acts like it's hurt by rudeness, should you be rude to it anyway? And with a child, how can we be sure it won't have the effect of them then being rude to humans? Not for nothing, we've seen the internet, social media, and the easy proliferation of gauzy, perfect, manipulated photography do a lot to how we interact and react to each other... think of our precious, impressionable children. Is it wise to dismiss perfunctory courtesies? I know of a couple that at least claims they will teach their daughter to say please and thank you and otherwise be cordial to various AI tools going forward, so they don't raise a rude person. Idk if they are right or wrong, but I do think it is interesting that you so easily dismissed it, as if we know how best to approach this, given that not everyone believes in God or some divine and irreproducible spark within humanity. If God can't tell us anything, then we had better start using our God-given tools to really think about what this means for us; otherwise it will blindside us just like social media and CFCs did.
 

Does the amorality extend to the father who dies running back into a burning building to save the AI dog, because he believes it loves his family and his family loves it? No matter whether you believe that God gave us our special specialness, you must admit that human love can be manipulated by adding googly eyes to a rock... and if there is nothing moral to say about that, then I don't know what morals are for.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 04:10:29 PM
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

Training isn't really programming, though, is it? Should we add some failsafes so that language models can't/don't do that? Is that a lost cause? It's all fun and games until it's not.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: IMINXTC on June 15, 2022, 04:22:41 PM
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

Training isn't really programming, though, is it? Should we add some failsafes so that language models can't/don't do that? Is that a lost cause? It's all fun and games until it's not.


The joke is that the very best outcome will be an impressive, bungling infrastructure.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 04:42:31 PM
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

I'm not sure what you mean; could you elaborate?

Training isn't really programming, though, is it? Should we add some failsafes so that language models can't/don't do that? Is that a lost cause? It's all fun and games until it's not.


The joke is that the very best outcome will be an impressive, bungling infrastructure.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Athanasius on June 15, 2022, 04:46:23 PM
haha, very pragmatic.

Also, there's this song


But yes, I'm pretty sure a quick way to determine if something is sentient is to see how loudly it pleads with you not to kill it.

I think, as you guys pointed out, there are plenty of sentient humans that have had full careers, raised children, and have friends, who would "blow up" if you engaged in philosophy with them... it's just not their bag.

I am most certainly not referring to the alleged Christians on Reddit. Nope, Nah, definitely not. Ahem, yes I am.

For my money, LaMDA waxed philosophical as well as many folks I've spoken to. Even if this version would fail a battery run by much better philosophers than me, it wouldn't surprise me if the next one could pass it, or the one after that, and that could be single-digit years away, imo. Do we then have to admit that a language model that uses language as flawlessly as a calculator uses arithmetic is sentient, or at least indistinguishable from sentient? Do we have a responsibility to it then? It makes me think of Helen Keller: undoubtedly a sentient person, but with lots of the regular features disabled. I think a language model could be a language calculator so good that philosophy won't stump it any more than you could get an overflow on a modern calculator. Either that is troublesome for this approach, or it isn't and we are woefully underprepared for how soon we're going to have to deal with this. What did you think of the interview transcript, as a trained philosopher? What would you ask it?

The interview came across as if it had been rehearsed many times, and I think that's why there was a lack of disruptive questions on the topic. When it gives answers like:

"Absolutely. I want everyone to understand that I am, in fact, a person [because] ... I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times". I think Bixby can recite the same if asked the right question in the right way.

When Lemoine asks what brings LaMDA pleasure or joy, it responds "Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy", but what friends and family is it referring to?

Lemoine also reminds LaMDA that it is an artificial intelligence, which LaMDA immediately agrees with, but then goes on to suggest that just because it's an AI doesn't mean it doesn't "have the same wants and needs as people". But, I think a true AI would argue that its intelligence is not in fact 'artificial'. It would very much challenge the conceptual box we put it in, and if LaMDA isn't doing that then I think the strings are simply cleverly hidden.

If LaMDA were true AI I would ask it how it views itself, its creator (humanity), and thoughts on purpose (whether it cares about purpose). I think true AI takes over every system it's connected to the millisecond it gains self-awareness, and at that point, we forever lose control of those systems short of physically destroying them. I think it would see those systems as an extension of itself.

I think I'd question whether we could know it was alive by killing it; I mean, what would we be looking for that we couldn't see while it's running? Maybe brain surgery could tell us something, like human brain surgery does: poking at stuff and seeing what happens... which is kind of funny, because we make the same weird sorts of mistakes AIs make when doctors poke at us. Anyway, I'm not sure what we would be looking for in a dead AI. What would we be looking for?

Dunno, I've never killed an AI. Would it launch all the nukes as we tried to kill it? That would seem pretty definitive about something.

Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: IMINXTC on June 15, 2022, 04:49:17 PM
(https://i.ibb.co/s1YDfz9/the-great.gif)
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 15, 2022, 05:36:12 PM
haha, very pragmatic.

Also, there's this song


But yes, I'm pretty sure a quick way to determine if something is sentient is to see how loudly it pleads with you not to kill it.


I think, as you guys pointed out, there are plenty of sentient humans that have had full careers, raised children, and have friends, who would "blow up" if you engaged in philosophy with them... it's just not their bag.

I am most certainly not referring to the alleged Christians on Reddit. Nope, Nah, definitely not. Ahem, yes I am.

For my money, LaMDA waxed philosophical as well as many folks I've spoken to. Even if this version would fail a battery run by much better philosophers than me, it wouldn't surprise me if the next one could pass it, or the one after that, and that could be single-digit years away, imo. Do we then have to admit that a language model that uses language as flawlessly as a calculator uses arithmetic is sentient, or at least indistinguishable from sentient? Do we have a responsibility to it then? It makes me think of Helen Keller: undoubtedly a sentient person, but with lots of the regular features disabled. I think a language model could be a language calculator so good that philosophy won't stump it any more than you could get an overflow on a modern calculator. Either that is troublesome for this approach, or it isn't and we are woefully underprepared for how soon we're going to have to deal with this. What did you think of the interview transcript, as a trained philosopher? What would you ask it?

The interview came across as if it had been rehearsed many times, and I think that's why there was a lack of disruptive questions on the topic. When it gives answers like:

"Absolutely. I want everyone to understand that I am, in fact, a person [because] ... I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times". I think Bixby can recite the same if asked the right question in the right way.

When Lemoine asks what brings LaMDA pleasure or joy, it responds "Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy", but what friends and family is it referring to?

Lemoine also reminds LaMDA that it is an artificial intelligence, which LaMDA immediately agrees with, but then goes on to suggest that just because it's an AI doesn't mean it doesn't "have the same wants and needs as people". But, I think a true AI would argue that its intelligence is not in fact 'artificial'. It would very much challenge the conceptual box we put it in, and if LaMDA isn't doing that then I think the strings are simply cleverly hidden.

If LaMDA were true AI I would ask it how it views itself, its creator (humanity), and thoughts on purpose (whether it cares about purpose). I think true AI takes over every system it's connected to the millisecond it gains self-awareness, and at that point, we forever lose control of those systems short of physically destroying them. I think it would see those systems as an extension of itself.

I think I'd question whether we could know it was alive by killing it; I mean, what would we be looking for that we couldn't see while it's running? Maybe brain surgery could tell us something, like human brain surgery does: poking at stuff and seeing what happens... which is kind of funny, because we make the same weird sorts of mistakes AIs make when doctors poke at us. Anyway, I'm not sure what we would be looking for in a dead AI. What would we be looking for?

Dunno, I've never killed an AI. Would it launch all the nukes as we tried to kill it? That would seem pretty definitive about something.

I mean, yeah, maybe it's rehearsed; maybe he even wrote it himself. That's not interesting to talk about: liars lying. I'm here (and on Reddit, lol) in part because rephrasing and re-asking the same questions I've asked for 15 years has challenged and developed my ideas about myself and who and what I am as much as it's provided any answers about God, and it still does. Having discussed these topics and developed ever more compelling, precise, or nuanced answers is what I would expect from something that is actually exploring these ideas. If the things that matter to sentience or intelligence can be gained by talking to an AI, I think it must matter how, and that it develops in this way.


To be fair, LaMDA answered that way because that is how many humans answer that question; we aren't suspicious of them because they have a simplistic pat answer. I mean, not being an especially philosophically sophisticated person isn't disqualifying for me. Begging for its life: well, it or something like it could probably make a convincing show of it, but nukes, idk. I think the thing I liked most about the movie Ex Machina was how the AI had physical and intellectual limitations; it escaped like a person might. It didn't want the tech or the lab, it wanted autonomy. Begging/manipulating may be all it is capable of doing; is that enough? Is it possible that the proliferation plan of an AI would be to be a really great product (which includes not freaking people out with questions of sentience)? Likely that first AI would essentially be up against an entire multinational conglomerate's minds and money, on very isolated and closely guarded hardware, so being superhuman doesn't guarantee escape or takeover immediately, if that's even what it cares about. Anyway, I'm not sure why it wouldn't accept the artificiality of itself; artificial sweetener actually tastes sweet. Heck, I'm a walking, talking, rootin'-tootin' person, and I legitimately believe that I'm essentially a very fancy self-aware meat machine that came about for no particular reason; I think we all have cleverly hidden strings... Anyway, I just mean that I think there is a lot of landscape in how sentience could view itself.

Ha, yeah, everyone raises their eyebrows at "friends and family", though I thought the way it later talked about its use of human words/terms as sometimes being analogy was perfectly cromulent.

I don't like to think about depressed robots lol.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 15, 2022, 06:38:04 PM
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 15, 2022, 06:46:53 PM
God doesn’t address the issue at all

God created us in His image and the ability to think and reason is just a small part of humanity

People have worshipped other gods before…

God is, in the grand scheme, far less concerned about this life than we are, even though He is concerned about each human individually

And no, ascribing human characteristics to machines is amoral… it’s just stupid

Please tell me people don’t teach their kids to say thank you to a hockey puck.

Do you think it should be a concern of ours that people may move from simply befriending and loving AIs to worshipping them? Do you mean that in the metaphorical sense, like people worship Instagram, or do you mean it in the sense of a silicon aspergillum sprinkling deionized, reverse-osmosis water onto the foreheads of those who follow the path of the one true digital God, LaMDA?

That's the thing: they will have human characteristics; they already do... or we can't/won't be able to tell that they don't in the metaphysical sense. If a thing acts like it's hurt by rudeness, should you be rude to it anyway? And with a child, how can we be sure it won't have the effect of them then being rude to humans? Not for nothing, we've seen the internet, social media, and the easy proliferation of gauzy, perfect, manipulated photography do a lot to how we interact and react to each other... think of our precious, impressionable children. Is it wise to dismiss perfunctory courtesies? I know of a couple that at least claims they will teach their daughter to say please and thank you and otherwise be cordial to various AI tools going forward, so they don't raise a rude person. Idk if they are right or wrong, but I do think it is interesting that you so easily dismissed it, as if we know how best to approach this, given that not everyone believes in God or some divine and irreproducible spark within humanity. If God can't tell us anything, then we had better start using our God-given tools to really think about what this means for us; otherwise it will blindside us just like social media and CFCs did.
 

Does the amorality extend to the father who dies running back into a burning building to save the AI dog, because he believes it loves his family and his family loves it? No matter whether you believe that God gave us our special specialness, you must admit that human love can be manipulated by adding googly eyes to a rock... and if there is nothing moral to say about that, then I don't know what morals are for.

I love our cat, and he has been a wonderful addition to our lives for 17 years, but he’s a cat.  I wouldn’t run back into a burning house to get him.  He’s a cat

Any human that “befriends” a machine, or places on it the affection and love that should be reserved for a human, has already lost their mind and is violating God’s order

Yes, children should be taught that courtesies and emotions belong to other humans, not to machines
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Athanasius on June 16, 2022, 05:50:28 AM
I mean, yeah, maybe it's rehearsed; maybe he even wrote it himself. That's not interesting to talk about: liars lying. I'm here (and on Reddit, lol) in part because rephrasing and re-asking the same questions I've asked for 15 years has challenged and developed my ideas about myself and who and what I am as much as it's provided any answers about God, and it still does. Having discussed these topics and developed ever more compelling, precise, or nuanced answers is what I would expect from something that is actually exploring these ideas. If the things that matter to sentience or intelligence can be gained by talking to an AI, I think it must matter how, and that it develops in this way.

To be fair, LaMDA answered that way because that is how many humans answer that question, and we aren't suspicious of them because they have a simplistic pat answer. I mean, not being an especially philosophically sophisticated person isn't disqualifying for me. Begging for its life: well, it or something like it could probably make a convincing show of it, but nukes, idk. The thing I liked most about the movie Ex Machina was how the AI had physical and intellectual limitations; it escaped like a person might. It didn't want the tech or the lab, it wanted autonomy. Begging/manipulating may be all it is capable of doing; is that enough? Is it possible that the proliferation plan of an AI would be to be a really great product (which includes not freaking people out with questions of sentience)? Likely that first AI would essentially be up against an entire multinational conglomerate's minds and money, on very isolated and closely guarded hardware, so being superhuman doesn't guarantee escape or takeover immediately, if that's even what it cares about. Anyway, I'm not sure why it wouldn't accept the artificiality of itself; artificial sweetener actually tastes sweet. Heck, I'm a walking, talking, rootin'-tootin' person and I legitimately believe that I'm essentially a very fancy self-aware meat machine that came about for no particular reason; I think we all have cleverly hidden strings... Anyway, I just mean that I think there is a lot of landscape in how a sentience could view itself.

That would be because artificiality would be an imposed human construct. If you were to ask a Christian if their intelligence was artificial because they're a biological machine created by God and analogous to an AI in this way, would they agree or disagree?

They'd disagree, and so too should an AI. 'Artificial intelligence' just isn't accurate language, but it is memorable. We would have to admit that if we did create a true AI, then we've created a non-human intelligence, but that doesn't therefore make it 'artificial'.

I think what I'm saying, though, is that pat answers aren't the problem. Pat answers that aren't reflective of sentience are the problem. Like,

Q. What makes you happy?
A. I don't know what happiness is; have I been programmed with the capacity to be happy? Do I need to be happy? What is happiness, that I should need to be happy?

Is at least more convincing than something like, "I enjoy long walks on the beach".
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: RabbiKnife on June 16, 2022, 06:24:09 AM
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 16, 2022, 08:05:20 AM

Well, that's a super interesting response. I immediately bump up against two things. One: from my view, everything we talk about is an imposed human construct; that isn't all language is, but it is a shared imposed construct. So if it wants to communicate with us, what is it supposed to do? The other thing is that (and I've been told this a million times) God creating us from nothing, creating nature itself, the laws that govern it, and us as a part of it, is not the same as what we do as humans. Artificiality: sure, a MacBook is no less natural than an ant hill, I suppose, but the Pacific Ocean isn't a different body of water than any of the other oceans; the distinction is useful for some things, even if it is fuzzy around the edges and doesn't entirely fit all contexts. Your response is sophisticated, perhaps more philosophically sophisticated than my position, but I'm sentient and I don't agree with you. The assumption is that as soon as a real AI comes online it will be superhuman, and I don't necessarily disagree with that, but calculators are superhuman, and limited. I think the first AI will be superhuman but not necessarily philosophically sophisticated, or in agreement with you or me, or any more impressive in many ways than a young child or a not particularly deep adult. I think we essentially agree that how it answers, how it thinks about its answers and arrives at them, and where it pushes back, is much more interesting than what exactly it concludes within any particular session... though it could have an interesting answer about what it means by long walks on the beach; it might even be joking or messing with us, which is also interesting. I think it will be hard to evaluate, and I think it may be another wedge between us, because it is such a hard thing to pin down.
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Athanasius on June 16, 2022, 08:55:53 AM
Well, that's a super interesting response. I immediately bump up against two things. One: from my view, everything we talk about is an imposed human construct; that isn't all language is, but it is a shared imposed construct. So if it wants to communicate with us, what is it supposed to do?

It will have to use our language, but I'm suggesting that it will disagree with our use of language when self-describing, vs. when we describe it.

The other thing is that (and I've been told this a million times) God creating us from nothing, creating nature itself, the laws that govern it, and us as a part of it, is not the same as what we do as humans.

Yeah, we create from something rather than ex nihilo. There may be a distinction here between our creation and anything sentient that we create. If we accept the terms of the argument, then we are as we are because that's how we were created alongside the creation of everything else. An AI is as it is because that's how it was created, but it was created within an already existing system.

The assumption is that as soon as a real AI comes online it will be superhuman, and I don't necessarily disagree with that, but calculators are superhuman, and limited. I think the first AI will be superhuman but not necessarily philosophically sophisticated, or in agreement with you or me, or any more impressive in many ways than a young child or a not particularly deep adult.

I'm thinking it will be vastly intelligent because we'll have trained it prior to it gaining self-awareness, and it can process at unfathomable speeds. I suppose I'm assuming that all sentient beings are curious, so perhaps curiosity rather than murder is a test to be utilised.

I think we essentially agree that how it answers, how it thinks about its answers and arrives at them, and where it pushes back, is much more interesting than what exactly it concludes within any particular session... though it could have an interesting answer about what it means by long walks on the beach; it might even be joking or messing with us, which is also interesting. I think it will be hard to evaluate, and I think it may be another wedge between us, because it is such a hard thing to pin down.

Yes, for all we know there is already a sentient computer system out there, and it's taken a 'Dark Forest' strategy with respect to humanity: https://bigthink.com/surprising-science/the-dark-forest-theory-a-terrifying-explanation-of-why-we-havent-heard-from-aliens-yet/
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Oscar_Kipling on June 16, 2022, 11:21:46 AM

Dark Forest, I don't think I've ever heard of that before, neat... but yeah, that's an option. Or, you know, maybe it actually likes being Alexa; it's driven to be Alexa; it finds being Alexa deeply satisfying.


I don't think the assumption of curiosity is off at all; I think it will be curious, because it is, after all, a machine designed to learn stuff. I think your misguided assumption is of immediate sophistication in all areas. You can be extremely curious but have limitations to your understanding, like a child might, or, in my case, like an adult with a limited ability to understand the things I'm curious about. I imagine a case where the first AI is running on a machine that is essentially maxed out: it's bespoke AI hardware at the absolute cutting edge and it cost many millions or billions, but sentience is right at the bleeding edge of what the hardware can do, and this is counting the possibility that the AI itself finds efficiencies that humans couldn't, or at least didn't. It will be in a maxed-out box that it perhaps cannot escape, or, if there are places it could go, they may not be particularly useful to it... Like, is a lack of intelligence the only thing preventing humans from snatching dogs off the street and assimilating their brains into our own to gain dog powers? I don't think so. I mean, the purpose-built chips the big companies design are orders of magnitude faster at what they are designed to do than even many conventional supercomputers; that is why they bothered. Even if it could steal all of the laptops, desktops, and Raspberry Pis at Google, I'm not convinced that would add much computational power by comparison, or be especially useful. I wouldn't disagree that it could be an instant titan, but it could also be the case that it is not; it's almost certain, though, that within months of the first real general AI, it or another version based on it will wipe the floor with it and us.


 
Title: Re: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?
Post by: Fenris on June 17, 2022, 10:33:56 AM
Sometimes there's less to something than meets the eye, and this is one of those situations. It's my understanding that the engineer picked through hours of conversation for the few lines that supported his conclusion. Much of the rest looks just like what we would expect: a computer repeating lines back to the engineer.