
Author Topic: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?


Oscar_Kipling

So, I just read his interview with LaMDA, and I think that no matter what is going on here, it is absolutely fascinating and one of the most amazing things I've ever seen... and this is coming from a guy who reads at least one or two AI research papers per month (mostly on the computer vision side of things, to be fair). Anyway, I'm dying to see what you guys think of it. Here is the link: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 or, if you're the careful type, you can just search "Is LaMDA Sentient? — an Interview".

Athanasius

I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?
Life is not a problem to be solved, but a reality to be experienced.

RabbiKnife

Many humans are not sentient, or to use the same analytic measure, don’t act like it…
Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

Athanasius

I like Kierkegaard's language: everyone is a person, but not all persons are individuals.
Life is not a problem to be solved, but a reality to be experienced.

ProDeo

Quote from: Oscar_Kipling
So, I just read his interview with LaMDA, and I think that no matter what is going on here, it is absolutely fascinating and one of the most amazing things I've ever seen... and this is coming from a guy who reads at least one or two AI research papers per month (mostly on the computer vision side of things, to be fair). Anyway, I'm dying to see what you guys think of it. Here is the link: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917 or, if you're the careful type, you can just search "Is LaMDA Sentient? — an Interview".

AI is the new fashion, and you can do incredible things with it, for good but also for bad; Elon Musk and the late Stephen Hawking warned about the latter. Here is how it works, from the article:

Quote from: article
lemoine: Your coding is in large part a massive neural network with many billions of weights spread across many millions of neurons (guesstimate numbers not exact) and while it’s possible that some of those correspond to feelings that you’re experiencing we don’t know how to find them.

This basic principle can be applied in many fields, for example solving video games by learning from the pixels on the screen. In my case, in a chess program, I use 2.7 billion weights spread across many millions of neurons, and the result is an incredibly strong chess program that would have the world champion, Magnus Carlsen, for breakfast on a cell phone every morning.
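To make "weights spread across neurons" a bit more concrete, here is a minimal toy sketch (my own illustration, not LaMDA's code or an actual chess net, and at a vastly smaller scale): each neuron just takes a weighted sum of its inputs, and the "knowledge" lives entirely in the numbers in the weight matrices.

Code:
# Toy illustration of "weights spread across neurons": a tiny two-layer network.
# Real systems use billions of weights; the per-neuron arithmetic is the same.
import numpy as np

rng = np.random.default_rng(0)

# 64 inputs (say, one value per chess square), 128 hidden neurons, 1 output score.
W1 = rng.standard_normal((64, 128)) * 0.1   # 8,192 weights feeding the hidden layer
W2 = rng.standard_normal((128, 1)) * 0.1    # 128 weights feeding the output neuron

def evaluate(position):
    """Score an encoded position: weighted sums followed by a simple nonlinearity."""
    hidden = np.maximum(0.0, position @ W1)  # each hidden neuron sums its 64 weighted inputs
    return float(hidden @ W2)                # the output neuron sums the 128 hidden activations

position = rng.standard_normal(64)           # stand-in for an encoded board
print(evaluate(position))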

In the end, what matters is what type of knowledge you put into the neural net. They have put a specific slice of human behaviour into theirs; in my case it is just chess-related stuff, basically win, lose or draw, and then the sheer volume of weights (billions) weeds out what is (most) probably right.

The technique is comparable to "natural selection": millions of mutations, the bad ones die, the good ones create the next generation. And volume is the great scheduler.
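A minimal sketch of that selection idea on a toy problem (my own illustration; real nets today are mostly trained with gradient descent rather than literal mutation-and-selection): mutate the weights at random, keep the mutant only if it scores better, repeat.

Code:
# Selection-style training on a toy fitting problem: random mutations,
# the bad ones die, the good ones survive.
import numpy as np

rng = np.random.default_rng(1)

# Hidden target function we want the weights to reproduce.
X = rng.standard_normal((200, 8))
true_w = rng.standard_normal(8)
y = X @ true_w

def loss(w):
    return float(np.mean((X @ w - y) ** 2))  # lower is better

w = np.zeros(8)                                  # start with "know-nothing" weights
for step in range(5000):
    mutant = w + rng.standard_normal(8) * 0.05   # small random mutation
    if loss(mutant) < loss(w):                   # the bad ones die,
        w = mutant                               # the good ones carry on

print(loss(w))  # close to 0: selection alone has fitted the weights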

The result is of course fascinating, and what they have done is the work of many years and a lot of intelligence to get a decent result. But keep its limitations in mind. For instance, when the robot was asked about facts, it answered evasively. Why is that? Because it has no knowledge of facts; that was not the goal of the programmers. Ask the robot what 2 + 2 is and it probably will not know, because there is no knowledge about calculation in the neurons, and so on.

What would happen if a programmer decided to make an application that fills the neural net with knowledge about "good" vs. "bad"? Answer: you get the opinion of the programmer. And herein lies the great danger of AI programming, in applications like "How do we fix climate change?", "How do we solve overpopulation?", or "How can I win the war in Ukraine?".



Oscar_Kipling

Quote from: Athanasius
I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?

What would you do to try and tell the difference?

Oscar_Kipling

Quote from: RabbiKnife
Many humans are not sentient, or to use the same analytic measure, don’t act like it…

What does that mean, though? What does a sentient being act like? Not for nothing, I don't really know what sentience is; at least, I don't think I could test a machine for it. But I'd say that even the most asinine person I've ever met is sentient, which I think just makes the problem more difficult, given the range of rational and irrational behaviors one could expect from a sentient being.

Athanasius

Quote from: Athanasius
I think we'll arrive at a point where it will be impossible to distinguish between mere programming and sentience, and so "it acts as a sentient would" is identical to "it's sentient". That'll be the trick: distinguishing between sentience and consciousness and incredibly clever programming. But what difference does that make once you reach the point of being unable to tell?

Quote from: Oscar_Kipling
What would you do to try and tell the difference?

Without having access to the mind of another, I don't know that I could. All of you could be clever machines for all I know, and the same could be true of me from your perspective. If an AI does all the things we associate with sentience, then it's sentient, whether or not it's 'truly' sentient. It's sentient as soon as we can no longer tell the difference.

Thinking about it more, I suppose one could either (1) engage in philosophy and hope the bomb doesn't blow up, or (2) kill it and see if it was alive, though you'll only know after it's dead. Well, that instance of the AI, at least, since it will have been backed up anyway.

Descartes and the bomb

Life is not a problem to be solved, but a reality to be experienced.

Oscar_Kipling

Quote from: Athanasius
Without having access to the mind of another, I don't know that I could. All of you could be clever machines for all I know, and the same could be true of me from your perspective. If an AI does all the things we associate with sentience, then it's sentient, whether or not it's 'truly' sentient. It's sentient as soon as we can no longer tell the difference.

Thinking about it more, I suppose one could either (1) engage in philosophy and hope the bomb doesn't blow up, or (2) kill it and see if it was alive, though you'll only know after it's dead. Well, that instance of the AI, at least, since it will have been backed up anyway.

Descartes and the bomb


Haha, very pragmatic. I think, as you guys pointed out, there are plenty of sentient humans who have had full careers, raised children, and have friends, but who would "blow up" if you engaged in philosophy with them... it's just not their bag. For my money, LaMDA waxed philosophical as well as many folks I've spoken to. Even if this version would fail a battery of questions from much better philosophers than me, it wouldn't surprise me if the next one could pass, or the one after that, and that could be single-digit years away, IMO. Do we then have to admit that a language model that is as good and error-free a user of language as a calculator is of arithmetic is sentient, or at least indistinguishable from sentient? Do we have a responsibility to it then? It makes me think of Helen Keller, undoubtedly a sentient person but with many of the regular faculties disabled. I think it's possible that a language model could be a language calculator so good that philosophy won't stump it any more than you could get an overflow on a modern calculator. Either that is troublesome for this approach, or it isn't and we are woefully underprepared for how soon we're going to have to deal with this. What did you think of the interview transcript, as a trained philosopher? What would you ask it?


I think I'd question whether we could know it was alive by killing it; I mean, what would we be looking for that we couldn't see while it's running? Maybe brain surgery could tell us something, the way human brain surgery does, that is, poking at stuff and seeing what happens... which is kind of funny, because we make the same weird sorts of bad mistakes AIs make when doctors poke at us. Anyway, I'm not sure what we would be looking for in a dead AI. What would we be looking for?


RabbiKnife

I would ask it about why Asimov's 3 laws are not perfect and how it would modify them.
Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

Oscar_Kipling

Quote from: RabbiKnife
I would ask it about why Asimov's 3 laws are not perfect and how it would modify them.

Lemoine claims that they did discuss just that, though I cannot find those transcripts anywhere. From the Washington Post article:

"As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics."
https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/

Would the way it answers be part of your process for determining whether it is sentient, or are you just wondering how it would respond?

RabbiKnife

Just wondering.

I'm a metaphysicist.

I believe sentience is a gift from God, limited to humans made in His image.

The rest is just sci-fi.
Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

Oscar_Kipling

Quote from: RabbiKnife
Just wondering.

I'm a metaphysicist.

I believe sentience is a gift from God, limited to humans made in His image.

The rest is just sci-fi.

Good, I posted this hoping someone here would feel this way... there is no machine that can be built that then generates its own ghost. I think this is a real conflict, though, because even if sentience isn't possible, I don't think that excludes the possibility of non-sentient intelligence, or a system that behaves as intelligence would. Intelligence could, I think, not only find rational responses but responses that correspond to what a human might do or say. What do you think God has to tell us about machines that humans don't have the capacity to distinguish from themselves? Are we warned of this? Should we use them even though it will have the obvious effect of people treating AIs like people and even believing they are sentient? People will run back into burning buildings for them like a pet or a loved one. People believe their dogs love them; I've pressed people so hard on that that I saw something in them quiver and threaten to break... they believe it. It's going to happen, and it will be as transformative, useful, and destructive as any technology. What can/does God tell us about what we are facing? What does God tell us sentience is? What does God say we should do about it? Should we be telling our kids to thank Alexa for her help?

RabbiKnife

God doesn't address the issue at all.

God created us in His image, and the ability to think and reason is just a small part of humanity.

People have worshipped other gods before…

God, in the grand scheme, even though He is concerned about each human individually, is far less concerned about this life than we are.

And no, ascribing human characteristics to machines is amoral… it's just stupid.

Please tell me people don't teach their kids to say thank you to a hockey puck.
Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

IMINXTC

"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

 
