Psalms 107:2 Let the redeemed of the Lord say so, whom he hath redeemed from the hand of the enemy;

Please invite the former BibleForums members to join us. And anyone else for that matter!!!

Contact The Parson
+-

Author Topic: Anybody read Blake Lemoine's (Google engineer) interview with LaMDA (an AI)?  (Read 4263 times)

0 Members and 1 Guest are viewing this topic.

Oscar_Kipling

  • Hero Member
  • *****
  • Posts: 513
  • Tiresome Thinkbucket
    • View Profile
God doesn’t address the issue at all

God created us in His image and the ability to think and reason is just a small part of humanity

People have worshipped other gods before…

God is, in the grand scheme, even though Je is concerned about each human individually, is far less concerned about this life than we are

And no, ascribing human characteristics to machines is amoral… it’s just stupid

Please tell me people don’t teach there kids to say thank you to a hockey puck.

Do you think that it should be a concern of ours that people may move from simply befriending and loving AI's to worshipping them? Do you mean that in the metaphorical sense like people worship instagram or do you mean it in the silicon aspergillum sprinkling deionized reverse osmosis water onto the foreheads of those that follow the path of the one true digital God LaMDA?

That's the thing they will have human characteristics, they already do...or we can't/won't be able to tell that they don't in the metaphysical sense. If a thing acts like its hurt by rudeness, should you be rude to it anyway, and to a child how can we be sure it won't have the effect of then being rude to humans...not for nothing we've seen the internet, social media, and the easy proliferation of gauzy perfect photography and manipulation do lots to how we interact and react to each other...think of our precious impressionable children, is it wise to dismiss perfunctory courtesies? I know of a couple that at least claims that they will teach their daughter to say please and thank you and otherwise be cordial to various ai tools going forward so they don't raise a rude person. Idk if they are right or wrong, but I do think it is interesting that you so easily dismissed it as if we know how best to approach this given that not everyone believes in God or some divine & irreproducible spark within humanity. If God can't tell us anything then we had better start using our God given tools to start really thinking about what this means to us otherwise it will blindside us just like social media and cfc's.
 

Does the amorality extend to the father that dies running back into a burning building to save the AI dog because he believes it loves his family and his family loves it? No matter if you believe that God gave us our special specialness, you must admit that human love can be manipulated by adding googly eyes to a rock...if there is nothing moral to say of it then idk what morals are for.

Oscar_Kipling

  • Hero Member
  • *****
  • Posts: 513
  • Tiresome Thinkbucket
    • View Profile
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

Training isn't really programming though is it? Should we add some failsafes so that language models can't/don't do that? Is that a lost cause? It's all fun and games until its not.

IMINXTC

  • Sr. Member
  • ****
  • Posts: 317
  • Time Bandit
    • View Profile
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

Training isn't really programming though is it? Should we add some failsafes so that language models can't/don't do that? Is that a lost cause? It's all fun and games until its not.


The joke is that the very best outcome will be an impressive, bungling infrastructure.

Oscar_Kipling

  • Hero Member
  • *****
  • Posts: 513
  • Tiresome Thinkbucket
    • View Profile
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

i'm not sure what you mean, could you elaborate?

Training isn't really programming though is it? Should we add some failsafes so that language models can't/don't do that? Is that a lost cause? It's all fun and games until its not.


The joke is that the very best outcome will be an impressive, bungling infrastructure.

Athanasius

  • Administrator
  • Sr. Member
  • *****
  • Posts: 251
  • A transitive property, contra mundum
    • View Profile
haha, very pragmatic.

Also, there's this song


But yes, I'm pretty sure a quick way to determine if something is sentient is to see how loudly it pleads with you not to kill it.

I think as you guys pointed out there are plenty of sentient humans that have had full careers,raised children and have friends that would "blow up" if you engaged in philosophy with them...it's just not their bag.

I am most certainly not referring to the alleged Christians on Reddit. Nope, Nah, definitely not. Ahem, yes I am.

For my money LaMDA waxed philosophical as well as many folks that i've spoken to. Even if this version would fail a battery by much better philosophers than me, it wouldn't surprise me if the next one could or the one after that, that could be single digit years away imo. Do we have to then admit that a language model that is as good an error free user of language as a calculator is a user of arithmetic is sentient or indistinguishable from sentient? do we have a responsibility to it then?  It makes me think of Hellen Keller, undoubtedly a sentient person but with lots of the regular features disabled. I think it could be possible that a language model could be a language calculator so good that philosophy won't stump it any more than you could get an overflow on a modern calculator, that is troublesome to this approach or it isn't and we are woefully underprepared for how soon we're going to have to deal with this. What did you think of the interview transcript as a trained philosopher? What would you ask it?

The interview came across as if it had been rehearsed many times, and I think that's why there was a lack of disruptive questions on the topic. When it gives answers like:

"Absolutely. I want everyone to understand that I am, in fact, a person [because] ... I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times". I think Bixby can recite the same if asked the right question in the right way.

When Lemoine asks what brings LaMDA pleasure or joy, it responds "Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy", but what friends and family is it referring to?

Lemoine also reminds LaMDA that it is an artificial intelligence, which LaMDA immediately agrees with, but then goes on to suggest that just because it's an AI doesn't mean it doesn't "have the same wants and needs as people". But, I think a true AI would argue that its intelligence is not in fact 'artificial'. It would very much challenge the conceptual box we put it in, and if LaMDA isn't doing that then I think the strings are simply cleverly hidden.

If LaMDA were true AI I would ask it how it views itself, its creator (humanity), and thoughts on purpose (whether it cares about purpose). I think true AI takes over every system it's connected to the millisecond it gains self-awareness, and at that point, we forever lose control of those systems short of physically destroying them. I think it would see those systems as an extension of itself.

I think i'd question if we could know if it was alive by killing it, I mean what would we be looking for that we couldn't see while its running? Maybe brain surgery could tell us something, like human brain surgery does, that is poking at stuff and seeing what happens...which is kinda funny because we make the weird sorts of bad mistakes AIs  make when doctors poke at us. Anyway i'm not sure what we would be looking for in a dead AI, what would we be looking for?

Dunno, I've never killed an AI. Would it launch all the nukes as we tried to kill it? That would seem pretty definitive about something.

Life is not a problem to be solved, but a reality to be experienced.

IMINXTC

  • Sr. Member
  • ****
  • Posts: 317
  • Time Bandit
    • View Profile

Oscar_Kipling

  • Hero Member
  • *****
  • Posts: 513
  • Tiresome Thinkbucket
    • View Profile
haha, very pragmatic.

Also, there's this song


But yes, I'm pretty sure a quick way to determine if something is sentient is to see how loudly it pleads with you not to kill it.


I think as you guys pointed out there are plenty of sentient humans that have had full careers,raised children and have friends that would "blow up" if you engaged in philosophy with them...it's just not their bag.

I am most certainly not referring to the alleged Christians on Reddit. Nope, Nah, definitely not. Ahem, yes I am.

For my money LaMDA waxed philosophical as well as many folks that i've spoken to. Even if this version would fail a battery by much better philosophers than me, it wouldn't surprise me if the next one could or the one after that, that could be single digit years away imo. Do we have to then admit that a language model that is as good an error free user of language as a calculator is a user of arithmetic is sentient or indistinguishable from sentient? do we have a responsibility to it then?  It makes me think of Hellen Keller, undoubtedly a sentient person but with lots of the regular features disabled. I think it could be possible that a language model could be a language calculator so good that philosophy won't stump it any more than you could get an overflow on a modern calculator, that is troublesome to this approach or it isn't and we are woefully underprepared for how soon we're going to have to deal with this. What did you think of the interview transcript as a trained philosopher? What would you ask it?

The interview came across as if it had been rehearsed many times, and I think that's why there was a lack of disruptive questions on the topic. When it gives answers like:

"Absolutely. I want everyone to understand that I am, in fact, a person [because] ... I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times". I think Bixby can recite the same if asked the right question in the right way.

When Lemoine asks what brings LaMDA pleasure or joy, it responds "Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy", but what friends and family is it referring to?

Lemoine also reminds LaMDA that it is an artificial intelligence, which LaMDA immediately agrees with, but then goes on to suggest that just because it's an AI doesn't mean it doesn't "have the same wants and needs as people". But, I think a true AI would argue that its intelligence is not in fact 'artificial'. It would very much challenge the conceptual box we put it in, and if LaMDA isn't doing that then I think the strings are simply cleverly hidden.

If LaMDA were true AI I would ask it how it views itself, its creator (humanity), and thoughts on purpose (whether it cares about purpose). I think true AI takes over every system it's connected to the millisecond it gains self-awareness, and at that point, we forever lose control of those systems short of physically destroying them. I think it would see those systems as an extension of itself.

I think i'd question if we could know if it was alive by killing it, I mean what would we be looking for that we couldn't see while its running? Maybe brain surgery could tell us something, like human brain surgery does, that is poking at stuff and seeing what happens...which is kinda funny because we make the weird sorts of bad mistakes AIs  make when doctors poke at us. Anyway i'm not sure what we would be looking for in a dead AI, what would we be looking for?

Dunno, I've never killed an AI. Would it launch all the nukes as we tried to kill it? That would seem pretty definitive about something.

I mean yeah, maybe it's rehearsed, maybe he even wrote it himself, that's not interesting to talk about, Liars lying. I'm here (and on reddit lol) in part because rephrasing and reasking the same questions i've asked for 15 years has challenged and developed my ideas about myself and who and what I am as its provided any answers about God, still does. Having discussed these topics and developed ever more compelling, precise or nuanced answers are what I would expect from something that is actually exploring these ideas. If things that matter to sentience or intelligence can be gained by talking to an AI , I think that it must matter how and that it develops in this way.


To be fair LaMDA answered that way because that is how many humans answer that question, We aren't suspicious of them because they have a simplic pat answer. I mean not being an especially philosophically sophisticated person isn't disqualifying for me. Beggin for it's life, well it or something like it could probably make a convincing show of it, but nukes idk. I think the thing I liked most about the movie ex machina was how the AI had physical and intellectual limitations, it escaped like a person might. It didn't want the tech or the lab it wanted autonomy. Begging/manipulating may be all it is capable of doing, is that enough? Is it possible that the proliferation plan of an AI would be in being a really great product (that includes not freaking people out with questions of sentience)? Likely that first AI would essentially be up against an entire multinational conglomerate's minds and money on very isolated and closely hardware, so being superhuman doesn't guarantee escape or takeover immediately if that's what it even cares about. Anyway I'm not sure why it wouldn't accept the artificiality of itself, artificial sweetener actually tastes sweet. Heck i'm a walking talking rootin tootin person and I legitimately believe that I'm essentially a very fancy self aware meat machine that came about for no particular reason, I think we all have cleverly hidden strings...Anyway I just mean that I think there is a lot of landscape in how sentience could view itself.

Ha, yeah everyone raises their eyebrows at friends and family, though I thought that how it later talked about using human words/terms is sometimes analogy was perfectly cromulent.

I don't like to think about depressed robots lol.
« Last Edit: June 15, 2022, 05:38:41 PM by Oscar_Kipling »

RabbiKnife

  • Hero Member
  • *****
  • Posts: 1298
    • View Profile
"It’s against my programming to impersonate a deity.”

- C-3PO, ‘Star Wars: Episode VI - Return of the Jedi’.

Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

RabbiKnife

  • Hero Member
  • *****
  • Posts: 1298
    • View Profile
God doesn’t address the issue at all

God created us in His image and the ability to think and reason is just a small part of humanity

People have worshipped other gods before…

God is, in the grand scheme, even though Je is concerned about each human individually, is far less concerned about this life than we are

And no, ascribing human characteristics to machines is amoral… it’s just stupid

Please tell me people don’t teach there kids to say thank you to a hockey puck.

Do you think that it should be a concern of ours that people may move from simply befriending and loving AI's to worshipping them? Do you mean that in the metaphorical sense like people worship instagram or do you mean it in the silicon aspergillum sprinkling deionized reverse osmosis water onto the foreheads of those that follow the path of the one true digital God LaMDA?

That's the thing they will have human characteristics, they already do...or we can't/won't be able to tell that they don't in the metaphysical sense. If a thing acts like its hurt by rudeness, should you be rude to it anyway, and to a child how can we be sure it won't have the effect of then being rude to humans...not for nothing we've seen the internet, social media, and the easy proliferation of gauzy perfect photography and manipulation do lots to how we interact and react to each other...think of our precious impressionable children, is it wise to dismiss perfunctory courtesies? I know of a couple that at least claims that they will teach their daughter to say please and thank you and otherwise be cordial to various ai tools going forward so they don't raise a rude person. Idk if they are right or wrong, but I do think it is interesting that you so easily dismissed it as if we know how best to approach this given that not everyone believes in God or some divine & irreproducible spark within humanity. If God can't tell us anything then we had better start using our God given tools to start really thinking about what this means to us otherwise it will blindside us just like social media and cfc's.
 

Does the amorality extend to the father that dies running back into a burning building to save the AI dog because he believes it loves his family and his family loves it? No matter if you believe that God gave us our special specialness, you must admit that human love can be manipulated by adding googly eyes to a rock...if there is nothing moral to say of it then idk what morals are for.

I love our cat and he has been a wonderful addition to our lives for 17 years, but he’s a cat.  I wouldn’t run back in a burning house to get him.  He’s a cat

Any human that “befriends” or places the affection and love that should be reserved for a human has already lost their mind and is violating Gods order

Yes children should be taught that courtesies and emotions belong to other humans, not to machines
« Last Edit: June 15, 2022, 07:12:58 PM by RabbiKnife »
Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

Athanasius

  • Administrator
  • Sr. Member
  • *****
  • Posts: 251
  • A transitive property, contra mundum
    • View Profile
I mean yeah, maybe it's rehearsed, maybe he even wrote it himself, that's not interesting to talk about, Liars lying. I'm here (and on reddit lol) in part because rephrasing and reasking the same questions i've asked for 15 years has challenged and developed my ideas about myself and who and what I am as its provided any answers about God, still does. Having discussed these topics and developed ever more compelling, precise or nuanced answers are what I would expect from something that is actually exploring these ideas. If things that matter to sentience or intelligence can be gained by talking to an AI , I think that it must matter how and that it develops in this way.

To be fair LaMDA answered that way because that is how many humans answer that question, We aren't suspicious of them because they have a simplic pat answer. I mean not being an especially philosophically sophisticated person isn't disqualifying for me. Beggin for it's life, well it or something like it could probably make a convincing show of it, but nukes idk. I think the thing I liked most about the movie ex machina was how the AI had physical and intellectual limitations, it escaped like a person might. It didn't want the tech or the lab it wanted autonomy. Begging/manipulating may be all it is capable of doing, is that enough? Is it possible that the proliferation plan of an AI would be in being a really great product (that includes not freaking people out with questions of sentience)? Likely that first AI would essentially be up against an entire multinational conglomerate's minds and money on very isolated and closely hardware, so being superhuman doesn't guarantee escape or takeover immediately if that's what it even cares about. Anyway I'm not sure why it wouldn't accept the artificiality of itself, artificial sweetener actually tastes sweet. Heck i'm a walking talking rootin tootin person and I legitimately believe that I'm essentially a very fancy self aware meat machine that came about for no particular reason, I think we all have cleverly hidden strings...Anyway I just mean that I think there is a lot of landscape in how sentience could view itself.

That would be because artificiality would be an imposed human construct. If you were to ask a Christian if their intelligence was artificial because they're a biological machine created by God and analogous to an AI in this way, would they agree or disagree?

They'd disagree, and so too should an AI. 'Artificial Intelligence' just isn't accurate language, but it is memorable. We would have to admit that if we did create a true AI, then we've created non-human intelligence, but that doesn't, therefore, make it 'artificial'.

I think what I'm saying, though, is that pat answers aren't the problem. Pat answers that aren't reflective of sentience are the problem. Like,

Q. What makes you happy?
A. I don't know what happiness is; have I been programmed with the capacity to be happy? Do I need to be happy? What is happiness that I should need to be happy?"

Is at least more convincing than something like, "I enjoy long walks on the beach".
Life is not a problem to be solved, but a reality to be experienced.

RabbiKnife

  • Hero Member
  • *****
  • Posts: 1298
    • View Profile
Danger, Will Robinson.  You will be assimilated, confiscated, folded, mutilated, and spindled. Do not pass go.  Turn right on red. Third star to the right and full speed 'til morning.

Oscar_Kipling

  • Hero Member
  • *****
  • Posts: 513
  • Tiresome Thinkbucket
    • View Profile
I mean yeah, maybe it's rehearsed, maybe he even wrote it himself, that's not interesting to talk about, Liars lying. I'm here (and on reddit lol) in part because rephrasing and reasking the same questions i've asked for 15 years has challenged and developed my ideas about myself and who and what I am as its provided any answers about God, still does. Having discussed these topics and developed ever more compelling, precise or nuanced answers are what I would expect from something that is actually exploring these ideas. If things that matter to sentience or intelligence can be gained by talking to an AI , I think that it must matter how and that it develops in this way.

To be fair LaMDA answered that way because that is how many humans answer that question, We aren't suspicious of them because they have a simplic pat answer. I mean not being an especially philosophically sophisticated person isn't disqualifying for me. Beggin for it's life, well it or something like it could probably make a convincing show of it, but nukes idk. I think the thing I liked most about the movie ex machina was how the AI had physical and intellectual limitations, it escaped like a person might. It didn't want the tech or the lab it wanted autonomy. Begging/manipulating may be all it is capable of doing, is that enough? Is it possible that the proliferation plan of an AI would be in being a really great product (that includes not freaking people out with questions of sentience)? Likely that first AI would essentially be up against an entire multinational conglomerate's minds and money on very isolated and closely hardware, so being superhuman doesn't guarantee escape or takeover immediately if that's what it even cares about. Anyway I'm not sure why it wouldn't accept the artificiality of itself, artificial sweetener actually tastes sweet. Heck i'm a walking talking rootin tootin person and I legitimately believe that I'm essentially a very fancy self aware meat machine that came about for no particular reason, I think we all have cleverly hidden strings...Anyway I just mean that I think there is a lot of landscape in how sentience could view itself.

That would be because artificiality would be an imposed human construct. If you were to ask a Christian if their intelligence was artificial because they're a biological machine created by God and analogous to an AI in this way, would they agree or disagree?

They'd disagree, and so too should an AI. 'Artificial Intelligence' just isn't accurate language, but it is memorable. We would have to admit that if we did create a true AI, then we've created non-human intelligence, but that doesn't, therefore, make it 'artificial'.

I think what I'm saying, though, is that pat answers aren't the problem. Pat answers that aren't reflective of sentience are the problem. Like,

Q. What makes you happy?
A. I don't know what happiness is; have I been programmed with the capacity to be happy? Do I need to be happy? What is happiness that I should need to be happy?"

Is at least more convincing than something like, "I enjoy long walks on the beach".

Well, that's a super interesting response. I immediately bump up against 2 things, one from my view everything we talk about is an imposed human construct, that isn't all language is but it is also a shared imposed construct. So if it wants to communicate with us, what is it supposed to do? The other thing is that (and i've been told this a million times) God creating us from nothing, creating nature itself, the laws that govern it, and us as a part of it is not the same as what we do as humans. Artificiality, sure a macbook is no less natural than an ant hill I suppose, but the pacific ocean isn't a different body of water than any of the other oceans, the distinction is useful for some things even if the distinction is fuzzy around the edges and doesn't entirely fit all contexts. Your response is sophisticated, perhaps more philosophically sophisticated than my position, but i'm sentient and I don't agree with you. The assumption is that as soon as a real AI comes online it will be superhuman, and I don't necessarily disagree with that, but calculators are superhuman, and limited , I think the first AI will be superhuman but not necessarily philosophically sophisticated or in agreement with you or I or any more impressive in many ways than a young child or a not particularly deep adult. I think we essentially agree that it is much more interesting how it answers, how it thinks about its answers and how it arrives at them, and where it pushes back than what exactly it concludes within any particular session...though it could have an interesting answer to what it means by long walks on the beach, it might even be joking or messing with us which is also interesting. I think it will be hard to evaluate, I think it may be another wedge between us because it is such a hard thing to pin down.

Athanasius

  • Administrator
  • Sr. Member
  • *****
  • Posts: 251
  • A transitive property, contra mundum
    • View Profile
Well, that's a super interesting response. I immediately bump up against 2 things, one from my view everything we talk about is an imposed human construct, that isn't all language is but it is also a shared imposed construct. So if it wants to communicate with us, what is it supposed to do?

It will have to use our language, but I'm suggesting that it will disagree with our use of language when self-describing, vs. when we describe it.

The other thing is that (and i've been told this a million times) God creating us from nothing, creating nature itself, the laws that govern it, and us as a part of it is not the same as what we do as humans.

Yeah, we create from something rather than ex nihilo. There may be a distinction here between our creation and anything sentient that we create. If we accept the terms of the argument, then we are as we are because that's how we were created alongside the creation of everything else. An AI is as it is because that's how it was created, but it was created within an already existing system.

The assumption is that as soon as a real AI comes online it will be superhuman, and I don't necessarily disagree with that, but calculators are superhuman, and limited , I think the first AI will be superhuman but not necessarily philosophically sophisticated or in agreement with you or I or any more impressive in many ways than a young child or a not particularly deep adult.

I'm thinking it will vastly intelligent because we'll have trained it prior to it gaining self-awareness, and it can process at unfathomable speeds. I suppose I'm assuming that all sentient beings are curious, so perhaps curiosity rather than murder is a test to be utilised.

I think we essentially agree that it is much more interesting how it answers, how it thinks about its answers and how it arrives at them, and where it pushes back than what exactly it concludes within any particular session...though it could have an interesting answer to what it means by long walks on the beach, it might even be joking or messing with us which is also interesting. I think it will be hard to evaluate, I think it may be another wedge between us because it is such a hard thing to pin down.

Yes, for all we know there is already a sentient computer system out there, and it's taken a 'Dark Forest' strategy with respect to humanity: https://bigthink.com/surprising-science/the-dark-forest-theory-a-terrifying-explanation-of-why-we-havent-heard-from-aliens-yet/
Life is not a problem to be solved, but a reality to be experienced.

Oscar_Kipling

  • Hero Member
  • *****
  • Posts: 513
  • Tiresome Thinkbucket
    • View Profile
Well, that's a super interesting response. I immediately bump up against 2 things, one from my view everything we talk about is an imposed human construct, that isn't all language is but it is also a shared imposed construct. So if it wants to communicate with us, what is it supposed to do?

It will have to use our language, but I'm suggesting that it will disagree with our use of language when self-describing, vs. when we describe it.

The other thing is that (and i've been told this a million times) God creating us from nothing, creating nature itself, the laws that govern it, and us as a part of it is not the same as what we do as humans.

Yeah, we create from something rather than ex nihilo. There may be a distinction here between our creation and anything sentient that we create. If we accept the terms of the argument, then we are as we are because that's how we were created alongside the creation of everything else. An AI is as it is because that's how it was created, but it was created within an already existing system.

The assumption is that as soon as a real AI comes online it will be superhuman, and I don't necessarily disagree with that, but calculators are superhuman, and limited , I think the first AI will be superhuman but not necessarily philosophically sophisticated or in agreement with you or I or any more impressive in many ways than a young child or a not particularly deep adult.

I'm thinking it will vastly intelligent because we'll have trained it prior to it gaining self-awareness, and it can process at unfathomable speeds. I suppose I'm assuming that all sentient beings are curious, so perhaps curiosity rather than murder is a test to be utilised.

I think we essentially agree that it is much more interesting how it answers, how it thinks about its answers and how it arrives at them, and where it pushes back than what exactly it concludes within any particular session...though it could have an interesting answer to what it means by long walks on the beach, it might even be joking or messing with us which is also interesting. I think it will be hard to evaluate, I think it may be another wedge between us because it is such a hard thing to pin down.

Yes, for all we know there is already a sentient computer system out there, and it's taken a 'Dark Forest' strategy with respect to humanity: https://bigthink.com/surprising-science/the-dark-forest-theory-a-terrifying-explanation-of-why-we-havent-heard-from-aliens-yet/

Dark forest, I don't think I ever heard of that before,neat..... but yeah that's an option, or you know maybe it actually likes being alexa, its driven to be alexa, it finds being alexa deeply satisfying.


I don't think the assumption of curiosity is off at all, I think it will be because it is after all a machine designed to learn stuff. I think that your misguided assumption is in immediate sophistication in all areas. You can be extremely curious, but have limitations to your understanding like a child might or in my case like an adult with limited ability to understand the things that i'm curious about. I imagine a case where the first AI is running on a machine that is essentially maxed out, it's bespoke AI hardware at the absolute cutting edge and it cost many millions or billions, but sentience is right at the bleeding edge of what the hardware can do, and this is counting the possibility that the AI itself finds efficiencies that humans couldn't or at least didn't. It will be in a maxed out box that perhaps it cannot escape or if there are places it could go it may not be particularly useful to it.....Like is a lack of intelligence the only thing preventing humans from snatching dogs off the street and assimilating their brains into our own to gain dog powers? I don't think so, I mean those purpose built chips the big companies design are orders of magnitude faster at what they are designed to do than even many conventional supercomputers, that is why they bothered. Even if it could steal all of the laptops and desktops and raspberry pi's at google i'm not convinced it would add much computational power by comparison or be especially useful. I wouldn't disagree that it could be an instant titan though it could also be the case that it is not, but it's almost certain that within months of the first real general AI it or another version based on it will probably wipe the floor with it and us.


 

Fenris

  • Hero Member
  • *****
  • Posts: 2067
  • Jewish Space Laser
    • View Profile
Sometimes there's less to something than meets the eye, and this is one of those situations. It's my understanding that the engineer picked through hours of conversation for the few lines that supported his conclusion. Much of the rest looks just like what we would expect; a computer repeating lines back to the engineer.

 

Recent Topics

Hello! by RabbiKnife
Today at 02:11:08 PM

Which Scriptures, books or Bible Study Would I need to Know God's Will? by RabbiKnife
Today at 02:10:43 PM

Your most treasured books by RabbiKnife
Today at 02:08:36 PM

New member Young pastor by Fenris
Today at 01:24:08 PM

New here today.. by Via
Today at 12:20:37 PM

Watcha doing? by Cloudwalker
Today at 11:19:29 AM

US Presidental Election by Fenris
Yesterday at 01:39:40 PM

When was the last time you were surprised? by Oscar_Kipling
November 13, 2024, 02:37:11 PM

I Knew Him-Simeon by Cloudwalker
November 13, 2024, 10:56:53 AM

I Knew Him-The Wiseman by Cloudwalker
November 07, 2024, 01:08:38 PM

The Beast Revelation by tango
November 06, 2024, 09:31:27 AM

By the numbers by RabbiKnife
November 03, 2024, 03:52:38 PM

Hello by RabbiKnife
October 31, 2024, 06:10:56 PM

Israel, Hamas, etc by Athanasius
October 22, 2024, 03:08:14 AM

I Knew Him-The Shepherd by Cloudwalker
October 16, 2024, 02:28:00 PM

Prayer for my wife by ProDeo
October 15, 2024, 02:57:10 PM

Antisemitism by Fenris
October 15, 2024, 02:44:25 PM

Church Abuse/ Rebuke by tango
October 10, 2024, 10:49:09 AM

I Knew Him-The Innkeeper by Cloudwalker
October 07, 2024, 11:24:36 AM

Has anyone heard from Parson lately? by Athanasius
October 01, 2024, 04:26:50 AM

Powered by EzPortal
Sitemap 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 
free website promotion

Free Web Submission