
Google’s Utilitarianism

#1
Secular Sanity
The Evolution of Consciousness

I loved this discussion.  If I’ve said it once, I’ve thought it a thousand times. I wish I had a crystal ball.  I always fret over the possibility of making poor decisions.  We’re notoriously bad at predicting the future.  

Jodi Halpern asked an interesting question during this talk.  If there were an electrode that could be implanted in your brain that had the capacity to make better decisions for you and provide the best possible outcome, so that you'd have a better life, would you do it?

Yuval Noah Harari pointed out something that I’ve never even thought about before.

"Humans are not very good at decision making, especially in the field of ethics, and the problem today is not so much that we lack values, but that we lack understanding of cause and effect.

In order to be really responsible, it is not enough to have values and responsibility.  You need to really understand the chain of cause and effect.  Now, our moral sense evolved when we were hunter-gatherers, and it was relatively easy to see the chain of cause and effect, but in many areas of the world today, it's just too complicated.

I think that one of the issues, and this comes back to the issue of self, is that over history, we’ve built up this view of life as this drama of decision making.  What is human life?  It is a drama of decision making. All of art, novels, plays, films, etc., boils down to this big moment of making a decision.  The same with religion.  If we make the wrong decision, we will burn in hell for eternity, and it’s the same with modern ideologies.  This is why AI is so frightening.  If we shift the authority to make decisions to AI, the AI votes, the AI chooses, and then what does this leave us with?  But maybe the mistake was in framing life as a drama of decision making. Maybe this is not what human life is about. Maybe it was a necessary part of human life for thousands of years, but it’s really not what human life should be about."


Here I am telling Syne that I want free will, which I'm not entirely sure that I even have, but would I be willing to give it up if artificial intelligence were able to provide me with the best possible outcome?  I don't know.  Would you?

If we shifted our decision making to AI, would that reduce our accountability?


https://www.youtube-nocookie.com/embed/XbOP0IKpsZ0
#2
Syne
Best possible outcome of an effective zombie. How do you express your self/identity/personality other than making decisions? You have unique interests and preferences that cannot be optimized without degrading your freedom of expression. Yes, it might be the best outcome, but you would lose accountability as you lose autonomy. It would be like a form of locked-in syndrome. You'd be passively watching life instead of living it. You would also lose all sense of accomplishment or pride, since your choices are not the ones achieving your results. Sounds akin to depression.

I know no one wants to hear it, but religion does have the benefit of helping to instill a strong sense of consequences. But the more available information, the more accurately any person can gauge consequences. But we cannot remove individual preference in decision making, or perhaps even personal biases, without harming our sense of ourselves and, perhaps, our will to live altogether.
#3
Secular Sanity
(Mar 2, 2018 12:48 AM)Syne Wrote: Best possible outcome of an effective zombie. How do you express your self/identity/personality other than making decisions? You have unique interests and preferences that cannot be optimized without degrading your freedom of expression. Yes, it might be the best outcome, but you would lose accountability as you lose autonomy. It would be like a form of locked-in syndrome. You'd be passively watching life instead of living it. You would also lose all sense of accomplishment or pride, since your choices are not the ones achieving your results. Sounds akin to depression.

I know no one wants to hear it, but religion does have the benefit of helping to instill a strong sense of consequences. But the more available information, the more accurately any person can gauge consequences. But we cannot remove individual preference in decision making, or perhaps even personal biases, without harming our sense of ourselves and, perhaps, our will to live altogether.

What if it allowed us more time to connect, and thus enabled us to live better together?

Do you ever worry about our competitive nature?  I think that individualistic systems tend to erode interpersonal relationships through competition, making us excessively critical.

Alain de Botton said that failure is becoming someone who needs others to fail—epicaricacy does exist.  He said that half the fear of failure is fear of the judgment of false friends we feel compelled to impress but don't even like.  I tend to agree.  In his book on status anxiety, he said that we care about our status for a simple reason: most people tend to be nice to us according to the amount of status we have.  Emotional rewards have been pegged to the acquisition of material things.  Meritocracies imply that the rewards go to those who merit them—the hardworking and clever among us.  If you really believe in a society where those at the top deserve to get there, that has to mean that those at the bottom deserve to be there, too.  He said that in medieval England people used to call the poor "unfortunates."  Nowadays, especially in the United States, they call them, rather tellingly, "losers."  We scarcely believe in luck anymore.  It may explain where you end up, but nobody will believe you.

You know, there are cultural differences in decision making.  
Cross-cultural differences in decision making (wikipedia.org)

"Individuals from individualist cultures tend to have independent self-construal and thus experience happiness as a socially disengaging emotion (e.g., pride), and those from collectivist cultures tend to have interdependent self-construal and experience happiness as a socially engaging emotion (e.g., peace and harmony). The former are more likely to make decisions to fulfill personal accomplishment, whereas the latter are more likely to make decisions that promote social connectedness. This is reflected in their differences in teamwork styles. A group composed of members with low independent self-construal prefers the cooperative strategy to the competitive one, whereas a group composed of members with high independent self-construal prefers the competitive strategy to the cooperative one.

Individuals from different cultures tend to have different views of the self, which affects individuals' cognition, goals in social interactions, and consequently influences their behavior and goals in decision making.

The literature on automatic cognition suggests that behavior is shaped by exposure to elements of the social world in a way that occurs below awareness or intention. We learn the stereotyped attitudes which later influence our decisions from the shared schematic representations in a certain culture. When an individual is primed with a concept, often by an implicit instruction to think about it, all the aspects of relevant information become activated and influence decision-making.

Day after day members of a cultural group are primed with a set of beliefs, attitudes, and behavioral patterns, which contributes to the building-up and storing of the cultural knowledge. Thus cultural knowledge is very accessible, even under high cognitive work load. Personal knowledge is a recording of a single experience and doesn't undergo so many repetitions. That's why it takes a deliberate attempt to access it, which requires more time and effort.”

Do you think that your belief in free will helps facilitate your efforts to achieve goals?  What if AI could provide you with a better outcome?  Why couldn't we be like other cultures and experience success and happiness as a socially engaging process rather than a disengaging emotion, such as pride?
#4
Syne
(Mar 2, 2018 04:12 AM)Secular Sanity Wrote: What if it allowed us more time to connect, and thus enabled us to live better together?

Do you ever worry about our competitive nature?  I think that individualistic systems tend to erode interpersonal relationships through competition, making us excessively critical.

Alain de Botton said that failure is becoming someone who needs others to fail—epicaricacy does exist.  He said that half the fear of failure is fear of the judgment of false friends we feel compelled to impress but don't even like.  I tend to agree.  In his book on status anxiety, he said that we care about our status for a simple reason: most people tend to be nice to us according to the amount of status we have.  Emotional rewards have been pegged to the acquisition of material things.  Meritocracies imply that the rewards go to those who merit them—the hardworking and clever among us.  If you really believe in a society where those at the top deserve to get there, that has to mean that those at the bottom deserve to be there, too.  He said that in medieval England people used to call the poor "unfortunates."  Nowadays, especially in the United States, they call them, rather tellingly, "losers."  We scarcely believe in luck anymore.  It may explain where you end up, but nobody will believe you.

You know, there are cultural differences in decision making.  

Do you think that your belief in free will helps facilitate your efforts to achieve goals?  What if AI could provide you with a better outcome?  Why couldn't we be like other cultures and experience success and happiness as a socially engaging process rather than a disengaging emotion, such as pride?

What if those you desired to connect with weren't conducive to the best outcome? Like women who date guys they know will hurt them but wouldn't find safe guys attractive. Some personality types are not harmonious together, but they may be more passionate. Things might be calmer, but they would likely also be blander and greyer.

Are decisions inherently competitive? That seems like a stretch.

I would agree that people arbitrarily limit their own self-expression out of fear of what others may think. But that same sense of social approval also works to maintain cooperation. I believe in a society where only the individual's own beliefs hold them back. It's not that those who don't have don't deserve. It's that those who don't have don't truly believe they deserve. Just like people arbitrarily limit their own self-expression out of fear of social failure, they also limit their own success out of fear of material failure. Luck is very often an excuse used to justify fears of failure.

Collectivist cultures usually do so by repressing individuality. In Japan, for example, women cover their mouths when they laugh because it is seen as unseemly for a woman to open her mouth wide in public. That doesn't sound like a best outcome to me; people there are more worried about social judgment than they are in individualistic cultures.

Providing the outcome removes motivation. Guaranteed outcomes mean there's nothing to be gained by doing more, improving, learning, etc.
Pride is just the personal counterpart to morale. True morale requires pride. Collectivist cultures sacrifice pride for a superficial politeness that substitutes for morale.
#5
stryder
There are certainly layers to the discussion of whether such a cyborg symbiotic existence would actually be a good idea. While at face value it might concern what we regard as self, or how we can potentially be bettered, there will always be those who look for ways to turn those weaknesses to their advantage. Such a system could allow tracking, or manipulation of thought; at the very least, the logging of memories that would normally be cast aside as errant would now become irrefutable proof of our intentions. (Minority Report?)

The one good thing about keeping any such equipment external is the ability to cast it aside should it prove to cause more trouble than good; that, of course, wouldn't be an option if it's deeply embedded in your brain through surgery.

It could even lead to questions about existential reality: would we trust what we see/hear/feel, or would it all just be a very complex lie?
#6
Secular Sanity
(Mar 2, 2018 08:57 PM)stryder Wrote: There are certainly layers to the discussion of whether such a cyborg symbiotic existence would actually be a good idea. While at face value it might concern what we regard as self, or how we can potentially be bettered, there will always be those who look for ways to turn those weaknesses to their advantage. Such a system could allow tracking, or manipulation of thought; at the very least, the logging of memories that would normally be cast aside as errant would now become irrefutable proof of our intentions. (Minority Report?)

The one good thing about keeping any such equipment external is the ability to cast it aside should it prove to cause more trouble than good; that, of course, wouldn't be an option if it's deeply embedded in your brain through surgery.

It could even lead to questions about existential reality: would we trust what we see/hear/feel, or would it all just be a very complex lie?

We’re already living a lie, Stryder.  We have objective reality, subjective reality, and intersubjective reality—something that everyone just agrees on.  Money, for example, has no objective value.  Property?  Well, right now in South Africa activists want to take land from white farmers without any compensation.  Justice?  Freedom?  Well, that’s a whole other story.  You’re right, though.  The responsibility lies with the biologists, not the computer geeks.

(Mar 2, 2018 05:22 AM)Syne Wrote: What if those you desired to connect with weren't conducive to the best outcome? Like women who date guys they know will hurt them but wouldn't find safe guys attractive. Some personality types are not harmonious together, but they may be more passionate. Things might be calmer, but they would likely also be blander and greyer.

Are decisions inherently competitive? That seems like a stretch.

I would agree that people arbitrarily limit their own self-expression out of fear of what others may think. But that same sense of social approval also works to maintain cooperation. I believe in a society where only the individual's own beliefs hold them back. It's not that those who don't have don't deserve. It's that those who don't have don't truly believe they deserve. Just like people arbitrarily limit their own self-expression out of fear of social failure, they also limit their own success out of fear of material failure. Luck is very often an excuse used to justify fears of failure.

Collectivist cultures usually do so by repressing individuality. In Japan, for example, women cover their mouths when they laugh because it is seen as unseemly for a woman to open her mouth wide in public. That doesn't sound like a best outcome to me; people there are more worried about social judgment than they are in individualistic cultures.

Providing the outcome removes motivation. Guaranteed outcomes mean there's nothing to be gained by doing more, improving, learning, etc.

Pride is just the personal counterpart to morale. True morale requires pride. Collectivist cultures sacrifice pride for a superficial politeness that substitutes for morale.

I'm just playing devil's advocate, but we've discussed this before, and I know you feel that not everything can be determined by antecedent factors, but what if an AI could determine a better outcome?

I've argued that emotions and logic are intertwined in our decision-making process.  You, on the other hand, were in favor of not allowing our emotions to impede our analytical or critical thinking, but here you are with an appeal to emotion.  What gives?

Are you trying to say that our identities not only play a role in our decision making, but that our decision-making capacity creates our identities?

So, Mr. Logic wouldn’t listen to the most accurate, knowledgeable, all-knowing oracle, an AI deity?  The great rebellion against god, eh, Syne?

Syne Wrote: I know no one wants to hear it, but religion does have the benefit of helping to instill a strong sense of consequences.

Religion?  Let’s flip the script, shall we?  During some of our ethical debates, what do you think the first thing I reached for was, hmm? Nobody likes to be wrong when they have world knowledge at their fingertips. And when the woman saw that the tree was to be desired to make one wise, she took of the fruit thereof.  Steve Jobs said that the name sounded fun, spirited, and not intimidating.  The shifting of authority to algorithms is already happening.  Look at how much personal information we hand over now, and for what, a little ego boost?

Back to the garden; that’s what everyone wants, right? To walk in the garden with a god that knows you so well that he knows what buttons to push in order to make you follow his advice, as Yuval Noah Harari put it. The first god we created wasn’t able to control our reptilian brains but this one…well, it might.
#7
Syne
How would AI determining everything be different from a god determining everything? Wouldn't AI have less knowledge? I'm surprised you don't see the obvious parallel in hubris.

I know you respond more to appeals to emotion. And subjective things, like satisfaction, do engage emotion.
Who we are is what motivates our decisions, and that form of expression is what leads to a sense of satisfaction.

Yeah, like I said, the more information the better anyone's decisions. But that's a far cry from making no decisions at all. Not everyone is eager to sacrifice their privacy for social approval.
Only fools want to return to being apes in trees. Myths of the noble and peaceful savage.

You want to be deftly manipulated your whole life? Well, I guess you do already think you are a puppet to materialistic causes.

I just think all determinisms (AI, god, material) are essentially the same. A slave master. We cannot free those who do not know they are slaves.
#8
Secular Sanity
(Mar 3, 2018 04:29 AM)Syne Wrote: Yeah, like I said, the more information the better anyone's decisions. But that's a far cry from making no decisions at all. Not everyone is eager to sacrifice their privacy for social approval.

You want to be deftly manipulated your whole life? Well, I guess you do already think you are a puppet to materialistic causes.

I just think all determinisms (AI, god, material) are essentially the same. A slave master. We cannot free those who do not know they are slaves.

Paul Harvey, hah! Apparently, he didn't read the Bible.  "Do as you please?"  That's the Robin Thicke version. Baby, it's in your nature. Just let me liberate you. That's not what hooked Eve, silly boy.  Ye shall be as gods—that was the clincher.  The problem doesn't lie in our denial of god and his powers; it lies in our belief in one, worshiping one, and in our desire for godlike powers.

Do you know what they were discussing in the first video that I posted?  Should we allow AI to determine right from wrong? Should we endow it with the knowledge of good and evil?  They asked what an AI would have to be like in order to meet the requirements of moral agency.  It has to be vulnerable like us.  As of now, they're immortal, and making them face the finitude of life in the same way that we do is a tall order, but they have to have skin in the game.

Behold, the AI is become as one of us, to know good and evil: and now, lest he put forth his hand, and take also of the tree of life, and eat, and live forever.  Big Grin

Brains are competitive, winner-take-all systems.  What would you do in order to win?  What if it were possible to become an expert in a short period of time?  What if you could optimize your performance without drugs?  What if you could enhance recall, relieve stress and depression, and solve all of our problems associated with brain disorders and disease?  What if you could experience orgasmic ecstasy simply by stimulating the left and right hippocampus at 3 mA and 1 mA, or maybe have your little transcendent experience, hmm?  Would you do it?

It's only twelve minutes.

https://www.youtube-nocookie.com/embed/eD_rThwybfs

My book came.  I couldn't put it down.  I loved it!

Homo Deus by Yuval Noah Harari

"Once the biologists concluded that organisms are algorithms, they dismantled the wall between organic and inorganic, and shifted the authority from individual humans to networked algorithms. Some people are indeed horrified by this development, but the fact is that millions willingly embrace it. Already today many of us give up our privacy and our individuality by conducting much of our lives online, recording every action and becoming hysterical if the connection to the net is interrupted even for a few minutes.

What might replace desires and experiences as the source of all meaning and authority?  As of 2016, there is one candidate sitting in history’s reception room waiting for the job interview.  This candidate is information. The most interesting emerging religion is Dataism, which venerates neither gods nor man—it worships data."
#9
Syne
(Mar 3, 2018 02:24 PM)Secular Sanity Wrote: Paul Harvey, hah! Apparently, he didn't read the Bible.  "Do as you please?"  That's the Robin Thicke version. Baby, it's in your nature. Just let me liberate you. That's not what hooked Eve, silly boy.  Ye shall be as gods—that was the clincher.  The problem doesn't lie in our denial of god and his powers; it lies in our belief in one, worshiping one, and in our desire for godlike powers.
Must have hit some cognitive dissonance there, since I never claimed Eve's motive was otherwise, and it's brought on a non-sequitur railing against god.
Quote: Brains are competitive, winner-take-all systems.  What would you do in order to win?  What if it were possible to become an expert in a short period of time?  What if you could optimize your performance without drugs?  What if you could enhance recall, relieve stress and depression, and solve all of our problems associated with brain disorders and disease?  What if you could experience orgasmic ecstasy simply by stimulating the left and right hippocampus at 3 mA and 1 mA, or maybe have your little transcendent experience, hmm?  Would you do it?
No. It's not the destination, it's the journey...even in sex.
#10
Secular Sanity
Syne Wrote: No. It's not the destination, it's the journey.

I agree.  Mistakes get a bad rap.  

(Mar 4, 2018 12:10 AM)Syne Wrote: Must have hit some cognitive dissonance there, since I never claimed Eve's motive was otherwise, and it's brought on a non-sequitur railing against god.

Ah, I thought it was creative.  You’re such a stick in the mud.  

You throw that term around a lot (cognitive dissonance).  Where did you first hear it?  Do you remember?

Well, anyhow, it was a good book.  You should read it.  It was better than his last book.

