Mar 1, 2018 11:27 PM
The Evolution of Consciousness
I loved this discussion. If I’ve said it once, I’ve thought it a thousand times. I wish I had a crystal ball. I always fret over the possibility of making poor decisions. We’re notoriously bad at predicting the future.
Jodi Halpern asked an interesting question during this talk. If there were an electrode that could be implanted in your brain with the capacity to make better decisions for you and provide the best possible outcome, so that you'd have a better life, would you do it?
Yuval Noah Harari pointed out something I'd never even thought about before.
"Humans are not very good at decision making, especially in the field of ethics, and the problem today is not so much that we lack values, but that we lack understanding of cause and effect.
In order to be really responsible, it is not enough to have values and responsibility. You need to really understand the chain of cause and effect. Now, our moral sense evolved when we were hunter-gatherers, and it was relatively easy to see the chain of cause and effect, but in many areas in the world today, it's just too complicated.
I think that one of the issues, and this comes back to the issue of self, is that over history, we’ve built up this view of life as this drama of decision making. What is human life? It is a drama of decision making. All of art, novels, plays, films, etc., boils down to this big moment of making a decision. The same with religion. If we make the wrong decision, we will burn in hell for eternity, and it’s the same with modern ideologies. This is why AI is so frightening. If we shift the authority to make decisions to AI, the AI votes, the AI chooses, and then what does this leave us with? But maybe the mistake was in framing life as a drama of decision making. Maybe this is not what human life is about. Maybe it was a necessary part of human life for thousands of years, but it’s really not what human life should be about."
Here I am telling Syne that I want free will, which I'm not entirely sure I even have, but would I be willing to give it up if artificial intelligence were able to provide me with the best possible outcome? I don't know. Would you?
If we shifted our decision making to AI, would that reduce our accountability?