Value pluralism -- the alt theory to both moral relativism & absolutism

#1
C C
https://www.nytimes.com/interactive/2022...rview.html

INTRO: Interview with computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur “genius” grant, who has been doing groundbreaking research on developing common sense and ethical reasoning in A.I.

EXCERPTS: [...] So what’s most exciting to you right now about your work in A.I.?

I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what?

Yeah, this is something I don’t understand. How could you possibly teach A.I. to make moral decisions when almost every rule or truth has exceptions?

A.I. should learn exactly that: there are cases that are more clean-cut, and then there are cases that are more discretionary. It should learn uncertainty and the distribution of opinions. Let me ease your discomfort here a little by making a case through the language model and A.I. The way to train A.I. there is to ask: “Given a past context, which word comes next?” There’s no one universal truth about which word comes next. Sometimes there is only one word that could possibly come, but almost always there are multiple words. There’s this uncertainty, and yet that training turns out to be powerful, because when you look at things more globally, A.I. does learn through statistical distribution the best word to use, the distribution of the reasonable words that could come next. I think moral decision-making can be done like that as well. Instead of making binary, clean-cut decisions, it should sometimes make decisions based on “This looks really bad.” Or you have your position, but it understands that, well, half the country thinks otherwise.
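A minimal sketch of the distributional idea Choi describes, assuming toy scores pushed through a softmax; the words, moral labels, and numbers below are invented for illustration and are not from Delphi or the interview:

    import math

    def softmax(scores):
        # Convert raw scores into a probability distribution over the options.
        m = max(scores.values())
        exps = {k: math.exp(v - m) for k, v in scores.items()}
        total = sum(exps.values())
        return {k: round(v / total, 3) for k, v in exps.items()}

    # Hypothetical next-word scores given some past context: there is no one
    # "true" next word, only a distribution of reasonable continuations.
    print(softmax({"open": 2.0, "closed": 1.7, "ajar": 0.3}))

    # The same machinery applied to a moral prompt: report how opinion
    # distributes across judgments instead of forcing a binary verdict.
    print(softmax({"it's okay": 0.5, "it depends": 1.4, "it's wrong": 0.2}))

The point is only that the same training signal that yields a distribution over next words could, in principle, yield a distribution over moral judgments rather than a single verdict.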

Is the ultimate hope that A.I. could someday make ethical decisions that might be sort of neutral or even contrary to its designers’ potentially unethical goals — like an A.I. designed for use by social media companies that could decide not to exploit children’s privacy? Or is there just always going to be some person or private interest on the back end tipping the ethical-value scale?

The former is what we wish to aspire to achieve. The latter is what actually, inevitably happens. In fact, Delphi is left-leaning in this regard, because many of the crowd workers who do annotation for us are a little bit left-leaning. Both the left and right can be unhappy about this, because for people on the left Delphi is not left enough, and for people on the right it’s potentially not inclusive enough. But Delphi was just a first shot. There’s a lot of work to be done, and I believe that if we can somehow solve value pluralism for A.I., that would be really exciting. To have A.I. values not be one systematic thing but rather something that has multiple dimensions, just like a group of humans.

What would it look like to “solve” value pluralism?

I am thinking about that these days, and I don’t have clear-cut answers. I don’t know what “solving” should look like, but what I mean to say for the purpose of this conversation is that A.I. should respect value pluralism and the diversity of people’s values, as opposed to enforcing some normalized moral framework onto everybody... (MORE - missing details)