Value pluralism -- the alt theory to both moral relativism & absolutism

#1
C C
https://www.nytimes.com/interactive/2022...rview.html

INTRO: Interview with computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur “genius” grant who has been doing groundbreaking research on developing common sense and ethical reasoning in A.I.

EXCERPTS: [...] So what’s most exciting to you right now about your work in A.I.?

I’m excited about value pluralism, the fact that value is not singular. Another way to put it is that there’s no universal truth. A lot of people feel uncomfortable about this. As scientists, we’re trained to be very precise and strive for one truth. Now I’m thinking, well, there’s no universal truth — can birds fly or not? Or social and cultural norms: Is it OK to leave a closet door open? Some tidy person might think, always close it. I’m not tidy, so I might keep it open. But if the closet is temperature-controlled for some reason, then I will keep it closed; if the closet is in someone else’s house, I’ll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus in my context, that truth will have to be bent. Moral rules: There must be some moral truth, you know? Don’t kill people, for example. But what if it’s a mercy killing? Then what?

Yeah, this is something I don’t understand. How could you possibly teach A.I. to make moral decisions when almost every rule or truth has exceptions?

A.I. should learn exactly that: There are cases that are more clean-cut, and then there are cases that are more discretionary. It should learn uncertainty and distribution of opinions. Let me ease your discomfort here a little by making a case through the language model and A.I. The way to train A.I. there is next-word prediction: given a past context, which word comes next? There’s no one universal truth about which word comes next. Sometimes there is only one word that could possibly come, but almost always there are multiple words. There’s this uncertainty, and yet that training turns out to be powerful because when you look at things more globally, A.I. does learn through statistical distribution the best word to use, the distribution of the reasonable words that could come next. I think moral decision-making can be done like that as well. Instead of making binary, clean-cut decisions, it should sometimes make decisions based on “This looks really bad.” Or you have your position, but it understands that, well, half the country thinks otherwise.
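The next-word-prediction setup Choi describes can be illustrated with a toy sketch (this is an invented example, not her actual system): a tiny bigram model that keeps a *distribution* over possible next words rather than committing to a single "correct" answer, and can sample from that spread of opinions.

```python
import random
from collections import Counter, defaultdict

# Toy corpus echoing the interview's examples (birds flying, closet doors).
corpus = (
    "birds can fly . birds can swim . birds can not fly . "
    "close the door . leave the door open ."
).split()

# Count, for each word, how often each word follows it.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def next_word_distribution(word):
    """Return {next_word: probability} -- the model's spread of 'opinions'."""
    counts = next_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_next(word, rng=random):
    """Sample a next word from the distribution instead of picking one truth."""
    dist = next_word_distribution(word)
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs, k=1)[0]

# After "can", the model holds several reasonable continuations at once.
print(next_word_distribution("can"))
```

The point of the sketch is the return type: a distribution, not a verdict. The same shape could in principle carry a distribution of moral judgments ("83% say this looks bad, 17% disagree") rather than a binary ruling.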

Is the ultimate hope that A.I. could someday make ethical decisions that might be sort of neutral or even contrary to its designers’ potentially unethical goals — like an A.I. designed for use by social media companies that could decide not to exploit children’s privacy? Or is there just always going to be some person or private interest on the back end tipping the ethical-value scale?

The former is what we wish to aspire to achieve. The latter is what actually inevitably happens. In fact, Delphi is left-leaning in this regard because many of the crowd workers who do annotation for us are a little bit left-leaning. Both the left and right can be unhappy about this, because for people on the left Delphi is not left enough, and for people on the right it’s potentially not inclusive enough. But Delphi was just a first shot. There’s a lot of work to be done, and I believe that if we can somehow solve value pluralism for A.I., that would be really exciting. To have A.I. values not be one systematic thing but rather something that has multiple dimensions, just like a group of humans.

What would it look like to “solve” value pluralism?

I am thinking about that these days, and I don’t have clear-cut answers. I don’t know what “solving” should look like, but what I mean to say for the purpose of this conversation is that A.I. should respect value pluralism and the diversity of people’s values, as opposed to enforcing some normalized moral framework onto everybody... (MORE - missing details)

