
Key questions about artificial sentience (philosophy) + Electronic skin for robots

C C
Key questions about artificial sentience: an opinionated guide
https://experiencemachines.substack.com/...tience?s=r

EXCERPTS: Are today's ML systems already sentient? Most experts seem to think “probably not”, and it doesn’t seem like there’s currently a strong argument that today’s large ML systems are conscious.1

But AI systems are getting more complex and more capable with every passing week. And we understand sufficiently little about consciousness that we face huge uncertainty about whether, when, and why AI systems will have the capacity to have conscious experiences, including especially significant experiences like suffering or pleasure. We have a poor understanding of what possible AI experiences could be like, and how they would compare to human experiences.

[...] There could be very, very many sentient artificial beings. Jamie Harris (2021) argues that “the number of [artificially sentient] beings could be vast, perhaps many trillions of human-equivalent lives on Earth and presumably even more lives if we colonize space or less complex and energy-intensive artificial minds are created.” There’s lots of uncertainty here: but given large numbers of future beings, and the possibility for intense suffering, the scale of AI suffering could dwarf the already mind-bogglingly large scale of animal suffering from factory farming.

It would be nice if we had a clear outline for how to avoid catastrophic scenarios from AI suffering, something like: here are our best computational theories of what it takes for a system, whether biological or artificial, to experience conscious pleasure or suffering, and here are the steps we can take to avoid engineering large-scale artificial suffering. Such a roadmap would help us prepare to wisely share the world with digital minds.

For example, you could imagine a consciousness researcher, standing up in front of a group of engineers at DeepMind or some other top AI lab, and giving a talk that aims to prevent them creating suffering AI systems. This talk might give the following recommendations:
  • Do not build an AI system that (a) is sufficiently agent-like and (b) has a global workspace and reinforcement learning signals that (c) are broadcast to that workspace and (d) play a certain computational role in shaping learning and goals and (e) are associated with avoidant and self-protective behaviors.

  • And here is, precisely, in architectural and computational terms, what it means for a system to satisfy conditions a-e—not just these vague English terms.

  • Here are the kinds of architectures, training environments, and learning processes that might give rise to such components.

  • Here are the behavioral 'red flags' of such components, and here are the interpretability methods that would help identify such components—all of which take into account the fact that AIs might have incentives to deceive us about such matters.
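The hypothetical conditions (a)–(e) above amount to a structured audit checklist. Purely as an illustrative sketch (the class and field names are my own, not from the post, and the post itself stresses that no one can yet define these conditions precisely), a machine-checkable version might look like:

```python
from dataclasses import dataclass, fields

@dataclass
class SufferingRiskAudit:
    """Illustrative stand-in for conditions (a)-(e) from the imagined talk.

    Every flag here is hypothetical: real auditing would require the
    precise architectural and computational definitions the post says
    we currently lack.
    """
    agent_like: bool                 # (a) sufficiently agent-like
    global_workspace: bool           # (b) has a global workspace
    rl_broadcast_to_workspace: bool  # (c) RL signals broadcast to it
    shapes_learning_and_goals: bool  # (d) signals shape learning and goals
    avoidant_behaviors: bool         # (e) linked to self-protective behavior

    def all_conditions_met(self) -> bool:
        # The talk warns against systems satisfying (a)-(e) jointly.
        return all(getattr(self, f.name) for f in fields(self))

audit = SufferingRiskAudit(True, True, True, False, True)
print(audit.all_conditions_met())  # False: condition (d) not satisfied
```

The point of the post is precisely that filling in such a checklist is not yet possible: the flags cannot currently be grounded in architecture-level definitions.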
So, why can't I go give that talk to DeepMind right now?

First, I’m not sure that components a-e are the right sufficient conditions for artificial suffering. I’m not sure if they fit with our best scientific understanding of suffering as it occurs in humans and animals. Moreover, even if I were sure that components a-e are on the right track, I don’t know how to specify them in a precise enough way that they could guide actual engineering, interpretability, or auditing efforts.

Furthermore, I would argue that no one, including AI and consciousness experts who are far smarter and more knowledgeable than I am, is currently in a position to give this talk—or something equivalently useful—at DeepMind.

What would we need to know in order for such a talk to be possible? (MORE - missing details)


Electronic skin anticipates & perceives touch from different directions for 1st time
https://www.tu-chemnitz.de/tu/pressestel...l/11215/en

RELEASE: A research team from Chemnitz and Dresden has taken a major step forward in the development of sensitive electronic skin (e-skin) with integrated artificial hairs. E-skins are flexible electronic systems that try to mimic the sensitivity of natural human skin. Applications range from skin replacement and medical sensors on the body to artificial skin for humanoid robots and androids. Tiny surface hairs on human skin can perceive and anticipate the slightest tactile sensation and even recognize the direction of touch. Modern electronic skin systems lack this capability and cannot gather this critical information about their vicinity.

A research team led by Prof. Dr. Oliver G. Schmidt, head of the Professorship of Material Systems for Nanoelectronics as well as Scientific Director of the Research Center for Materials, Architectures and Integration of Nanomembranes (MAIN) at Chemnitz University of Technology, has explored a new avenue to develop extremely sensitive and direction-dependent 3D magnetic field sensors that can be integrated into an e-skin system (active matrix). The team used a completely new approach for miniaturization and integration of 3D device arrays and made a major step towards mimicking the natural touch of human skin. The researchers have reported their results in the current issue of the journal Nature Communications.

Christian Becker, PhD student in Prof. Schmidt's research group at MAIN and first author of the study says: "Our approach allows a precise spatial arrangement of functional sensor elements in 3D that can be mass-produced in a parallel manufacturing process. Such sensor systems are extremely difficult to generate by established microelectronic fabrication methods."

New approach: Elegant origami technology integrates 3D sensors with microelectronic circuitry. The core of the sensor system presented by the research team is a so-called anisotropic magnetoresistance (AMR) sensor. An AMR sensor can be used to precisely determine changes in magnetic fields. AMR sensors are currently used, for example, as speed sensors in cars or to determine the position and angle of moving components in a variety of machines.

To develop the highly compact sensor system, the researchers took advantage of the so-called "micro-origami process." This process is used to fold AMR sensor components into three-dimensional architectures that can resolve the magnetic vector field in three dimensions. Micro-origami allows a large number of microelectronic components to fit into a small space and arranges them in a geometry not achievable by conventional microfabrication technologies. "Micro-origami processes were developed more than 20 years ago, and it is wonderful to see how the full potential of this elegant technology can now be exploited for novel microelectronic applications," says Prof. Oliver G. Schmidt.
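The payoff of folding sensors into three orthogonal planes can be sketched numerically: three single-axis readings combine into one magnetic field vector whose magnitude and direction are then recoverable. This is a generic illustration of 3D vector reconstruction, not the actual readout scheme of the published device:

```python
import math

def field_vector(bx: float, by: float, bz: float):
    """Combine three orthogonal single-axis readings (arbitrary units)
    into the magnitude and direction angles of the field vector."""
    magnitude = math.sqrt(bx**2 + by**2 + bz**2)
    azimuth = math.degrees(math.atan2(by, bx))       # angle within the skin plane
    polar = math.degrees(math.acos(bz / magnitude))  # tilt away from the surface normal
    return magnitude, azimuth, polar

mag, az, pol = field_vector(3.0, 4.0, 0.0)
print(round(mag, 1), round(az, 1), round(pol, 1))  # 5.0 53.1 90.0
```

A single planar sensor could only report one projection of the field; the folded, mutually orthogonal arrangement is what makes the full vector observable.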

The research team integrated the 3D micro-origami magnetic sensor array into a single active matrix, where each individual sensor can be conveniently addressed and read out by microelectronic circuitry. "The combination of active-matrix magnetic sensors with self-assembling micro-origami architectures is a completely new approach to miniaturize and integrate high-resolution 3D sensing systems," says Dr. Daniil Karnaushenko, who contributed decisively towards the concept, design and implementation of the project.

Tiny hairs anticipate and perceive direction of touch in real time. The research team has succeeded in integrating the 3D magnetic field sensors with magnetically rooted fine hairs into an artificial e-skin. The e-skin is made of an elastomeric material into which the electronics and sensors are embedded -- similar to organic skin, which is interlaced with nerves.

When the hair is touched and bends, the movement and exact position of the magnetic root can be detected by the underlying 3D magnetic sensors. The sensor matrix therefore not only registers the mere movement of the hair, but also determines the exact direction of the movement. As with real human skin, each hair on the e-skin becomes a full sensor unit that can perceive and detect changes in its vicinity. The real-time magneto-mechanical coupling between the 3D magnetic sensor and the magnetic hair root provides a new type of touch-sensitive perception for an e-skin system. This capability is of great importance when humans and robots work closely together: the robot can sense an interaction with a human companion in detail, just before an intended contact or an unintended collision takes place.
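A minimal sketch of the idea described above: when the hair bends, the magnetic root shifts, changing the in-plane field components at the sensor beneath it, from which a bending direction and strength can be inferred. This is an illustrative toy model with made-up units, not the signal processing used in the paper:

```python
import math

def touch_direction(bx: float, by: float,
                    rest_bx: float = 0.0, rest_by: float = 0.0):
    """Infer the bending direction of a magnetically rooted hair from the
    change in in-plane field components measured below it (toy model)."""
    dx, dy = bx - rest_bx, by - rest_by
    angle = math.degrees(math.atan2(dy, dx)) % 360.0  # 0 deg = +x axis
    strength = math.hypot(dx, dy)                     # proxy for deflection
    return angle, strength

angle, strength = touch_direction(0.0, 2.0)
print(round(angle, 1), round(strength, 1))  # 90.0 2.0
```

Because the deflection signal grows as the hair begins to bend, a reading like this can register an approaching contact before the skin surface itself is pressed, which is the "anticipation" the release describes.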