AI consciousness: scientists say we urgently need answers
https://www.nature.com/articles/d41586-023-04047-6
EXCERPTS: Could artificial intelligence (AI) systems become conscious? A coalition of consciousness scientists says that, at the moment, no one knows — and it is expressing concern about the lack of inquiry into the question.
In comments to the United Nations, members of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?
Such concerns have been mostly absent from recent discussions about AI safety...
[...] It is unknown to science whether there are, or will ever be, conscious AI systems. Even knowing whether one has been developed would be a challenge, because researchers have yet to create scientifically validated methods to assess consciousness in machines, Mason says. “Our uncertainty about AI consciousness is one of many things about AI that should worry us, given the pace of progress,” says Robert Long, a philosopher at the Center for AI Safety, a non-profit research organization in San Francisco, California.
Such concerns are no longer just science fiction...
[...] The resulting information gap is outlined in the AMCS’s submission to the UN High-Level Advisory Body on Artificial Intelligence, which launched in October and is scheduled to release a report in mid-2024 on how the world should govern AI technology....
[...] But humans should also consider the possible needs of conscious AI systems, the researchers say. Could such systems suffer? If we don’t recognize that an AI system has become conscious, we might inflict pain on a conscious entity, Long says: “We don’t really have a great track record of extending moral consideration to entities that don’t look and act like us.” Wrongly attributing consciousness would also be problematic, he says, because humans should not spend resources to protect systems that don’t need protection.
[...] Some of the questions raised by the AMCS to highlight the importance of the consciousness issue are legal: should a conscious AI system be held accountable for a deliberate act of wrongdoing? And should it be granted the same rights as people? The answers might require changes to regulations and laws, the coalition writes... (MORE - missing details)
Multiple ChatGPT instances combine to figure out chemistry
https://arstechnica.com/science/2023/12/...chemistry/
EXCERPTS: Despite rapid advances in artificial intelligence, AIs are nowhere close to being ready to replace humans for doing science. But that doesn't mean that they can't help automate some of the drudgery out of the daily grind of scientific experimentation. For example, a few years back, researchers put an AI in control of automated lab equipment and taught it to exhaustively catalog all the reactions that can occur among a set of starting materials.
While useful, that still required a lot of researcher intervention to train the system in the first place. A group at Carnegie Mellon University has now figured out how to get an AI system to teach itself to do chemistry. The system requires a set of three AI instances, each specialized for different operations. But once the system is set up and supplied with raw materials, you just have to tell it what type of reaction you want done, and it'll figure it out.
The researchers indicate that they were interested in understanding what capacities large language models (LLMs) can bring to the scientific endeavor. So all of the AI systems used in this work are LLMs, mostly GPT-3.5 and GPT-4, although some others—Claude 1.3 and Falcon-40B-Instruct—were tested as well. (GPT-4 and Claude 1.3 performed the best.) But rather than using a single system to handle all aspects of the chemistry, the researchers set up distinct instances to cooperate in a division-of-labor arrangement they called "Coscientist."
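To make the division-of-labor idea concrete, here is a minimal sketch of how three specialized LLM instances might hand work to one another. It is only an illustration under assumptions: the role prompts, the function names (`call_llm`, `run_coscientist_like`), and the exact three-step split are invented here, not taken from the paper.

```python
# Hypothetical sketch of a division-of-labor setup among LLM instances,
# loosely modeled on the Coscientist idea described above. None of these
# names come from the paper; call_llm is a stub to be replaced with a real
# client for GPT-4, Claude, etc.

PLANNER_PROMPT = "You plan chemical syntheses step by step using public information."
DOCS_PROMPT = "You read instrument manuals and report the commands needed for a plan."
CODER_PROMPT = "You write code that drives the lab hardware to carry out a plan."


def call_llm(role_prompt: str, task: str) -> str:
    """Stand-in for a call to an LLM instance; wire up a real API client here."""
    raise NotImplementedError


def run_coscientist_like(goal: str) -> str:
    # 1. One instance turns the stated goal into a synthesis plan.
    plan = call_llm(PLANNER_PROMPT, f"Plan how to perform: {goal}")

    # 2. A second instance pulls the relevant hardware commands from the manuals.
    hardware_notes = call_llm(DOCS_PROMPT, f"List the commands needed for: {plan}")

    # 3. A third instance combines plan and hardware notes into control code.
    return call_llm(
        CODER_PROMPT,
        f"Write control code.\nPlan: {plan}\nHardware notes: {hardware_notes}",
    )
```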
[...] The researchers conclude that Coscientist has several notable capabilities:
- Planning chemical synthesis using public information
- Navigating and processing technical manuals for complicated hardware
- Using that knowledge to control a range of laboratory equipment
- Integrating these hardware-handling capabilities into a lab workflow
- Analyzing its own reactions and using that information to design improved reaction conditions (see the sketch below).
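That last capability amounts to a closed feedback loop: run a reaction, measure how it went, and let the model propose improved conditions for the next attempt. A rough, hypothetical sketch of such a loop follows; the helpers `run_reaction` and `propose_conditions` are placeholders, not the authors' actual code or optimization setup.

```python
# Hypothetical sketch of the self-improvement loop from the last bullet point.
# run_reaction and propose_conditions stand in for the real hardware run and
# the LLM call; the paper's actual approach may differ.

def run_reaction(conditions: dict) -> float:
    """Stand-in for running the reaction on automated equipment; returns a yield."""
    raise NotImplementedError


def propose_conditions(history: list[tuple[dict, float]]) -> dict:
    """Stand-in for asking the LLM for better conditions given past results."""
    raise NotImplementedError


def optimize_reaction(initial: dict, rounds: int = 5) -> dict:
    history: list[tuple[dict, float]] = []
    conditions = initial
    for _ in range(rounds):
        outcome = run_reaction(conditions)        # execute and analyze the reaction
        history.append((conditions, outcome))     # record what was tried and how it went
        conditions = propose_conditions(history)  # let the model design improved conditions
    # return the best-performing conditions observed so far
    return max(history, key=lambda pair: pair[1])[0]
```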