
https://www.wsj.com/tech/ai/a-powerful-a..._permalink
alternative source (MSN): https://www.msn.com/en-us/health/other/a...r-AA1ub20X
EXCERPT: Humans have always acted on the conviction that the universe is full of underlying order—even if they debated whether the source of that order was divine. Modern AI is in a sense yet another validation of the idea that every scientist since Copernicus really was onto something.
Modern AI has long been good at recognizing patterns in information. But previous approaches put serious limits on what more it could do. With language, for example, most AI systems could process words only one at a time, evaluating them strictly in the order they were read, which limited their ability to understand what those words meant.
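To make that limitation concrete, here is a minimal sketch of the older, strictly sequential style of reading (a toy recurrent loop with random weights, not any production system): every word gets folded into one fixed-size running state, so by the time "river" arrives, whatever was needed to disambiguate "bank" may already have been squeezed out.

```python
# Toy sketch of pre-transformer sequential reading (an RNN-style loop).
# Weights and embeddings are random; the point is the strict left-to-right
# pass that crushes the whole sentence into a single fixed-size state.
import numpy as np

def recurrent_read(tokens, dim=8):
    rng = np.random.default_rng(0)
    W_h = 0.1 * rng.normal(size=(dim, dim))        # toy recurrence weights
    W_x = 0.1 * rng.normal(size=(dim, dim))        # toy input weights
    embed = {t: rng.normal(size=dim) for t in set(tokens)}
    h = np.zeros(dim)
    for t in tokens:                               # one word at a time, in order
        h = np.tanh(W_h @ h + W_x @ embed[t])      # all context folded into h
    return h                                       # one vector for the whole sentence

sentence = "I arrived at the bank after crossing the river".split()
print(recurrent_read(sentence).shape)              # (8,)
```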
The Google researchers who wrote that seminal 2017 paper, which introduced the transformer architecture, were focused on translating between languages. They realized that an AI system that could digest all the words in a piece of writing and put more weight on the meanings of some words than others (in other words, read in context) could produce much better translations.
For example, in the sentence “I arrived at the bank after crossing the river,” a transformer-based AI that knows the sentence ends in “river” instead of “road” can translate “bank” as a stretch of land, not a place to put your money.
In other words, transformers work by figuring out how every single piece of information the system takes in relates to every other piece of information it’s been fed, says Tim Dettmers, an AI research scientist at the nonprofit Allen Institute for Artificial Intelligence.
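A minimal sketch of what Dettmers is describing: the self-attention step at the heart of the transformer. This is a simplification (real transformers first project the inputs into separate query, key, and value matrices learned during training, and stack many such layers across multiple heads); the toy version below only shows each token's vector being rebuilt as a weighted mix of every token's vector.

```python
# Minimal self-attention sketch (simplified: queries, keys, and values
# are all the raw token vectors, rather than learned projections).
import numpy as np

def self_attention(X):
    """X: (num_tokens, dim) array of token vectors."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                  # every token scored against every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax: scores -> mixing weights
    return weights @ X                             # each output is a context-weighted blend

tokens = "I arrived at the bank after crossing the river".split()
rng = np.random.default_rng(0)
X = rng.normal(size=(len(tokens), 8))              # toy embeddings
out = self_attention(X)
print(out.shape)                                   # (9, 8): one context-aware vector per token
```

With trained weights, the row of mixing weights for "bank" would put real mass on "river", which is exactly the disambiguation in the translation example above.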
That level of contextual understanding enables transformer-based AI systems not only to recognize patterns but also to predict what could plausibly come next, and thus to generate new information of their own. And that ability can extend to data other than words.
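The "predict what could plausibly come next" step, sketched as a toy autoregressive loop. The `next_token_probs` function here is a hypothetical stand-in for a trained transformer (it returns arbitrary scores); the structure to notice is that each new token is chosen conditioned on everything generated so far, then appended and fed back in.

```python
# Toy autoregressive generation loop. `next_token_probs` is a hypothetical
# stand-in for a trained model; real systems also sample from the
# distribution rather than always taking the single most likely token.
import numpy as np

VOCAB = ["I", "arrived", "at", "the", "bank", "after", "crossing", "river", "."]

def next_token_probs(context):
    """Stand-in scorer: a real model would compute these from `context`."""
    rng = np.random.default_rng(len(context))      # arbitrary but deterministic
    logits = rng.normal(size=len(VOCAB))
    e = np.exp(logits - logits.max())
    return e / e.sum()                             # softmax over the vocabulary

def generate(prompt, steps=4):
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_probs(tokens)           # condition on the full context so far
        tokens.append(VOCAB[int(np.argmax(probs))])  # append the most plausible next token
    return " ".join(tokens)

print(generate(["I", "arrived"]))
```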
“In a sense, the models are discovering the latent structure of the data,” says Alexander Rives, chief scientist of EvolutionaryScale, which he co-founded last year after working on AI for Meta Platforms, the parent company of Facebook.
EvolutionaryScale is training its AI on the published sequences of every protein the company's researchers can get their hands on, along with everything we know about them. Using that data, and with no assistance from human engineers, the company's AI is able to work out the relationship between a given sequence of molecular building blocks and how the resulting protein functions in the world.
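A hedged illustration of the kind of training signal behind Rives's remark: protein language models in the ESM line are trained by masking residues in a sequence and predicting them from the surrounding context, and the sketch below shows that objective in toy form. (`toy_scores` is a hypothetical stand-in, not EvolutionaryScale's actual code.)

```python
# Toy masked-residue prediction, the masked-language-model objective used
# by protein language models such as ESM (a general description of the
# approach, not EvolutionaryScale's code). `toy_scores` is hypothetical.
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"               # the 20 standard residues

def toy_scores(context):
    """Stand-in: a trained model would score residues from `context`."""
    rng = np.random.default_rng(abs(hash(context)) % (2**32))
    return rng.normal(size=len(AMINO_ACIDS))

sequence = "MSKGEELFTG"                            # short toy fragment
mask_pos = 4
masked = sequence[:mask_pos] + "_" + sequence[mask_pos + 1:]
pred = AMINO_ACIDS[int(np.argmax(toy_scores(masked)))]
print(f"{masked} -> predicted residue at position {mask_pos}: {pred}")
# Trained at scale, getting such predictions right forces a model to
# internalize the statistics that link sequence to structure and function.
```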
Earlier research in this area, focused more on the structure of proteins than on their function, is the reason Google AI chief Demis Hassabis shared the 2024 Nobel Prize in chemistry. The system he and his team developed, called AlphaFold, is also based on transformers.
Already, EvolutionaryScale has created one proof-of-concept molecule: a protein that functions like the one that makes jellyfish light up, but whose AI-invented sequence is radically different from anything nature has yet produced.
The company's eventual goal is to enable all sorts of companies, from pharmaceutical makers producing new drugs to synthetic chemistry companies working on new enzymes, to come up with substances that would be impossible without its technology. That could include bacteria equipped with novel enzymes that could digest plastic, or new drugs tailored to individuals' particular cancers.
Karol Hausman’s goal is to create a universal AI that can power any robot. “We want to build a model that can control any robot to do any task, including all the robots that exist today, and robots that haven’t even been developed yet,” he says... (MORE - missing details)