Can AI help us talk to animals? + AI discovered an alternate physics

C C
Can artificial intelligence really help us talk to the animals?

EXCERPT: . . . Until recently, decoding has mostly relied on painstaking observation. But interest has burgeoned in applying machine learning to deal with the huge amounts of data that can now be collected by modern animal-borne sensors. “People are starting to use it,” says Elodie Briefer, an associate professor at the University of Copenhagen who studies vocal communication in mammals and birds. “But we don’t really understand yet how much we can do.”

Briefer co-developed an algorithm that analyses pig grunts to tell whether the animal is experiencing a positive or negative emotion. Another, called DeepSqueak, judges whether rodents are in a stressed state based on their ultrasonic calls. A further initiative – Project CETI (which stands for the Cetacean Translation Initiative) – plans to use machine learning to translate the communication of sperm whales.

Yet ESP (the Earth Species Project) says its approach is different, because it is not focused on decoding the communication of one species, but all of them. While Raskin acknowledges there will be a higher likelihood of rich, symbolic communication among social animals – for example primates, whales and dolphins – the goal is to develop tools that could be applied to the entire animal kingdom. “We’re species agnostic,” says Raskin. “The tools we develop… can work across all of biology, from worms to whales.”

The “motivating intuition” for ESP, says Raskin, is work that has shown that machine learning can be used to translate between different, sometimes distant human languages – without the need for any prior knowledge.

This process starts with the development of an algorithm to represent words in a physical space. In this many-dimensional geometric representation, the distance and direction between points (words) describes how they meaningfully relate to each other (their semantic relationship). For example, “king” has a relationship to “man” with the same distance and direction that “queen” has to “woman”. (The mapping is not done by knowing what the words mean but by looking, for example, at how often they occur near each other.)
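The geometric idea above can be shown with a tiny sketch. The vectors here are hand-picked toy values, not a trained model; real embeddings have hundreds of dimensions learned from co-occurrence statistics.

```python
import numpy as np

# Toy 3-dimensional "embeddings" chosen by hand to illustrate the idea;
# real models learn these vectors from co-occurrence statistics.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.8, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.1, 0.9]),
}

# "King is to man as queen is to woman" means the offset vectors share
# the same direction and length.
offset_royal = emb["king"] - emb["man"]
offset_royal_2 = emb["queen"] - emb["woman"]
print(np.allclose(offset_royal, offset_royal_2))  # True for these toy vectors

# Equivalently: king - man + woman lands nearest to queen.
target = emb["king"] - emb["man"] + emb["woman"]
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
print(nearest)  # queen
```

With trained embeddings the match is approximate rather than exact, but the nearest-neighbour lookup works the same way.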

It was later noticed that these “shapes” are similar for different languages. And then, in 2017, two groups of researchers working independently found a technique that made it possible to achieve translation by aligning the shapes. To translate a word from English to Urdu, align the two shapes and find the Urdu point closest to the English word’s point. “You can translate most words decently well,” says Raskin.
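One standard way to "align the shapes" is orthogonal Procrustes: find the rotation that best maps one point cloud onto the other. This is a simplified sketch, assuming the two embedding sets really are the same cloud up to a rotation (here the second language is simulated by rotating the first and adding noise); the 2017 work handles the harder unsupervised case.

```python
import numpy as np

# Hypothetical setup: X holds embeddings of some words in language 1,
# Y the embeddings of their translations in language 2. For this toy,
# Y is X rotated by an unknown orthogonal matrix plus noise, standing
# in for the observation that the two clouds share a "shape".
rng = np.random.default_rng(0)
d, n = 5, 40
X = rng.normal(size=(n, d))
Q_true, _ = np.linalg.qr(rng.normal(size=(d, d)))
Y = X @ Q_true + 0.01 * rng.normal(size=(n, d))

# Orthogonal Procrustes: the rotation W minimising ||X W - Y|| has the
# closed form W = U V^T, where U, V come from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# After alignment, each point in language 1 sits close to its
# translation, so nearest-neighbour lookup acts as a dictionary.
aligned = X @ W
print(np.abs(aligned - Y).max() < 0.1)  # True
```

The unsupervised methods from 2017 learn such a mapping without any seed dictionary, typically by matching the distributions of the two clouds before refining with Procrustes.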

ESP’s aspiration is to create these kinds of representations of animal communication – working on both individual species and many species at once – and then explore questions such as whether there is overlap with the universal human shape. We don’t know how animals experience the world, says Raskin, but there are emotions, such as grief and joy, that some animals seem to share with us and may well communicate about with others of their species. “I don’t know which will be the more incredible – the parts where the shapes overlap and we can directly communicate or translate, or the parts where we can’t.”

He adds that animals don’t only communicate vocally. Bees, for example, let others know of a flower’s location via a “waggle dance”. There will be a need to translate across different modes of communication too.

The goal is “like going to the moon”, acknowledges Raskin, but the idea also isn’t to get there all at once. Rather, ESP’s roadmap involves solving a series of smaller problems necessary for the bigger picture to be realised. This should see the development of general tools that can help researchers trying to apply AI to unlock the secrets of species under study.

For example, ESP recently published a paper (and shared its code) on the so-called “cocktail party problem” in animal communication, in which it is difficult to discern which individual in a group of animals of the same species is vocalising in a noisy social environment... (MORE - missing details)
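To make the problem concrete, here is a deliberately crude toy, not ESP's published method: two simulated individuals call at different characteristic frequencies, and each one-second window of the mixed recording is attributed to whoever dominates their frequency band. Real bioacoustic source separation must cope with overlapping calls, unknown call types, and far messier spectra.

```python
import numpy as np

# Hypothetical scenario: individual A calls at 50 Hz for the first two
# seconds, individual B at 200 Hz for the last two; we hear one mixture.
rate = 1000                                      # samples per second
t = np.arange(0, 4, 1 / rate)
call_a = np.sin(2 * np.pi * 50 * t) * (t < 2)
call_b = np.sin(2 * np.pi * 200 * t) * (t >= 2)
mix = call_a + call_b + 0.05 * np.random.default_rng(1).normal(size=t.size)

# Label each 1-second window with whichever individual has more energy
# at its characteristic frequency (rfft of a 1 s window at 1000 Hz
# puts bin k at exactly k Hz).
labels = []
for start in range(0, t.size, rate):
    spectrum = np.abs(np.fft.rfft(mix[start:start + rate]))
    labels.append("A" if spectrum[50] > spectrum[200] else "B")
print(labels)  # ['A', 'A', 'B', 'B']
```

The hard version of the problem is when calls overlap in time and frequency, which is where learned separation models come in.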

An AI just independently discovered alternate physics

EXCERPT: . . . a new AI program developed by researchers at Columbia University has seemingly discovered its own alternative physics. After being shown videos of physical phenomena on Earth, the AI didn't rediscover the current variables we use; instead, it actually came up with new variables to explain what it saw.

To be clear, this doesn't mean our current physics is flawed or that there's a better-fitting model to explain the world around us. (Einstein's laws have proved incredibly robust.) But those laws could only exist because they were built on the back of a pre-existing 'language' of theory and principles established by centuries of tradition.

Given an alternative timeline where other minds tackled the same problems with a slightly different perspective, would we still frame the mechanics that explain our Universe in the same way?

Even with new technology imaging black holes and detecting strange, distant worlds, these laws have held up time and time again (side note: quantum mechanics is a whole other story, but let's stick to the visible world here).

This new AI only looked at videos of a handful of physical phenomena, so it's in no way placed to come up with new physics to explain the Universe or try to best Einstein. This wasn't the goal here.

"I always wondered, if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the Universe in a different way?" says roboticist Hod Lipson from the Creative Machines Lab at Columbia.

"In the experiments, the number of variables was the same each time the AI restarted, but the specific variables were different each time. So yes, there are alternative ways to describe the Universe and it is quite possible that our choices aren't perfect."

[...] "Without any prior knowledge of the underlying physics, our algorithm discovers the intrinsic dimension of the observed dynamics and identifies candidate sets of state variables," the researchers write in their paper. This suggests that in the future, AI could potentially help us to identify variables that underpin new concepts we're not currently aware of... (MORE - missing details)
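What "discovering the intrinsic dimension" means can be illustrated with a much simpler linear stand-in than the Columbia system (which trains neural networks on raw video). Here a system with two true state variables is observed through twenty mixed, noisy measurement channels, and counting the significant principal components recovers the number 2. The measurement setup and the 1% variance threshold are arbitrary choices for this sketch.

```python
import numpy as np

# Two underlying state variables (think angle and angular velocity of
# a pendulum), observed through 20 noisy mixed channels -- a crude
# stand-in for pixels in a video.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 500)
state = np.column_stack([np.sin(t), np.cos(t)])   # 2 true variables
mixing = rng.normal(size=(2, 20))                 # unknown "camera"
obs = state @ mixing + 0.01 * rng.normal(size=(500, 20))

# Count principal components that carry more than 1% of the variance.
obs = obs - obs.mean(axis=0)
_, s, _ = np.linalg.svd(obs, full_matrices=False)
variance = s**2 / np.sum(s**2)
intrinsic_dim = int(np.sum(variance > 0.01))
print(intrinsic_dim)  # 2
```

The interesting part of the Columbia result is that the nonlinear analogue of this also yields *candidate state variables*, and, as Lipson notes above, different runs can settle on different but equally valid sets of them.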
