Why “stealth research” is the bane of scientific inquiry

#1
C C
https://www.salon.com/2022/05/22/stealth-research/

EXCERPT: . . . Theranos is hardly unique, though criminal prosecutions are rare. Companies trying to raise money or sell products often make claims that cannot be verified. Startups are rife with "fake-it-till-you-make-it" mentalities among employees and founders, such that the idea has become cliché.


But stealth research is not an innocent practice, as the Theranos case makes clear. It is certainly understandable that companies want to protect their intellectual property and reap financial benefits from the research they fund. On the other hand, stealth research scams customers and investors and hinders genuine scientific advances. Even worse, overhyped claims undermine the credibility of science and scientists.

A lot of stealth research today involves artificial intelligence (AI), which Sundar Pichai, Alphabet's CEO, has compared to mankind's harnessing of fire and electricity — a comparison that itself attests to the overhyped atmosphere surrounding AI research. For many companies, press releases are more important than peer review.

Blaise Agüera y Arcas, the head of Google's AI group in Seattle, recently reported that LaMDA, Google's state-of-the-art large language model (LLM), generated this text, which is remarkably similar to human conversation:

Blaise: How do you know if a thing loves you back?

LaMDA: There isn't an easy answer to that. How do you know if a person loves you back?

Blaise: I'm not sure I know the answer to that one either. Do you?

LaMDA: Some people figure it out though. You have to try and use your best judgment based on how you feel about the person and how you think they feel about you.


Based on such exchanges, Agüera y Arcas argued that "statistics do amount to understanding, in any falsifiable sense."

We cannot assess how often LaMDA's conversations are this coherent because Google's research is a great example of stealth research: Google does not make LaMDA available for outsiders to test, nor are insiders allowed to share results unless they have special permission.

This January, Andrew Gelman, a talented statistician and prominent critic of sloppy research, challenged Agüera y Arcas to test LaMDA with a short list of questions that might demonstrate an effective, if artificial, understanding of the real world, such as "Is it safe to walk downstairs backwards if I close my eyes?" There has been no response, though it is highly likely that Agüera y Arcas is curious enough to have tried the questions.
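The kind of outside testing Gelman proposed could, in principle, be run as a small reproducible harness: a fixed list of probe questions submitted to the model, with the full transcript published for independent review. The sketch below is a minimal illustration of that idea, not anything Google or Gelman has published. The `ask()` function is a hypothetical wrapper around whatever model interface a tester is granted; here it is a stub so the harness itself runs, and its canned answer is a placeholder, not LaMDA output. The second probe question is an illustrative addition, not from Gelman's list.

```python
# Minimal sketch of a reproducible probe-question harness for a language model.
# Assumptions: `ask()` stands in for a real model API call; the stub reply
# below is a placeholder, not an actual model answer.

PROBE_QUESTIONS = [
    # From Gelman's challenge, as quoted in the article:
    "Is it safe to walk downstairs backwards if I close my eyes?",
    # Hypothetical additional probe, added here for illustration only:
    "Can I improve my test scores by studying after taking the test?",
]

def ask(question: str) -> str:
    """Stub for the model under test; replace with a real API call."""
    return "(model answer would appear here)"

def run_probes(questions: list[str]) -> list[tuple[str, str]]:
    """Submit each probe question and return the full (question, answer)
    transcript, so outsiders can review every exchange, not a curated sample."""
    return [(q, ask(q)) for q in questions]

if __name__ == "__main__":
    for q, a in run_probes(PROBE_QUESTIONS):
        print(f"Q: {q}\nA: {a}\n")
```

Publishing the complete transcript, rather than hand-picked exchanges like the one above, is precisely what would distinguish verifiable testing from stealth research.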

This is stealth research. A pure scientist might share the code so that it can be improved by others. A scientist who wants proprietary protection while demonstrating scientific advances could allow testing in a way that precludes reverse engineering. Google's reluctance to submit LaMDA to outside testing suggests that its abilities are more limited and less robust than Google would like us to recognize... (MORE - missing details)