Scivillage.com Casual Discussion Science Forum

Full Version: All the ways AI could suck in 2024
https://gizmodo.com/all-the-ways-ai-coul...1851138040

INTRO: As 2024 begins, there has been plenty of speculation about what lies ahead for artificial intelligence. AI was the hottest industry in the world last year and it will likely continue to be so throughout 2024—and maybe the rest of your godforsaken life. That said, many of the concerns about this startling new technology have not been resolved or mitigated. While AI promises bold new capabilities for companies and web users, there are tons of potential harms that it could inflict upon us over the next twelve months. On that topic, here are some of the ways that AI could totally suck this year... (MORE - missing details)

COVERED:

More people could lose their jobs because of AI

AI will continue to be a major disinformation generator

AI will continue to make the entertainment industry more annoying

Get ready for more cloying enthusiasm from the worst parts of the tech world

Police technologies will get much creepier
There's a lot of hype that I don't listen to anymore.

Bots have been with us for a while; running websites online for years has shown me a mixture of automated bots, spiders, and agents. IRC and Newsgroups were littered with bots for years (Eggdrop bots, etc.), and in more recent years there were the automated phone messages and the like.

It's as if, when all these things happened, they happened in a vacuum, and now suddenly all the cool kids think that something "new" is actually "New".
(Jan 9, 2024 11:15 PM)stryder Wrote: [ -> ]There's a lot of hype that I don't listen to anymore.

Bots have been with us for a while; running websites online for years has shown me a mixture of automated bots, spiders, and agents. IRC and Newsgroups were littered with bots for years (Eggdrop bots, etc.), and in more recent years there were the automated phone messages and the like.

It's as if, when all these things happened, they happened in a vacuum, and now suddenly all the cool kids think that something "new" is actually "New".

Did they adapt and learn?
(Jan 10, 2024 12:57 AM)Secular Sanity Wrote: [ -> ]Did they adapt and learn?

Some did, but their limitation was memory and hardware. To be honest, that's still the problem today, though far more hardware can now be brought to bear by networking it (using many systems as nodes). There are still limitations in how such a neural network functions, however: while attempts to deal with attacks like Spectre were made to secure processing units from injection in certain OSes, it's still possible for a neural network to have been developed without a secure method of identifying its nodes. (It depends on whether a node runs directly on an OS or straight from a BIOS configuration.)

That would leave any AGIs using those networks prone to various overflow and timing attacks. I guess it really depends on whether the developers want an Achilles heel (or an AGI leash).
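To make the node-identification point concrete, here is a minimal sketch of how a coordinator might authenticate worker nodes before trusting their output. It is purely illustrative: the names (NODE_SECRET, sign_payload, verify_payload, the node IDs) are assumptions rather than anything from an existing framework, and a real deployment would use proper key distribution instead of a single shared secret. It relies only on the Python standard library; the constant-time comparison also touches on the timing-attack concern above.

```python
# Hypothetical sketch: verifying the identity of a worker node before
# accepting its results in a distributed compute pool.
# Standard library only; NODE_SECRET, sign_payload, verify_payload and the
# node IDs are illustrative names, not part of any real framework.
import hmac
import hashlib
import secrets

# Shared secret that a legitimately provisioned node would hold.
NODE_SECRET = secrets.token_bytes(32)

def sign_payload(node_id: str, payload: bytes, key: bytes = NODE_SECRET) -> str:
    """Node side: attach an HMAC tag binding the payload to this node's identity."""
    return hmac.new(key, node_id.encode() + payload, hashlib.sha256).hexdigest()

def verify_payload(node_id: str, payload: bytes, tag: str, key: bytes = NODE_SECRET) -> bool:
    """Coordinator side: reject results from nodes that cannot prove their identity.
    compare_digest runs in constant time, which also blunts timing attacks on the check."""
    expected = hmac.new(key, node_id.encode() + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# Example round trip
result = b"gradient-shard-042"
tag = sign_payload("node-7", result)
assert verify_payload("node-7", result, tag)      # legitimate node accepted
assert not verify_payload("node-8", result, tag)  # spoofed node rejected
```

Without some check of this kind at every node, the overflow and timing attacks mentioned above have a much larger surface to work with.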