
No surprise that adult-daycare state California is at the top of the list. Paranoia generated by watching too many AI boogeyman films.
- - - - - - - - - - -
Applying the precautionary principle to AI will kill tech progress
https://reason.com/2024/05/03/ai-regulat...han-is-ai/
EXCERPTS: Deploying the precautionary principle is a laser-focused way to kill off any new technology. As it happens, a new bill in the Hawaii Legislature explicitly applies the precautionary principle in regulating artificial intelligence (AI) technologies.
[...] With his own considerable foresight, the brilliant political scientist Aaron Wildavsky anticipated how the precautionary principle would actually end up doing more harm than good. "The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all," he wrote in his brilliant 1988 book Searching for Safety. "An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards…. Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials."
Among myriad other opportunities, AI could greatly reduce current harms by speeding up the development of new medications and diagnostics, autonomous driving, and safer materials.
R Street Institute Technology and Innovation Fellow Adam Thierer notes that the proliferation of over 500 state AI regulation bills like the one in Hawaii threatens to derail the AI revolution. He singles out California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act as being egregiously bad... (MORE - missing details)