Cynical Sindee: Europe seems determined to bureaucratically handicap itself in yet another area. Unfortunately, there are impressionable primates in the New World that may start imitating these arboreal gesticulations of the EU and UN, if they sense it as yet another fashionable venue to score virtue points for themselves and an opportunity to invent a new (unnecessary?) domain of paperwork-pushing and "monitor/manage the restrictions" jobs.
- - - - - - -
AI developers often ignore safety in the pursuit of a breakthrough – so how do we regulate them without blocking progress?
https://theconversation.com/ai-developer...ess-155825
INTRO: Ever since artificial intelligence (AI) made the transition from theory to reality, research and development centers across the world have been rushing to come up with the next big AI breakthrough.
This competition is sometimes called the "AI race." In practice, though, there are hundreds of "AI races" heading towards different objectives. Some research centers are racing to produce digital marketing AI, for example, while others are racing to pair AI with military hardware. Some races are between private companies and others are between countries.
Because AI researchers are competing to win their chosen race, they may overlook safety concerns in order to get ahead of their rivals. But safety enforcement via regulations is undeveloped, and reluctance to regulate AI may actually be justified: it may stifle innovation, reducing the benefits that AI could deliver to humanity.
Our recent research, carried out alongside our colleague Francisco C. Santos, sought to determine which AI races should be regulated for safety reasons, and which should be left unregulated to avoid stifling innovation. We did this using a game theory simulation.
AI supremacy. The regulation of AI must consider the harms and the benefits of the technology. Harms that regulation might seek to legislate against include the potential for AI to discriminate against disadvantaged communities and the development of autonomous weapons. But the benefits of AI, like better cancer diagnosis and smart climate modeling, might not exist if AI regulation were too heavy-handed. Sensible AI regulation would maximize its benefits and mitigate its harms.
But with the US competing with China and Russia to achieve "AI supremacy"—a clear technological advantage over rivals—regulations have thus far taken a back seat. This, according to the UN, has thrust us into "unacceptable moral territory".
AI researchers and governance bodies, such as the EU, have called for urgent regulations to prevent the development of unethical AI. Yet the EU's white paper on the issue has acknowledged that it's difficult for governance bodies to know which AI race will end with unethical AI, and which will end with beneficial AI... (MORE)
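For readers curious what a "game theory simulation" of an AI race might even look like, here is a toy sketch. All numbers and strategy names (B, s, p_dis, D, the SAFE/UNSAFE choice, the penalty value) are illustrative assumptions, not the researchers' actual model: two developers each choose to develop SAFEly (slow, careful) or UNSAFEly (fast, skipping precautions); unsafe play wins the race more often but risks a costly disaster, and a regulator can attach a penalty to it.

```python
# Toy sketch (assumed parameters, NOT the authors' actual model) of an
# "AI race" as a two-player game. Each developer plays 'S' (SAFE) or
# 'U' (UNSAFE). UNSAFE gains a speed advantage in the race but carries
# a probability of a costly disaster; a regulator can add a penalty.

B = 10.0      # prize for winning the race
s = 0.25      # speed edge: P(UNSAFE beats SAFE) = 0.5 + s
p_dis = 0.3   # probability an UNSAFE effort causes a disaster
D = 8.0       # cost of that disaster to the developer

def payoff(me, other, penalty=0.0):
    """Expected payoff for strategy `me` against strategy `other`."""
    if me == 'S':
        win = 0.5 if other == 'S' else 0.5 - s
        return win * B
    win = 0.5 if other == 'U' else 0.5 + s
    return win * B - p_dis * D - penalty

def best_response(other, penalty=0.0):
    """Which strategy maximizes expected payoff against `other`?"""
    return max(('S', 'U'), key=lambda a: payoff(a, other, penalty))

# Unregulated: racing unsafely is the best response to either strategy,
# so both developers cut corners even though mutual SAFE pays more.
print(best_response('S'), best_response('U'))                    # U U
# A sufficiently large regulatory penalty flips the equilibrium to SAFE.
print(best_response('S', penalty=3.0), best_response('U', penalty=3.0))  # S S
```

The point of the sketch: with these assumed numbers, unsafe racing is a dominant strategy until regulation changes the payoffs, which is the dilemma the article describes; the researchers' actual question is when that penalty helps and when it merely stifles the race.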
- - - - - - -