Oct 20, 2025 09:36 PM
https://www.eurekalert.org/news-releases/1102568
INTRO: As local governments adopt new technologies that automate many aspects of city services, there is an increased likelihood of tension between the ethics and expectations of citizens and the behavior of these “smart city” tools. Researchers are proposing an approach that will allow policymakers and technology developers to better align the values programmed into smart city technologies with the ethics of the people who will be interacting with them.
“Our work here lays out a blueprint for how we can both establish what an AI-driven technology’s values should be and actually program those values into the relevant AI systems,” says Veljko Dubljević, corresponding author of a paper on the work and Joseph D. Moore Distinguished Professor of Philosophy at North Carolina State University.
At issue are smart cities, a catch-all term that covers a variety of technological and administrative practices that have emerged in cities in recent decades. Examples include automated technologies that dispatch law enforcement when they detect possible gunfire, or technologies that use automated sensors to monitor pedestrian and auto traffic to control everything from street lights to traffic signals.
“These technologies can pose significant ethical questions,” says Dubljević, who is part of the Science, Technology & Society program at NC State.
“For example, if an AI technology presumes it has detected a gunshot and sends a SWAT team to a place of business, but the noise was actually something else, is that reasonable?” Dubljević asks. “Who decides to what extent people should be tracked or surveilled by smart city technologies? Which behaviors should mark someone out as an individual who should be under escalated surveillance? These are reasonable questions, and at the moment there is no agreed-upon procedure for answering them. And there is definitely not a clear procedure for how we should train AI to answer these questions.”
To address this challenge, the researchers looked to something called the Agent Deed Consequence (ADC) model...
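The excerpt is truncated here, but the ADC model is generally described as forming a moral judgment by combining separate evaluations of the Agent (the actor's character and intent), the Deed (the action itself), and the Consequence (the outcome). As a minimal illustrative sketch only — the numeric scale, weights, and function names below are assumptions for illustration, not anything specified by the researchers — one way a smart city system might encode such a judgment:

```python
from dataclasses import dataclass

@dataclass
class ADCEvaluation:
    """Valenced moral evaluations on a -1.0 (negative) to +1.0 (positive) scale.

    Hypothetical encoding: the ADC model only says judgments combine
    evaluations of Agent, Deed, and Consequence; the numeric scale and
    weights here are illustrative assumptions.
    """
    agent: float        # evaluation of the actor's character/intentions
    deed: float         # evaluation of the action itself
    consequence: float  # evaluation of the outcome

def adc_judgment(ev: ADCEvaluation,
                 weights: tuple[float, float, float] = (1.0, 1.0, 1.0)) -> str:
    """Combine the three evaluations into a coarse moral judgment.

    A simple weighted sum is one plausible aggregation rule; the paper
    may use a different combination function.
    """
    w_a, w_d, w_c = weights
    score = w_a * ev.agent + w_d * ev.deed + w_c * ev.consequence
    if score > 0:
        return "morally acceptable"
    if score < 0:
        return "morally unacceptable"
    return "morally ambiguous"

# Example: the false gunshot alert from the article (hypothetical values).
# The system's intent is protective (positive agent), dispatching SWAT to a
# business over an innocuous noise is a questionable deed, and the outcome
# (frightened bystanders, wasted resources) is negative.
alert = ADCEvaluation(agent=0.5, deed=-0.4, consequence=-0.8)
print(adc_judgment(alert))  # -> "morally unacceptable"
```

Whatever the actual aggregation rule, the appeal of this structure is that it keeps the three factors separable, so policymakers could in principle tune how much the system's intent, its action, and its outcome each count toward a judgment.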
