Here’s what young voters want from Joe Biden and Kamala Harris in the next four years
https://www.cnbc.com/2020/11/20/heres-wh...rris-.html
EXCERPT: ... So, with more young people engaged in the political system, what do they want to see Joe Biden and Kamala Harris deliver on in the next four years? [...] In a critical year for politics, many students said they voted for change this 2020 election — and for change to happen soon. They are hopeful for the future but will also be holding the Biden administration accountable... (MORE - details)
The ethical questions that haunt facial-recognition research
https://www.nature.com/articles/d41586-020-03187-3
EXCERPTS: In September 2019, four researchers wrote to the publisher Wiley to “respectfully ask” that it immediately retract a scientific paper. The study, published in 2018, had trained algorithms to distinguish faces of Uyghur people, a predominantly Muslim minority ethnic group in China, from those of Korean and Tibetan ethnicity.
China had already been internationally condemned for its heavy surveillance and mass detentions of Uyghurs in camps in the northwestern province of Xinjiang — which the government says are re-education centres aimed at quelling a terrorist movement. According to media reports, authorities in Xinjiang have used surveillance cameras equipped with software attuned to Uyghur faces.
As a result, many researchers found it disturbing that academics had tried to build such algorithms — and that a US journal had published a research paper on the topic. And the 2018 study wasn’t the only one: journals [...] had also published peer-reviewed papers that describe using facial recognition to identify ... members of other Chinese minority groups.
The complaint, which launched an ongoing investigation, was one foray in a growing push by some scientists and human-rights activists to get the scientific community to take a firmer stance against unethical facial-recognition research. [...] In the past year, there has been growing scrutiny of universities’ partnerships with companies or research programmes linked to mass surveillance in Xinjiang.
[...] Questionable research projects have popped up in the United States, too. On 5 May, Harrisburg University in Pennsylvania posted a press release declaring that researchers there had developed facial-recognition software “capable of predicting whether someone is likely going to be a criminal”, with “80 percent accuracy and no racial bias”. The announcement triggered a wave of criticism, as had previous studies that hark back to the discredited work of nineteenth-century physiognomists. One notorious 2016 study reported that a machine-learning algorithm could spot the difference between images of non-criminals and those of convicted criminals that were supplied by a Chinese police department.
[...] Researchers who work on technology that recognizes or analyses faces point out that it has many uses, such as to find lost children, track criminals, access smartphones and cash machines more conveniently, help robots to interact with humans by recognizing their identities and emotions and, in some medical studies, to help diagnose or remotely track consenting participants. “There are a number of lawful and legitimate applications of face and biometric recognition which we need in our society,” says Jain.
But researchers must also recognize that a technology that can remotely identify or classify people without their knowledge is fundamentally dangerous — and should try to resist it being used to control or criminalize people, say some scientists... (MORE - details)