
There is no difference between computer art and human art
https://aeon.co/ideas/there-is-no-such-t...l-just-art
EXCERPT: [...] So it’s increasingly not just dorm-room hackers and cloistered academics pecking at computer art to show off their chops or get papers published. Last month, the Google Brain team announced Magenta, a project to use machine learning for exactly the purposes described here, and asked the question: ‘Can we use machine learning to create compelling art and music?’ (The answer is pretty clearly already ‘Yes,’ but there you go.) The project follows in the footsteps of Google’s Deep Dream Generator, which reimagines images in arty, dreamy (or nightmarish) ways, using neural networks.
But the honest-to-God truth, at the end of all of this, is that this whole notion is in some way a put-on: a distinction without a difference. ‘Computer art’ doesn’t really exist in any more provocative sense than ‘paint art’ or ‘piano art’ does. The algorithmic software was written by a human, after all, using theories thought up by a human, using a computer built by a human, using specs written by a human, using materials gathered by a human, at a company staffed by humans, using tools built by a human, and so on. Computer art is human art – a subset rather than a distinction....
Ethics of Artificial Intelligence Conference at NYU : Ethics Etc
http://ethics-etc.com/2016/07/08/ethics-...ce-at-nyu/
EXCERPT: The NYU Center for Bioethics and the NYU Center for Mind, Brain and Consciousness will host a conference on “The Ethics of Artificial Intelligence” this October at NYU.
Recent progress in artificial intelligence (AI) makes questions about the ethics of AI more pressing than ever. Existing AI systems already raise numerous ethical issues: for example, machine classification systems raise questions about privacy and bias. AI systems in the near-term future raise many more issues: for example, autonomous vehicles and autonomous weapons raise questions about safety and moral responsibility. AI systems in the long-term future raise more issues in turn: for example, human-level artificial general intelligence systems raise questions about the moral status of the systems themselves.
This conference will explore these questions about the ethics of artificial intelligence and a number of other questions, including: What ethical principles should AI researchers follow? Are there restrictions on the ethical use of AI? What is the best way to design morally beneficial AI? Is it possible or desirable to build moral principles into AI systems? When AI systems cause benefits or harm, who is morally responsible? Are AI systems themselves potential objects of moral concern? What moral framework is best used to assess questions about the ethics of AI?
Speakers and panelists will include...