There are two central questions about the ethics of artificial intelligence (AI):
* How can we build an ethical AI?
* Can we build an AI ethically?
The first question concerns the kinds of AI we might achieve — moral, immoral, or amoral. The second concerns the ethics of our achieving such an AI. The two are more closely related than a first glance might suggest. For much of technology, the National Rifle Association's neutrality argument might conceivably apply: "guns don't kill people, people kill people." But if we build a genuine, autonomous AI, we will arguably have had to build an artificial moral agent — an agent capable of both ethical and unethical behavior. The possibility of one of our artifacts behaving unethically raises moral problems for its development that no other technology does. Both questions presume a positive answer to a prior question: Can we build an AI at all? We shall begin our review there.