Software Capstone II
Ethical Dilemmas of Automated Weapons
We have reached the point technologically where robots can be deployed to target and kill people. Some governments, such as those of Israel and South Korea, already field automated weapons systems that require only an “OK” from a human operator once a target has been identified. The ethical dilemmas this poses are significant because of the technology’s implications and its potential for misuse. Should this technology be used? And if so, by whom, for what purpose, and who is liable when things go wrong?
1. Should we trust programmers to write software that can make split-second ethical decisions?
2. Who would be legally culpable when the software does not act as intended: the programmer, the manufacturer, the government, or the corporation?
3. Is it moral to implement this technology at all? Should we always maintain an element of human control within these systems?