

Blog: Empathy Through Design: Developing an AI Ethics Design Toolkit For Accountable, Responsible and…

Accountability, Responsibility and Transparency Statement

This inquiry declares the need for AI ethics design to practice what it seeks — to be accountable, responsible and transparent. In the spirit of that declaration, this proposal is an early draft, written as part of an application to join the ART-AI doctoral program at the University of Bath, and posted online after being submitted. I welcome your comments, questions and inquiries via Twitter at @bxmx.

Research Questions

AI ethics standards, guidelines and declarations have moved from straightforward robot-based “Laws” (Asimov, 1950; Murphy and Wood, 2009; Boden, Bryson et al., 2011; Prescott and Szollosy, eds., 2017) towards principles-based AI standards, guidelines and declarations (Asilomar Principles, Future of Life Institute, 2017; Association for Computing Machinery, 2017; Japanese Society for Artificial Intelligence, 2017; Future Society Science, Law and Society Initiative, Economou, 2017; the Montreal Declaration, 2017; IEEE General Principles of Ethical Autonomous and Intelligent Systems, 2017; UNI Global Union, 2017; Pichai, AI for Google, 2018; Ethically Aligned Design, First Edition, IEEE, 2019).

While many of the later principles and declarations documents emerge from the earlier works, there is an ever-growing list of individual standards, principles and declarations upon which nascent AI ethics practice is called to be based. This burgeoning corpus of AI ethics standards has led to calls for ethics to be “normalized in the education and research of AI” (Choudhury, Lee and Kurenkov, 2019). The hoped-for outcome of normalized AI ethics is AI system deployments which are accountable, responsible and transparent — vital to support human flourishing as AI develops and matures.

Normalization of AI ethics education faces steep challenges while presenting tremendous opportunities to evolve AI ethics design processes themselves in ways that foster closer collaboration and human empathy: key needs for human flourishing. AI’s ways of “knowing us” can, if we design the process for designing accountable, responsible and transparent AI systems well, profoundly enhance our ways of knowing each other. To harness this tremendous opportunity, we need to move past standards, beyond design practice guidelines, and into designing AI ethics design practices which build trusted communities and which are themselves accountable, responsible and transparent.

Current AI ethics standards, guidelines and declarations are complex, often contradictory documents, developed in different jurisdictions, at varying local, national and international scales, and with hazy enforcement mechanisms. The Montreal Declaration alone contains over 64 standards and principles, and 2017 alone saw the publication of over 200 different AI ethics standards statements. This sheer volume of AI ethics standards creates complexity, when complexity is already one of the “black box problems” with AI itself (Floridi, 2017).

Additionally, despite AI development in China attracting the largest amounts of venture capital globally in 2017, and the powerful impact of sheer population size on Chinese machine learning data sets (Saiidi, 2018), the Chinese Association for Artificial Intelligence has only recently begun to draft AI ethics guidelines, a move echoed by Chinese private-sector leaders (Jing, 2019). Some current non-Chinese AI ethics standards, guidelines and declarations appear to be in direct cultural conflict with Chinese values. This conflict threatens to widen gulfs between centres of AI development, deployment and AI ethics practice.

While grappling with the vast current and potential impacts of AI on society, the complexity of these issues has in turn created a complex standards environment that is in and of itself a form of “black box”. The so-called “AI race” is mirrored by an AI ethics standards race. As Virginia Dignum reminds us, competition here is not the answer; there is no AI race. Either we all win together, or we all lose together (Dignum, 2019).

Additionally, many of the standards statements, while born of good intentions, focus on avoiding (so-called) AI failures instead of on achieving AI outcomes that foster human flourishing. “AI failures” are rarely solely failures of the algorithms themselves (Calvin, 2018). The failures are instead most often the result of compounding systemic failures that the use of AI highlights. AI is neither the sole root cause of biased parole decisions (Corbett-Davies et al., 2017) nor of a fatal collision involving a self-driving car (Neidermeyer, 2019). AI instead acts like a lens which brings into focus the existence of larger complex, interacting, multi-systemic failures.

Yet, in blaming AI for the failure of human systems, it is often anthropomorphized by imbuing it with a form of pseudo-moral agency. Overly humanizing AI results in studies and articles calling out “killer autonomous cars” through trolley-problem discussions which needlessly explore what decisions a self-driving car should make (Awad, Rahwan, 2018), or declaring that biased judge-bots reign in courtrooms (Angwin et al., 2016; Noughton, 2016). Anthropomorphizing AI misses the collective systemic inequalities that led to circumstances in which failure was essentially inevitable (Corbett-Davies et al., 2017; Neidermeyer, 2019), distracting us from acting to address these inequalities. Further, shifting the locus of moral agency to AI disempowers humans as the creators of artifacts like artificial intelligence and of the systems we create which use AI, in part, to solve problems (Bryson, 2010; Floridi, 2019).

“Calling a robot a moral agent is not only false, but an abrogation of our own responsibility.” (Bryson, 2010)

This abrogation of human responsibility to machines (Bryson, 2010) through the trolley problem, and other practices which mistakenly envision machines as moral agents, creates additional systems-based failure risks (Bar-Haim et al., 2007; Bachnio et al., 2017). Removing human connection to the decision-making process reduces the vital emotional connection to ethical decisions (Greene, 2001).

Calls for ethics by design (Craglia et al., 2018; Floridi, 2018) as key to evolving our use of AI appear in many of the standards, guideline and declaration processes of the past few years. Ethical design in these complex, multi-system environments requires more than standards and more than design process guidelines. It requires revolutionary process design approaches which engage those who are the human components of these systems. Ecotone design approaches offer much towards informing a new innovation-focused, human-centred design process in mangrove-like, grey-area ecology spaces (Pendleton-Julian, 2009). Ecotone design approaches mirror those found in Agile processes (Beck, Kent et al., 2001; Singh, 2008), but present a matured, user-centred, outcome-focused elastic process built on a foundation of accountability, responsibility and transparency.

“The twenty-first century is one that promises perpetual and persistent change. The ecotone analogy, as more than a metaphor — as a structural and operational construct — is invaluable as a model that uses disturbance and change to develop talent that can sustain itself and thrive on disturbance and change. This learning environment is intended to cultivate the education/evolution of… new capacities, behaviors, and tendencies that are open, adaptable and elastic.” (Pendleton-Julian, 2009).

Pendleton-Julian’s ecotone design approach above mirrors the second Agile principle “(w)elcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.” (Beck, Kent et al, 2001)

Agile development processes place the customer/user at the centre. This same user-focus is echoed in UX design principles (Unger and Chandler, 2012), and in calls for users to be at the centre of engagement, technology and AI designs (Salvo 2001, Couldry, 2003, Findeli 2018, Floridi, 2019).

“Ludifying and enveloping are a matter of designing, or sometimes re-designing, the realities with which we deal (Floridi 2019). So, the foreseeable future of AI will depend on our design abilities and ingenuity.” (Floridi, 2019)

Emerging AI ethics design principles handbooks, such as IBM’s Everyday Ethics for Artificial Intelligence: A practical guide for designers & developers (Cutler, Pribić and Humphrey, 2018) and the IEEE’s Ethically Aligned Design, First Edition: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems (Havens et al., 2019) are the current steps in creating design processes. Courageously iterative, these guides are aimed at designers, developers and members of technology associations and standards bodies, but have yet to reach policy makers, business leaders and, most importantly, end users.

For all humans to harness AI’s potential to enhance our flourishing, we must be able to foster trust through how we design and deploy these tools. In order to foster trust we must build empathy with each other as essential parts of AI systems. Working together broadly is the only way forward. “Collaboration requires sufficient commonality of collaborating intelligences to create empathy — the capacity to model the other’s goals based on one’s own” (Bryson et al., IEEE, p. 102, 2017).

Moving from standards, through design guidelines, to designing AI ethics processes themselves calls us to answer three key research questions:

  1. How do we align ethical AI design principles and processes and accountable, responsible, and transparent AI to build collaboration, connection and human empathy?
  2. Where are AI ethics standards, guidelines and declarations similar, where are they different, and can they be successfully unified into a straightforward, deconflicted design document?
  3. What does a design-based process for ethical, accountable, responsible and transparent artificial intelligence look like?

Literature Review | The Current State of AI Ethics

Ethics-based approaches to developing Artificial Intelligence (AI) systems, and to the deployment of these systems, have been with us since Turing’s seminal paper on “thinking machines” (Turing, 1950). As advances in Machine Learning, Natural Language Processing (NLP), Deep Learning, and other forms of algorithmically supported decision-making continue, AI-associated artifacts are increasingly deployed on our roads, in our doctors’ offices, in our marketplaces, our places of work, our homes, our courts and our banks. AI system (AIS) failures, along with the resulting fears and concerns of further failures, have driven a large increase in academic and news calls for “AI ethics” (Choudhury, Lee and Kurenkov, 2019). AI ethics — particularly design-based solutions — is held as a panacea for AI risks (Floridi, 2019).

AI risks and opportunities are onlife, infosphere risks and opportunities (Floridi, 2014, 2019). The current advancing state of AI is the result of several concurrent technological developments, including the global internet, advances in graphics processing units (driven in large part by the rise of online gaming) and breakthroughs in the development of neural nets and deep learning, none of which would exist as they do without global, ubiquitous, connected digital infrastructure, protocols and systems. AI problems are onlife, infosphere problems and must be addressed as such.

AI risks fall into two central categories (World Economic Forum, 2017): firstly, the attention-grabbing, existential risk of the catastrophic development and deployment of a form of artificial super-intelligence in a “singularity”-type scenario (Bostrom, 2014); and secondly, the far more mundane, but already partially realized, risk of AI acting as an inequality accelerator and a source of doubts born of unreliability.

Regardless of scale, AI risks of both types are ultimately risks of unreliability. “Algorithmic bias”, the “black box” problem, SkyNet, and the fatal collision between an autonomous Uber and a pedestrian crossing the street are all problems of doubt and trust. In writing about physicians’ distrust of IBM’s Watson for Oncology, Bloomberg explains that “(i)f people don’t know how AI comes up with its decisions, they won’t trust it” (Bloomberg, 2018).

Doubt, as a risk to artificial intelligence systems’ promise, was with us long before Turing (1950) and Gödel (1931) debated (Copeland, 2008) the fundamental abilities of computers to resolve uncertainty. Yet Turing’s famed test, and his description of the primarily emotional responses of humans outsmarting Turing machines (Turing, 1950) to relieve their machine-fostered doubts, are descriptions of doubt’s risk. Floridi’s description of the damage fake news causes in the infosphere is a description of the same kind of unease that Turing imparts to his “imitation game” participants and that AI doubt creates.

“At some point, you don’t know what is what. And that is the real damage….”

(Floridi, 2017)

With AI maturing the internet, we, as the society which lives in an ever-new world, are at the forefront of a new way of living in near-perpetual doubt. We are a “4th revolution” information society, living in a maximal infosphere (Floridi, 2014) that is increasingly autonomous. In the Onlife Manifesto, Floridi, Taddeo, Ess, Broadbent, Lobet-Maris and others (2015) think together about the challenges, opportunities and fundamental changes that our onlife existences bring to human capacity for empathy (Dewandre, 2015), attention (Broadbent & Lobet-Maris, 2015) and understanding (Ess, 2015). Changes that threaten to reduce our capacity for empathy, attention and understanding aren’t created by AI, but by our current post-hoc “hacker way” (Zuckerberg, 2012) approach to technology development. Again, these risks aren’t inherent in algorithms themselves, but are born of the system development and business processes which deploy them without regard to end users (Kirsch, 2017).

Our existence in the infosphere is an interconnected, networked existence, and the risks that AI standards, declarations and guidelines seek to mitigate are risks unique to the infosphere. These connections locate us in a transitional space, a digital mangrove (Floridi, 2014) or ecotone (Pendleton-Julian, 2009) environment defined by continuous (Pendleton-Julian, 2009), seemingly chaotic (Margetts et al., 2016) upheaval. Onlife doubt, as a central fallout from AI failures, is part of this upheaval caused by the conflict between complex systems.

Explainability is held as one of the core requirements of ethical AI systems appropriate for deployment (Gunning, DARPA, 2017), whether in the form of counterfactuals (Wachter, 2018) or simply as a way to make algorithmic outcomes acceptable to users (Kirsch, 2018), businesses and other organizations (Chander, Srinivasan, Chelian et al., 2018). But as Wachter, Kirsch and Danish Tech Ambassador Casper Klynge point out, “one of the largest risks is that we lose faith in the power of technology and its ability to raise the human condition” (Klynge, Azzar, 16:10–16:29, 2018). AI doubts are crises of faith.

In speaking directly to designers and developers, IBM’s Everyday Ethics for Artificial Intelligence: A practical guide for designers & developers handbook declares “(t)o create and foster trust between humans and machines, you must understand the ethical resources and standards available for reference during the designing, building, and maintenance of AI” (Cutler, 2018). To truly build trust in AI, we must go further than understanding guidelines and standards. Rory Sutherland reminds us that “even experts can lose the plot. We fall back on a communication schema which is primarily about transmitting information, rather than as a means of generating and arousing emotions: trust, confidence, affection” (Sutherland, 2017).

Using design as research (Findeli, 2008) to explore how we can design AI ethics that generate trust, confidence and empathy is key to deploying user-centred, outcome-focussed systems. The very “unknowable” nature of complex systems requires that these design approaches be robustly innovative (Floridi, 2019) and elastic (Pendleton-Julian, 2009), while founded on the same outcomes that we seek: those built of accountability, responsibility and transparency.

Goal, Outline and Methodology

The goal of this inquiry is the creation of an adaptable, globally accessible AI ethics design process toolkit that builds trust in AI ethics design processes and fosters empathy for humans as users and key components of AI systems.


Chapter 1 | The Case for Design Ethics in AI | Accountable, Responsible, Transparent AI Design in the Autonomous Infosphere

Chapter 2 | Unifying the complex, complementary and conflicting approaches to AI ethics in standards, guidelines, and declarations documents world-wide; why contextual outcomes matter for innovation ecotones

Chapter 3 | Context-based, User Experience Design For AI in Ecotone Environments

Chapter 4 | Ground-truthing, Co-design and Building an Al Design Ethics Community of Excellence

Chapter 5 | Analysis of Qualitative and Quantitative Design Inputs

Chapter 6 | Empathy Through Design: The Design Ethics For AI Toolkit

Chapter 7 | Measuring Impacts and Next Steps

Research Design and Methodology


This inquiry is designed using user experience (UX) informed Agile approaches, matured through an elastic, innovation-ecotone model (Pendleton-Julian, 2009) of engagement and problem solving. This approach means that phase stages will overlap, and that goals will flex and adjust with data findings, engagement feedback and ongoing developments in the field. The process is purposely, openly and transparently iterative so as to help foster trust in AI and the AI ethics design process toolkit, and empathy for humans using AI.

Data Sources

Data utilized includes the text of current and emerging AI ethics standards, guidelines and declarations, outputs of Natural Language Processing and Word Clustering, qualitative and quantitative measures drawn from inputs including social media engagements, interviews, discussions and UX co-design sessions.

Project Phases

Phase One | Declaring project intention and outlining the accountable, responsible and transparent approach. Ethics approval will be established at this stage.

Phase Two | Preliminary unification and deconfliction of current and emerging AI ethics standards, guidelines and declarations, using Python, NLP, word clustering maps and/or R.
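The deconfliction step in Phase Two can be sketched in pure Python. The principle statements below are hypothetical stand-ins, not quotes from any actual standard; a full implementation would use richer NLP tooling, but simple bag-of-words cosine similarity illustrates how near-duplicate principles across documents could be surfaced for unification:

```python
# Sketch: surface near-duplicate AI ethics principles across documents
# by comparing bag-of-words cosine similarity. Illustrative only.
import math
import re
from collections import Counter

def bag_of_words(text):
    """Lowercase, tokenize, and count the words in a principle statement."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical principle statements from three standards documents.
principles = {
    "Doc A / P1": "AI systems must be transparent and explainable to users.",
    "Doc B / P3": "Systems using AI should be explainable and transparent for all users.",
    "Doc C / P7": "Developers bear responsibility for harms caused by deployment.",
}

vectors = {k: bag_of_words(v) for k, v in principles.items()}
keys = list(vectors)
# Flag pairs of principles similar enough to be candidates for unification.
for i in range(len(keys)):
    for j in range(i + 1, len(keys)):
        sim = cosine_similarity(vectors[keys[i]], vectors[keys[j]])
        if sim > 0.5:
            print(f"{keys[i]} ~ {keys[j]}: candidate for unification ({sim:.2f})")
```

In practice the same pairwise-similarity pass, run over the full corpus of standards texts, would feed the word clustering maps named above.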

Phase Three | Active Outreach, Inputs Gathering and Community Building.

Phase Four | Collaborative AI Ethics Design Process co-design. The preliminary toolkit will be published online at this point.

Phase Five | Analysis. While the qualitative and quantitative inputs will dictate approaches, it is currently anticipated that a Fruchterman–Reingold visualization algorithm, and/or LEMAN geometric deep learning will be used to identify the patterns of connection between the various nodes.
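As a minimal sketch of how the Fruchterman–Reingold algorithm identifies patterns of connection, the following pure-Python implementation lays out a small toy graph so that connected nodes cluster together. The node names are hypothetical stand-ins; a production analysis would use a dedicated graph library:

```python
# Sketch: Fruchterman-Reingold force-directed layout, illustrating how
# Phase Five could position nodes (standards documents, shared principles)
# so that connected nodes draw together. Toy data, illustrative only.
import math
import random

def fruchterman_reingold(nodes, edges, iterations=50, width=1.0, height=1.0, seed=42):
    """Return {node: (x, y)} positions after force-directed iteration."""
    rng = random.Random(seed)
    pos = {n: (rng.random() * width, rng.random() * height) for n in nodes}
    k = math.sqrt((width * height) / len(nodes))  # ideal edge length
    t = width / 10.0  # "temperature" capping per-step displacement
    for _ in range(iterations):
        disp = {n: [0.0, 0.0] for n in nodes}
        # Repulsive force between every pair of nodes.
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
                dist = max(math.hypot(dx, dy), 1e-9)
                f = k * k / dist
                disp[u][0] += dx / dist * f; disp[u][1] += dy / dist * f
                disp[v][0] -= dx / dist * f; disp[v][1] -= dy / dist * f
        # Attractive force along each edge.
        for u, v in edges:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            dist = max(math.hypot(dx, dy), 1e-9)
            f = dist * dist / k
            disp[u][0] -= dx / dist * f; disp[u][1] -= dy / dist * f
            disp[v][0] += dx / dist * f; disp[v][1] += dy / dist * f
        # Move each node, capped by the cooling temperature.
        for n in nodes:
            dx, dy = disp[n]
            d = max(math.hypot(dx, dy), 1e-9)
            step = min(d, t)
            pos[n] = (pos[n][0] + dx / d * step, pos[n][1] + dy / d * step)
        t *= 0.95  # cool down so the layout settles
    return pos

# Hypothetical toy graph: two standards documents sharing a principle.
nodes = ["IEEE EAD", "Montreal", "Transparency", "Accountability"]
edges = [("IEEE EAD", "Transparency"), ("Montreal", "Transparency"),
         ("IEEE EAD", "Accountability")]
layout = fruchterman_reingold(nodes, edges)
```

The cooling schedule and iteration count are tunable; the qualitative result is that heavily shared principles pull their documents into visible clusters.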

Phase Six | Reflection and adjustments.

Phase Seven | AI Ethics Design Toolkit launch.

Data Inputs — And Impact — Through Engagement

This inquiry will build trust in AI.

It will build trust in AI by fostering empathy and understanding among those who design and deploy AI systems and the humans who are key components of them.

It will build trust in AI by creating a collaborative, participatory AI ethics design process in which users, designers, developers, experts and technologists work together to forge the path ahead in a process which itself is accountable, responsible and transparent. By creating a formalized ethics-based design toolkit through this engagement process this project will model the same approaches to AI ethics development needed in AI systems development and deployment.

Data will be gathered across each design phase through literature reviews, during elastic design through broad onlife engagement feedback and Agile, UX-styled co-design sessions. Both qualitative and quantitative analysis, assisted by NLP, machine learning and word clustering will augment all the inputs in the process from the literature review, to the co-design sessions, to the engagement events themselves.

Engagement is a vital process in this inquiry and key to its success. Engagements will be the source for much of the data collected and will guide the use of that data to build the AI ethics design toolkit while simultaneously building an AI design ethics community of practice.

Engagement Plan

Developing a design process to help deliver accountable, responsible and transparent AI must in and of itself be accountable, responsible and transparent. This approach requires collaboration through elastic, responsive engagement (Pendleton-Julian, 2009) — not mere consultation. While utilizing technical skills in programming and design, the design process itself and the toolkit must be understandable and usable by non-technical experts. The design process must be able to simplify complex processes and information to foster clear understanding and, vitally, build trust in AI, technology, and the process itself (Floridi, Cowls, Dignum et al., 2018).

Engaging the wider, global AI community requires an extensive, ongoing and flexibly iterative communications approach to gather inputs that build trust. The engagement design model employed builds on the Agile design processes already familiar in software development, as well as on emerging AI ethics design principles.

As the University of Bath ART-AI program is an emerging centre of excellence, placing this inquiry within its wider context requires a focus both on this particular research project and on the projects which form the corpus of inquiry in the Impact of AI on Society project.

Social Media Engagement

Following the user-centred iterative processes proposed by various designers (Couldry 2003, Cutler, 2018, Pendleton-Julian, 2009, Zeller & Cortise, 2020, Findeli, 2018) in response to complex, transitional and/or AI deployment environments, “onlife” style engagement is a key tool for this inquiry. Connecting with experts and non-experts alike through a combination of online, social media based, academic and community-based outreach actions throughout the infosphere is both a key data source and a key outcome for this inquiry.

Key social media-based engagement channels will include:


ART-AI Design Ethics Facebook

ART-AI Podcast & Vlog

ART-AI Design Ethics LinkedIn

#AIEthicsDesign Instagram

ART-AI Design Ethics Github

#AIEthicsDesign Blog

ART-AI Design Ethics on Wikipedia

ART-AI Design Ethics Blog on Medium.com

Academic Engagement

Journal presentations | Project chapters will be submitted to relevant journals such as Philosophy of Technology, Science, Technology, & Human Values, Technical Communication Quarterly and Design Issues.

ART-AI Annual Conference | The ART-AI program has announced an annual conference which brings students together with peers and experts from the wider field.

Conferences | Project findings will be proposed to relevant conferences including NeurIPS, AI For Good, FAT and All Tech is Human. Specific focus in years 2 and 3 will be on attending Chinese AI ethics conferences.

Co-drafting | Opportunities for co-drafting papers and conference submissions with potential partners (listed below) will be explored.

ART-AI Masterclasses | The ART-AI program calls for students to participate in yearly masterclasses with senior experts.

Interviews, Twitter chats, Presentations and Knowledge Exchanges | Additional engagements with academic partners outside the traditional realms of academic publication and presentation will help connect powerful thinkers and research excellence with the diverse and collaborative developing AI design ethics community of practice, proactively extending our impact, influence and partnerships far beyond academic institutions.

Community Engagement | Multiple direct engagements with the wider community that bring participants directly into process co-design will be fostered.

Coffee with a computer scientist

AI Challenge Co-Creation Workshop

AI Design Ethics public lecture series

Design Display

UX public co-design sessions

ART-AI Placement and Research Visit

Community AI Design Ethics co-design training

IEEE P7000 Standards Series participation (underway)

Partnership Outreach | Partnerships will be sought with:

Digital Ethics Lab at the Oxford Internet Institute

IBM’s AI Design Group

Turing Institute

IEEE P7000 Series AI Ethics Standards participants

Stanford Centre for Human Centered Artificial Intelligence

Laura Sherling and Andrew DeRosa of Pratt Design Ethics group

Government of Canada Digital Service

Chen Xiaoping, professor and director of the Robotics Laboratory at the University of Science and Technology of China

Office of Denmark’s Technology Ambassador

ART-AI fellow cohort students, faculty and industry partners


Year 1 MRes

Course work, skills building.

Launch of Social Media, Academic and Community Engagements including ART-AI Design Ethics Blog and Podcast.

Proposal completed.

AI Ethics Standards analysis and unification process starts.

Conference attendance.

Year 2 PhD

AI Ethics Standards analysis and unification process completed.

Research visits.

Social Media, Academic and Community Engagements continue.

Inputs analysis.

Preliminary draft AI Design Ethics Toolkit pre-beta published.

Improvements review.

Conference attendance.

Year 3

Inputs analysis continues.

Social Media, Academic and Community Engagements continue.

AI Design Ethics Toolkit beta published.

AI Design Ethics Toolkit beta review.

AI Design Ethics Toolkit Improvements.

Dissertation Drafting Begins.

AI Design Ethics Toolkit pre1.0 release.

Conference attendance.

Year 4

Dissertation drafting, review, edits.

Social Media, Academic and Community Engagements continue.

AI Design Ethics Toolkit 1.0 release.

Follow-up and review.

Conference attendance.

Dissertation drafting completed.

Dissertation edits and defence.


  1. Angwin, Julia et al. Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks. ProPublica, 2016. Accessed March 2019: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  2. Asimov, Isaac. I, Robot. Spectra, 2004.
  3. Association for Computing Machinery. Influential Computing Researchers and Practitioners Announce Steps to Prevent Algorithmic Bias: ACM US Public Policy Council Issues Seven Principles to Foster Algorithmic Transparency and Accountability. ACM. 2017. Accessed March 2019: https://www.acm.org/media-center/2017/january/usacm-statement-on-algorithmic-accountability
  4. Association for Computing Machinery. Statement on Algorithmic Transparency and Accountability. January 2017. Accessed March 2019: https://www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf
  5. Association for Computing Machinery. ACM Code of Ethics and Professional Conduct. June 2018. Accessed March 2019: https://ethics.acm.org/
  6. Awad, Edmond, Sohan Dsouza et al. The Moral Machine Experiment. Nature, volume 563, pages 59–64 (2018).
  7. Bachnio, Agata, Aneta Prezpiorka & Igor Pantic. “Association between Facebook addiction, self-esteem and life satisfaction: A cross-sectional study,” Computers in Human Behavior, 2017. (https://www.sciencedirect.com/science/article/pii/S0747563215302041)
  8. Bar-Haim, Yair, Lamy, Dominique, Pergamin, Lee,Bakermans-Kranenburg, Marian J.,van IJzendoorn, Marinus H. Threat-related attentional bias in anxious and nonanxious individuals: A meta-analytic study. Psychological Bulletin, Vol 133(1), Jan 2007, 1–24
  9. Beck, Kent et al. Agile Manifesto , 2001. http://agilemanifesto.org/
  10. Beck, Kent et al. Resources, Agilealliance.com. Accessed March 2019: http://Agilealliance.com
  11. Bloomberg, Jason. Don’t Trust AI? Time To Open the AI Black Box. Forbes, 2018. Accessed March, 2019: https://www.forbes.com/sites/jasonbloomberg/2018/09/16/dont-trust-artificial-intelligence-time-to-open-the-ai-black-box/#6b5651933b4a
  12. Bollen, Johan, Huina Mao, and Alberto Pepe. “Modeling public mood and emotion: Twitter sentiment and socio-economic phenomena.” Icwsm 11 (2011): 450–453.
  13. Borak, Masha. China wants to make its own rules for AI ethics. Abacusnews.com. March 2019: https://www.abacusnews.com/future-tech/china-wants-make-its-own-rules-ai-ethics/article/3001025
  14. Bostrom, Nick Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford, 2014.
  15. Boyes, Hugh, Bil Halq et al. The industrial internet of things (IIoT): An analysis framework. Computers in Industry, 101, Jan 2018. https://doi.org/10.1016/j.compind.2018.04.015
  16. Bradshaw, Susan & Philip N. Howard. Challenging Truth and Trust: A Global Inventory of Organized Social Media Manipulation. Computational Propaganda Research Project, Oxford, 2018.
  17. Broadbent, Stefana & Claire Lobet-Maris. Towards a Grey Ecology. The Onlife Manifesto. Luciano Floridi, ed. Springer, 2015.
  18. Bronstein, Michael. LEMAN — Deep LEarning on MANifolds and graphs. Abstract, 2017. Accessed October 2018: https://search.usi.ch/en/projects/952/leman-deep-learning-on-manifolds-and-graphs
  19. Bronstein, Michael. Geometric deep learning. Presentation, 2018. https://www.dropbox.com/s/oj3olyzxvnchrqs/SGP%202018.pdf?dl=0
  20. Bryson, Joanna J. et al. Affective Computing. Ethically Aligned Design. First Edition. A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. IEEE. 2017. pp. 90–109.
  21. Bryson, Joanna J. Artificial Intelligence and Pro-Social Behaviour. Collective Agency and Cooperation in Natural and Artificial Systems. Explanation, Implementation and Simulation. Catrin Misselhorn, ed. Springer, 2015. pp.281–306.
  22. Bryson, Joanna J. et al. Engineering and Physical Sciences Research Council Principles of Robotics: Regulating Robots in the Real World. Sept. 2011. Accessed March, 2019: https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/
  23. Bryson, Joanna J. Robots Should Be Slaves. Close Engagements with Artificial Companions: Key social, psychological, ethical and design issue, Yorick Wilks (ed.), John Benjamins (chapter 11, pp 63–74) 2010.
  24. Burns, Judith. “Fake news harms children’s self-esteem and trust, say MPs” BBC News, June 2018. Accessed October 2018. https://www.bbc.com/news/education-44454844
  25. Campolo, Alex. Madelyn​ ​Sanfilippo, Meredith​ ​Whittaker and Kate Crawford. AI​ ​Now​ ​2017​ ​Report. AI Now, 2017.
  26. Chander, Ajay et al. Working with Beliefs: AI Transparency in the Enterprise. ExSS March, 2018.
  27. Choudhury, Shushman, Michelle Lee, and Andrey Kurenkov. In Favor of Ethical Best Practices in AI Research. The Gradient. Feb 2019. Accessed March, 2019: https://thegradient.pub/in-favor-of-developing-ethical-best-practices-in-ai-research/
  28. Copeland, Jack. The Mathematical Objection: Turing, Gödel, and Penrose on the Mind. Lecture, 2008.
  29. Couldry, Nick (2003) Digital divide or discursive design? On the emerging ethics of information space. Ethics and information technology, 5 (2), pp. 89–97.
  30. Coviello L, Sohn Y, Kramer ADI, Marlow C, Franceschetti M, Christakis NA, et al. (2014) Detecting Emotional Contagion in Massive Social Networks. PLoS ONE 9(3): e90315. https://doi.org/10.1371/journal.pone.0090315
  31. Corbett-Davies, Sam, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In Proceedings of KDD ’17, Halifax, NS, Canada, August 13–17, 2017, 10 pages. DOI: 10.1145/3097983.3098095
  32. Cubitt, Sean. Digital Aesthetics. SAGE, London 1998.
  33. Cutler, Adam, Pribić and Humphrey. Everyday Ethics for Artificial Intelligence: A practical guide for designers & developers. IBM Watson, 2018. Accessed March, 2019: https://www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf
  34. Craglia, Massimo et al. Artificial Intelligence: A European Perspective European Commission. Publications Office of the European Union. Luxembourg, 2018.
  35. Crawford, Kate, Vladan Joler. Anatomy of an AI System: The Amazon Echo as an anatomical map of human labor, data and planetary resources. Accessed March, 2019. Anatomyof.ai.
  36. Darczewska, Jolanta & Piotr Żochowski. Russia’s ‘Activity’ toward the West — Confrontation by Choice. Russian Analytical Digest, 212. Dec, 2017.
  37. Dewandre, Nicole. Rethinking the Human Condition in a Hyperconnected Era: Why Freedom is Not About Sovereignty But About Beginnings. The Onlife Manifesto. Luciano Floridi, ed. Springer, 2015.
  38. Dignum, Virginia. There is no AI race — and if there is, it’s the wrong one to run. ALLAI. March, 2019. Accessed March 2019. http://allai.nl/there-is-no-ai-race/
  39. Dominici, P. For an inclusive innovation. Healing the fracture between the human and the technological in the hypercomplex society. Eur J Futures Res (2018) 6: 3. https://doi.org/10.1007/s40309-017-0126-4
  40. Economou, Nicholas. A ‘principled’ artificial intelligence could improve justice. ABA. 2017. Accessed March, 2019: http://www.abajournal.com/legalrebels/article/a_principled_artificial_intelligence_could_improve_justice
  41. Ehrenfeld, J.M. WannaCry, Cybersecurity and Health Information Technology: A Time to Act. J Med Syst (2017) 41: 104. https://doi.org/10.1007/s10916-017-0752-1
  42. Ess, Charles. The Onlife Manifesto: Philosophical Backgrounds, Media Usages, and the Futures of Democracy and Equality. The Onlife Manifesto. Luciano Floridi, ed. Springer, 2015.
  43. Fang, Wu and Bernardo A. Huberman. Novelty and collective attention. PNAS November 6, 2007 104 (45) 17599–17601; https://doi.org/10.1073/pnas.0704916104
  44. Ferrara E, Yang Z (2015) Measuring Emotional Contagion in Social Media. PLoS ONE 10 (11): e0142390. doi:10.1371/journal.pone.0142390
  45. Findeli, Alain, Denis Brouillet, Sophie Martin, Christophe Moineau & Richard Tarrago. Research Through Design and Transdisciplinarity: A Tentative Contribution to the Methodology of Design Research. 2008.
  46. Findeli, Alain. Rethinking Design Education for the 21st Century: Theoretical, Methodological, and Ethical Discussion. Design Issues: Volume 17, Number 1 Winter 2001. Pages 5–17.
  47. Floridi, Luciano. The Fourth Revolution: How the Infosphere is Reshaping Human Reality. Oxford University Press, 2014.
  48. Floridi, Luciano, ed. The Onlife Manifesto. Springer, 2015.
  49. Floridi, Luciano. Children of the Fourth Revolution. Philos. Technol. (2011) 24:227–232
  50. Floridi, Luciano. Hyperhistory and the Philosophy of Information Policies. Philos. Technol. (2012) 25:129–131
  51. Floridi, Luciano. Artificial Intelligence, Deepfakes and a Future of Ectypes. Philos. Technol. (2018) 31:317–321
  52. Floridi, Luciano. Soft Ethics and the Governance of the Digital. Philos. Technol. (2018) 31: 1. https://doi.org/10.1007/s13347-018-0303-9
  53. Floridi, Luciano. What the Near Future of Artificial Intelligence Could Be. Philosophy and Technology. March, 2019.
  54. Floridi, Luciano, Josh Cowls et al. An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations (Preprint). Minds and Machines, December 2018.
  55. Fox, Elaine, Riccardo Russo & Kevin Dutton. “Attentional bias for threat: Evidence for delayed disengagement from emotional faces”. Cognition and Emotion, pp. 355–379. Published online: 09 Sep 2010. https://doi.org/10.1080/02699930143000527
  56. Future of Life Institute. Asilomar principles for beneficial AI. 2017. Accessed March 2019: https://futureoflife.org/ai-principles/
  57. Future World of Work. Top 10 Principles for Ethical Artificial Intelligence. UNI Global Union, 2017. Accessed March 2019: http://www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf
  58. Gödel, Kurt. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 37, 173–198 (1931). Translation in S. Feferman et al., eds., Kurt Gödel. Collected Works. Volume I: Publications 1929–1936. New York: Oxford University Press, 1986, pp. 116–195.
  59. Greene, Joshua et al. An fMRI Investigation of Emotional Engagement in Moral Judgment. Science, Vol. 293, 14 Sep 2001, pp. 2105–2108.
  60. Gunning, David. Explainable Artificial Intelligence Update. DARPA, 2017.
  61. Hale SA, John P, Margetts H, Yasseri T. How digital design shapes political participation: A natural experiment with social information. PLoS ONE 13(4): e0196068. https://doi.org/10.1371/journal.pone.0196068, 2018.
  62. Haythornthwaite, Carol. Strong, Weak, and Latent Ties and the Impact of New Media. The Information Society: An International Journal. Pages 385–401, Published online: 19 Jan 2011. https://doi.org/10.1080/01972240290108195
  63. Holt, David. United States of America vs. Elena Khusyaynova. Accessed October 2018 at https://www.lawfareblog.com/document-justice-department-charges-russian-national-2018-election-meddling, 2018.
  64. House of Lords Select Committee on Artificial Intelligence. AI in the UK: ready, willing and able? Report of Session 2017–2019. April 2018. Accessed March, 2019: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf
  65. Isaac, M. & Shane, S. Facebook’s Russia-Linked Ads Came in Many Disguises. The New York Times, 2017.
  66. Isaac, M. & Wakabayashi, D. Russian Influence Reached 126 Million Through Facebook Alone. The New York Times, 2017.
  67. Japanese Society For Artificial Intelligence. Ethical Guidelines. 2017. Accessed March, 2019: http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf
  68. Zhao, Jichang, Junjie Wu & Ke Xu. Weak ties: Subtle role of information diffusion in online social networks. Physical Review E, 2010.
  69. Jing, Meng. China’s tech billionaires back ethical rules to guide development of AI and other technologies. South China Morning Post. March, 2019.
  70. Kaplan, Andreas M. & Michael Haenlein. The early bird catches the news: Nine things you should know about micro-blogging. Business Horizons (2011) 54, 105–113. doi:10.1016/j.bushor.2010.09.004
  71. Khan, Matthew. Document: Justice Department Charges Russian National for 2018 Election Meddling. Lawfare. https://www.lawfareblog.com/document-justice-department-charges-russian-national-2018-election-meddling, 2018.
  72. Kirsch, Alexandra. Explain to whom? Putting the User in the Center of Explainable AI. Proceedings of the First International Workshop on Comprehensibility and Explanation in AI and ML 2017 co- located with 16th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2017), 2017, Bari, Italy. <hal-01845135>
  73. Klynge, Casper. Diplomacy in the Age of GAFA: The Exponential View Podcast with Azeem Azhar. December 2018.
  74. LeBar, Mark and Slote, Michael, “Justice as a Virtue”, The Stanford Encyclopedia of Philosophy (Spring 2016 Edition), Edward N. Zalta (ed.), https://plato.stanford.edu/archives/spr2016/entries/justice-virtue/.
  75. Linvill, Darren & Patrick L. Warren. Troll Factories: The Internet Research Agency and State-Sponsored Agenda Building. (pre-review copy, 2018). Accessed here: http://pwarren.people.clemson.edu/Linvill_Warren_TrollFactory.pdf
  76. Liotsiou, Mimi (Dimitra) et al. Junk News Aggregator. Computational Propaganda Project, Oxford Internet Institute, Oxford. 2018. https://newsaggregator.oii.ox.ac.uk/
  77. Margetts, Helen. Professor Helen Margetts: “The Data Science of Politics”, lecture at the Alan Turing Institute. YouTube, 2016. Available at: https://www.youtube.com/watch?v=LH3vvA7PL1U Accessed September 2018.
  78. Margetts, Helen. The Computational Social Science of (Turbulent) Politics. Presentation. GESIS Computational Social Science Winter Symposium. Nov 2016. Accessed Nov 2018. https://www.gesis.org/fileadmin/upload/events/CSS_Wintersymposium/keynotes/margetts_cssws16.pdf
  79. Margetts, Helen, Peter John, Scott Hale & Taha Yasseri. Political Turbulence: How Social Media Shape Collective Action. Princeton University Press, Princeton, 2015.
  80. Margaret Boden, Joanna Bryson, Darwin Caldwell, Kerstin Dautenhahn, Lilian Edwards, Sarah Kember, Paul Newman, Vivienne Parry, Geoff Pegman, Tom Rodden, Tom Sorrell, Mick Wallis, Blay Whitby & Alan Winfield (2017) Principles of robotics: regulating robots in the real world, Connection Science, 29:2, 124–129, DOI: 10.1080/09540091.2016.1271400
  81. Kim, Mikyoung & Michael Chou. Civic Spaces in an Age of Hyper-Complexity: From Protest to Reverie, Outline and Abstract. Harvard University Graduate School of Design, 2017. Accessed at: https://www.gsd.harvard.edu/course/civic-spaces-in-an-age-of-hyper-complexity-from-protest-to-reverie-fall-2017/
  82. Mittelstadt, Brent. Ethical auditing for automated decision-making, ongoing project. Oxford Internet Institute. Project description. https://www.oii.ox.ac.uk/research/projects/ethical-auditing-for-automated-decision-making/ Accessed Nov, 2018.
  83. Mittelstadt, Brent. Introduction to Data Ethics. The Alan Turing Institute YouTube channel. https://youtu.be/qVo9oApl4Rs Accessed Nov, 2018.
  84. Mohurle, Savita; Patil, Manisha. A brief study of Wannacry Threat: Ransomware Attack 2017. International Journal of Advanced Research in Computer Science; Udaipur Vol. 8, Iss. 5, (May 2017).
  85. Murphy, Robin R. and David D. Woods. Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 2009.
  86. Narayanan, Vidya, Vlad Barash, John Kelly, Bence Kollanyi, Lisa-Maria Neudert & Phillip N. Howard. Polarization, Partisanship and Junk News Consumption over Social Media in the US. Oxford Computational Propaganda Project, Oxford, 2018. http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/02/Polarization-Partisanship-JunkNews.pdf Accessed, October 2018.
  87. National Literacy Trust. Fake news and critical literacy: The final report of the Commission on Fake News and the Teaching of Critical Literacy in Schools. London, 2018.
  88. Naughton, John. Even algorithms are biased against black men: A study on offenders in Florida refutes the notion that computers are more objective than people. The Guardian. June 2016.
  89. Nimmo, Ben. How Robots Joined the Battle in the Gulf. Columbia Journal of International Affairs. https://jia.sipa.columbia.edu/robot-wars-how-bots-joined-battle-gulf. 2018.
  90. Nimmo, Ben and Kanishk Karan, for the Digital Forensic Research Lab. #TrollTracker: Favorite Russian Troll Farm Sources. Medium.com. Accessed October 2018: https://medium.com/dfrlab/trolltracker-favorite-russian-troll-farm-sources-48dc00cdeff
  91. Niedermeyer, Edward. 10 Lessons From Uber’s Fatal Self-Driving Car Crash. The Drive. March, 2019. Accessed March, 2019: https://www.thedrive.com/tech/27023/10-lessons-from-ubers-fatal-self-driving-car-crash
  92. Castellano, Orge. Social Media Giants Are Hacking Your Brain — This is How. Medium, 2017. https://medium.com/@orge/your-brain-is-being-hacked-by-social-media-584ac1d2083c
  93. Pendleton-Jullian, Ann. Design Innovation and Innovation Ecotones. Ohio State University. Accessed at: https://fourplusone.files.wordpress.com/2010/03/apj_paper_14.pdf 2010
  94. Pichai, Sundar. AI at Google: Our Principles. June, 2018.
  95. Pribić, Milena. Everyday Ethics for Artificial Intelligence. Medium.com. Sept. 2018. Accessed March, 2019: https://medium.com/design-ibm/everyday-ethics-for-artificial-intelligence-75e173a9d8e8
  96. Putnam, Tonya L. & David D. Elliot. International Responses to Cyber Crime. Hoover Institution. http://media.hoover.org/sites/default/files/documents/0817999825_35.pdf Accessed October 2018.
  97. Roeder, Oliver. Why We’re Sharing 3 Million Russian Troll Tweets. FiveThirtyEight. July, 2018. https://fivethirtyeight.com/features/why-were-sharing-3-million-russian-troll-tweets/
  98. Samuel, Alexandra. To Fix Fake News, Look To Yellow Journalism. JSTOR Daily, 2016. Accessed October 2018: https://daily.jstor.org/to-fix-fake-news-look-to-yellow-journalism/
  99. Schroeder, Ralph. “Ethics and Social Issues in Shared Virtual Environments Revisited”. Cyberselves Symposium, podcast, 2015.
  100. Singh, M. “U-SCRUM: An Agile Methodology for Promoting Usability,” Agile 2008 Conference, Toronto, ON, 2008, pp. 555–560. doi: 10.1109/Agile.2008.33
  101. Vosoughi, Soroush, Deb Roy & Sinan Aral. The spread of true and false news online. Science, 09 Mar 2018: Vol. 359, Issue 6380, pp. 1146–1151. DOI: 10.1126/science.aap9559
  102. Spinney, Laura. How Facebook, fake news and friends are warping your memory. Nature, 2017. https://www.nature.com/news/how-facebook-fake-news-and-friends-are-warping-your-memory-1.21596
  103. Sutherland, Rory. Reliable signals in a post-truth world. Barb, 2017. Accessed March, 2019: https://www.barb.co.uk/viewing-report/reliable-signals-in-a-post-truth-world/
  104. Taddeo, Mariarosaria. The Limits of Deterrence Theory in Cyberspace. Philos. Technol. (2018) 31:339–355.
  105. Taddeo, Mariarosaria. Deterrence and Norms to Foster Stability in Cyberspace. Philosophy & Technology (2018) 31:323–329
  106. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Ethically Aligned Design: A Vision For Prioritizing Wellbeing With Artificial Intelligence And Autonomous Systems, Version 1. IEEE, 2016. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
  107. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, First Edition. IEEE, 2019. https://standards.ieee.org/content/ieee-standards/en/industry-connections/ec/autonomous-systems.html
  108. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2. IEEE, 2017. http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html.
  109. Turing, Alan M. Computing Machinery and Intelligence. Mind, Vol. LIX, No. 236. 1950.
  110. Twitter Policy. Update on Twitter’s Review of the 2016 U.S. Election. (2018). Available at: https://blog.twitter.com/official/en_us/topics/company/2018/2016-election-update.html. (Accessed: 30 January 2018)
  111. Unger, Russ & Carolyn Chandler. A Project Guide to UX Design, Second Edition. New Riders, Berkeley California, 2012.
  112. United States Air Force, Cyberspace and Information Operations Study Center. What are Information Operations. http://www.au.af.mil/info-ops/what.htm 2006. Accessed 2018.
  113. University of Montréal. Montréal Declaration for Responsible AI. University of Montréal. Montréal, 2017. Accessed March 2019: https://www.montrealdeclaration-responsibleai.com/
  114. Bakir, Vian & Andrew McStay (2018). Fake News and The Economy of Emotions. Digital Journalism, 6:2, 154–175. DOI: 10.1080/21670811.2017.1345645
  115. Volokh, Eugene. Chief Justice Robots. Reason.com. Jan, 2019. Accessed March 2019: https://reason.com/volokh/2019/01/14/chief-justice-robots
  116. Von Goethe, Johann Wolfgang. Theory of Colours. 1810. Translated from German, with notes by Charles Lock Eastlake, 1840. Online edition. Accessed here: https://theoryofcolor.org/Theory+of+Colors. November 2018.
  117. Wachter, Sandra, Brett Mittelstadt and Chris Russell. Counterfactual Explanations Without Opening The Black Box: Automated Decisions And The GDPR. Oxford Internet Institute. Oxford, 2018.
  118. Williams, James. Stand Out of Our Light: Freedom and Persuasion in the Attention Economy. Excerpts. Nine Dots Prize. Accessed at: https://ninedotsprize.org/extracts-stand-light-freedom-persuasion-attention-economy/ 2017.
  119. Woodward, Ashley. For an Aesthetic Definition of Information, presentation. Abertay University, 2017. Accessed at: https://scot-cont-phil.org/files/2017/03/For-an-Aesthetic-Definition-of-Information.pdf
  120. World Economic Forum. 3.2 Assessing the Risk of Artificial Intelligence. Global Risks Report 2017. 2017. Accessed March 2019: http://reports.weforum.org/global-risks-2017/part-3-emerging-technologies/3-2-assessing-the-risk-of-artificial-intelligence/
  121. Winfield, Alan. A Round Up of Robotics and AI ethics. Alan Winfield’s Web Log. Dec 2017. Accessed March, 2019: http://alanwinfield.blogspot.com/2017/12/a-round-up-of-robotics-and-ai-ethics.html
  122. Xinhua. AI association to draft ethics guidelines. Xinhuanet. January, 2019. Accessed March, 2019: http://www.xinhuanet.com/english/2019-01/09/c_137731216.htm
  123. Yasseri, Taha & Hale, Scott & Margetts, Helen. Modeling the Rise in Internet-based Petitions. Physics and Society, 2013. arXiv:1308.0239v3
  124. Zhao, Dejin, Mary Beth Rosson, Tara Matthews & Thomas Moran. Microblogging’s impact on collaboration awareness: A field study of microblogging within and between project teams. International Conference on Collaboration Technologies and Systems (CTS). 10.1109/CTS.2011.5928662 https://ieeexplore.ieee.org/document/5928662/
  125. Zhang, Phoebe. China’s top AI scientist drives development of ethical guidelines. South China Morning Post. Jan 2019. Accessed March 2019: https://www.scmp.com/news/china/science/article/2181573/chinas-top-ai-scientist-drives-development-ethical-guidelines
  126. Zuckerberg, Mark. Facebook’s letter from Mark Zuckerberg — full text. The Guardian. 2012. Accessed October 2018: https://www.theguardian.com/technology/2012/feb/01/facebook-letter-mark-zuckerberg-text

Source: Artificial Intelligence on Medium
