Blog: Weeknotes S1E8
My name is Jess. I am AI Lead at the UK Department of Health and Social Care (DHSC) (in the process of becoming NHSX). I'm also an MSc student at the University of Oxford's Internet Institute (OII) and a Research Assistant at the OII's Digital Ethics Lab (DELab). I am supervised by Professor Luciano Floridi.
This has been an absolutely mammoth week. Accurately summed up by this tweet:
Anyway, here goes:
Things that happened
- We launched our State of the Data-Driven Ecosystem survey. This has been my baby for the last couple of months: thinking about the information we need to cut through some of the hype in the 'AI for Health' conversation, and how we might go about getting it. This is not at all to imply that I think the survey is the only answer to that question, merely that it is a source of information that can help us start to develop an evidence base for decisions about what we do next, from the development of standards, policies and assurance processes to where we invest. It won't stay open for long (we're hoping to have early results ready for CogX and London Tech Week in mid-June) so it's definitely going to be a challenge to get a good range of responses. But, provided we are cognisant of the limitations and don't try to oversell the results or make decisions based on insignificant or non-reproducible results, it should help us move our plans along significantly. Indra and I blogged about this in more detail here, and I put the full list of survey questions here.
- One HealthTech Oxford got a Twitter account thanks to our lovely social media volunteer, and we got people to volunteer for some other roles too, including: comms; speaker bookings; venue bookings; managing partnership opportunities; seeking sponsorship; and funding applications. Having dedicated people who are committed to making the absolute most of the Hub is vital if we are to achieve our two aims of:
- Making stuff happen: HealthTech is a wonderfully exciting world to operate in, but it is also sometimes MADDENINGLY FRUSTRATING. The gap between possibility and on-the-ground reality seems huge and daunting. So we would very much like to focus on making stuff happen: getting productive conversations going, understanding challenges, and being able to suggest solutions.
- Fully embracing diversity: Being a woman in tech is awesome (obviously), but it's also an exclusionary label that not everyone identifies with, and it groups us all together rather than celebrating the fact that difference is what makes us strong. So we really want to focus on the ONE meaning everyONE: creating an inclusive community that brings everyone together behind a common goal (see above) whilst allowing their individuality to really shine through.
- Very excitingly we (me, Luciano, Libby and Anat) put the pre-print of our “From What to How. An Overview of AI Ethics Tools, Methods and Research to Translate Principles into Practices” paper on arXiv. This has taken a lot of work but it’s also been really fun, and I hope it’s the start of an interesting conversation in the Responsible AI community. It’s definitely where I am trying to get us to in health, so that we can actually start helping ensure people develop, deploy and use AI for Healthcare in a way that is ethical and safe, rather than just signalling that this is what we want to happen. This is the full abstract of the paper:
The debate about the ethical implications of Artificial Intelligence dates from the 1960s (Wiener, 1960; Samuel, 1960). However, in recent years symbolic AI has been complemented and sometimes replaced by (Deep) Neural Networks and Machine Learning (ML) techniques. This has vastly increased its potential utility and impact on society, with the consequence that the ethical debate has gone mainstream. Such a debate has primarily focused on principles — the 'what' of AI ethics (beneficence, non-maleficence, autonomy, justice and explicability) — rather than on practices, the 'how.' Awareness of the potential issues is increasing at a fast rate, but the AI community's ability to take action to mitigate the associated risks is still in its infancy. Therefore, our intention in presenting this research is to contribute to closing the gap between principles and practices by constructing a typology that may help practically-minded developers 'apply ethics' at each stage of the pipeline, and to signal to researchers where further work is needed. The focus is exclusively on Machine Learning, but it is hoped that the results of this research may be easily applicable to other branches of AI. The article outlines the research method for creating this typology, the initial findings, and provides a summary of future research needs.
The full typology is available here.
- On Monday, UCL PhDR UK hosted an event about AI in Healthcare, 'Making Algorithms That Work.' It was part panel discussion, part workshop, with a lot of audience participation, including table discussions about what 'transparency and ethics mean to you' and musical panel members, with Maxine asking two different members of the audience to come and join us and add new perspectives. I really enjoyed it. I rarely get to be on panels where I actually get to be properly nerdy and technical; often I have to stay at the super high level of policy. So it was a lot of fun to actually talk about the benefits of explainability techniques like LIME and SHAP; the myths that have evolved from equating transparency with accountability; and why ethical data sharing/use has got to come down to a fair return on investment, solving a real problem or genuinely improving an aspect of care/efficiency: 'don't do something with the data just because it's cool and you can.'
- On Wednesday I gave a lecture on 'Governing the Ecosystem for AI in Healthcare' to some 4th-year medics at Imperial College Medical School who have been taking a module in computer science. Although it was very strange to be on the other side of the lecturing podium, it was also a great conversation. I can't see how 'AI' will deliver any of the potential benefits for healthcare unless clinicians and patients are involved in its design. I don't think this means that everyone needs to understand exactly how machine learning works, but I think there need to be enough workforce champions with the skills to support the culture change that is needed. This is why I am really keen to support initiatives such as this one from Imperial. However, I did get asked the million-dollar question: 'if automated decision-making software makes the wrong diagnosis and something happens to the patient, who is responsible?' If you know the answer, stick it on a post-it note and send it to me please… The link to the full slide deck is here, but a few of my favourites are below:
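For anyone curious what I mean by explainability techniques like LIME: the core idea is to fit a simple, locally weighted surrogate model around a single black-box prediction, so the surrogate's coefficients tell you which features mattered for that prediction. Here is a minimal, pure-Python sketch of that idea (this is not the actual LIME library; the 'black box' risk model and the feature names are entirely invented for illustration):

```python
import math
import random

# Hypothetical black-box model: a risk score from two made-up features.
def black_box(age, bp):
    return 1 / (1 + math.exp(-(0.04 * age + 0.03 * bp - 6)))

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    A = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (A[r][3] - sum(A[r][c] * x[c] for c in range(r + 1, 3))) / A[r][r]
    return x

def lime_style_explanation(x0, n_samples=500, width=5.0, seed=0):
    """Fit a locally weighted linear surrogate around x0 (LIME's core idea)."""
    rng = random.Random(seed)
    rows, targets, weights = [], [], []
    for _ in range(n_samples):
        # Perturb the instance, query the black box, weight by proximity.
        x = [x0[0] + rng.gauss(0, width), x0[1] + rng.gauss(0, width)]
        d2 = sum((a - b) ** 2 for a, b in zip(x, x0))
        w = math.exp(-d2 / (2 * width ** 2))  # Gaussian proximity kernel
        rows.append([1.0, x[0], x[1]])        # intercept + two features
        targets.append(black_box(*x))
        weights.append(w)
    # Weighted least squares via the normal equations: (X'WX) beta = X'Wy
    XtWX = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights))
             for j in range(3)] for i in range(3)]
    XtWy = [sum(w * r[i] * t for r, t, w in zip(rows, targets, weights))
            for i in range(3)]
    return solve3(XtWX, XtWy)

bias, w_age, w_bp = lime_style_explanation([65.0, 120.0])
print(f"local surrogate: risk ~ {bias:.3f} + {w_age:.4f}*age + {w_bp:.4f}*bp")
```

The surrogate's coefficients recover the black box's local behaviour: the 'age' feature (weight 0.04 inside the model) comes out with a larger local coefficient than 'bp' (weight 0.03), which is exactly the kind of per-prediction insight these techniques give clinicians.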
Things I finished
- For a while we have been working with Future Advocacy on how (see, there's a theme to my work) developers can 'comply' with Principle 7 of our Code of Conduct. We've now completed the review, and this week I finished going over the feedback, incorporating it into our thinking and planning the next steps.
- I also got feedback from anonymous reviewers on another paper I have been working on with a collaborator about digital health, and so finished revising and re-submitting it. You're never quite sure how you're going to feel when someone comments on your work, especially if, like me, you put all your effort into everything you do, but I actually really enjoyed the process and I think the revised version is much better. Now it's just a case of waiting to see what happens.
Things I continued to work on
- I continued to work on my paper about the need to take a proactive 'digital ethics' approach to the governance challenges posed by digitising the NHS, so that the transition from an evaluation of what is morally good to what is politically feasible and legally enforceable (Floridi, 2018) happens before an ethical mistake leads to social rejection (Mittelstadt, 2019) and leaves the NHS unable to benefit from the so-called 'dual advantage' of an ethical approach to governance, where opportunities are capitalised on and risks mitigated (Floridi et al., 2018). I argue that in order to do this, we have to move away from focusing solely on impacts on the individual and take a systems approach to analysis. Here's a snippet:
Reflection on the ethical implications of medical intervention has been a feature of delivering medical care since antiquity (Mann, Savulescu, & Sahakian, 2016) and medical practitioners’ promise ‘to do no harm’ to their individual patients. As such the bioethical principles of beneficence, non-maleficence, justice and autonomy (Beauchamp & Childress, 2013) are well established in the medical literature and have recently been adopted (along with the new principle ‘explicability’) by the ‘ethical Artificial Intelligence (AI)’ community (Floridi et al., 2018a) in one of many attempts to encourage the development of algorithmic systems that are fair, accountable and transparent (Lepri et al., 2018). This coming together of bioethics and AI ethics is essential given the vast array of harms related to the potential for AI to: replicate or exacerbate bias and behave in unexpected risky ways (Wachter, Mittelstadt, & Floridi, 2017); alter the interaction between patients and healthcare professionals; change people’s perception of their responsibility in managing their own body (Verbeek, 2009); and, use hugely personal information to manipulate patient behaviour without their realising it (Berdichevsky & Neuenschwander, 1999). However, this focus on the bioethical principles has prompted governance responses (in terms of policy and regulation) that focus solely on individual level impacts.
To effectively manage the risks associated with making the NHS into a heterogeneous organisation (where interactions happen between human and non-human agents), and to ensure the NHS as a whole can benefit from the dual advantage of ethical governance and keep to its commitment of belonging to all, a different Level of Abstraction (LoA) (Floridi, 2008) is required. The appropriate LoA is one that looks at the systems level, considers the entire human, social and organisational infrastructure that data-driven health and care technologies are being embedded in (Macrae, 2019), and involves public voices (Gonzalez-Polledo, 2018) so that the societal implications become clear (O'Doherty et al., 2016). A systems-level analysis, as set out below, will highlight the emergent impacts on fairness, accountability and transparency (Lepri et al., 2018) that result from the interaction between connected system components (Rebhan, 2017), and will produce a more holistic understanding of the governance challenges facing an informationally-maturing NHS (Crawford & Calo, 2016) than is possible when analysing at the individual LoA.
- I also continued to work with the excellent analysts of the AHSN AI initiative to come up with the data analysis plans for the survey. These will change over time depending on how many responses we get but I feel we’re getting to a good place.
- Adam Steventon from The Health Foundation wrote a great piece following the roundtable they hosted about the pros and cons of 'data is the…' analogies, including 'data is the new oil.' I personally hate this phrase; it's so over-used and, I think, inaccurate. Adam and I were exchanging emails about this and he's asked me to write a separate blog post in response to his original, which I'm working on with Indra. Hopefully it will go live in a couple of weeks, but for now here are two of the main high-level points:
If we focus too much on the individual-commodity perspective that the oil analogy encourages, those who are data-rich (e.g. people who follow the teachings of the quantified self) are likely to benefit significantly more from developments in, for example, P4 medicine, and the group-level impact is likely to be hugely inequitable. If instead the data generated by the data-rich are seen as a public good, everyone can benefit (yes, there will be issues with bias and generalisability that we would have to take care of, but that is something we can overcome using specific statistical techniques). What we would want from those using the data is a fair, at-scale return on investment: e.g. the development of new treatments available to everyone, better-quality and better-managed health services, etc.
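To give a concrete flavour of the kind of statistical technique I mean, here is a toy post-stratification sketch (all numbers invented): a 'data-rich' subgroup is over-represented in the sample, so the naive average is biased towards them, and re-weighting each group by its population share recovers the true population average:

```python
# Hypothetical survey-style records: (group, outcome). The data-rich group
# makes up 70% of the sample but only 30% of the population.
sample = [*[("data_rich", 0.8)] * 70,
          *[("data_poor", 0.4)] * 30]

population_share = {"data_rich": 0.30, "data_poor": 0.70}
sample_share = {g: sum(1 for s, _ in sample if s == g) / len(sample)
                for g in population_share}

# Naive mean: dominated by the over-represented data-rich group.
naive = sum(y for _, y in sample) / len(sample)

# Post-stratification weight: how under/over-represented each group is.
weights = {g: population_share[g] / sample_share[g] for g in population_share}
weighted = (sum(weights[g] * y for g, y in sample)
            / sum(weights[g] for g, _ in sample))

print(f"naive mean:    {naive:.2f}")     # 0.68, pulled towards the data-rich
print(f"weighted mean: {weighted:.2f}")  # 0.52, the true population average
```

The weighted estimate matches the population truth (0.3 × 0.8 + 0.7 × 0.4 = 0.52) despite the skewed sample. Real corrections are more involved (small strata, unknown population shares, variance inflation), but this is the basic mechanism for stopping the data-rich from dominating the evidence base.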
Finally, my point about whether data is the new anything. I'm increasingly convinced that making comparisons is not helpful because, if we are being realistic, what we are actually facing in healthcare is a paradigm shift. When extrapolated to its furthest extent, it is clear that, by increasingly relying on data, we are fundamentally changing every single aspect of healthcare. This is not the new anything. It is just new. Or if we have to rely on a metaphor for health, I think it's closer to the discovery of germs and epidemiology, e.g. cholera and John Snow (not the GoT one), which completely changed the way we approached health.
Things I learned
- Most of what I learned this week came from this great paper by members of the Digital Ethics Lab, including Josh, Rosaria and Luciano, about the ethical factors that are essential for future AI for social good initiatives. As explained in the paper, these are: (1) falsifiability and incremental deployment; (2) safeguards against the manipulation of predictors; (3) receiver-contextualised intervention; (4) receiver-contextualised explanation and transparent purposes; (5) privacy protection and data subject consent; (6) situational fairness; and (7) human-friendly semanticisation. I want to spend some time thinking about these and how we can ensure these factors are accounted for in our various AI for Health initiatives. In the meantime, I encourage anyone reading this to go and read the full paper. The abstract is below.
The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to address social problems effectively through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies (Cath et al. 2018). This article addresses this gap by extrapolating seven ethical factors that are essential for future AI4SG initiatives from the analysis of 27 case studies of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Things I thought about
- As it was Mental Health Awareness Week this week I primarily thought about mental health.
A lot of people comment when I post weeknotes saying 'I can't believe you do so much.' I do do a lot, but it also works for me: I genuinely enjoy sitting in the library and reading, thinking and writing about complex topics that interest me. I can't think of much I'd prefer to do, other than what I do at NHSX, which is about making good things happen (even if it doesn't always seem like that from the outside). The fact that I get to do all of it is a privilege, but I also have a fairly low need for human interaction, so spending lots of hours on my own does not have any particularly detrimental effect on my mental health. This has not always been the case, though.
I have always suffered from quite acute anxiety, which peaked a couple of years ago when I effectively had a nervous breakdown in my old job. It was appalling at the time, but it made me face up to what I was living with and gave me the kick I needed to go back to doing what I loved: working in health, not just working in tech. It also made me realise that I had to go and get professional help, something I had put off far longer than I should have out of sheer stubbornness. I did a course of CBT and I started taking anti-anxiety medication (the same as anti-depressants), which I have continued to take ever since.
It was only once my body calmed down that I realised quite how hampering the effects of being that anxious had been, and how completely abnormal the level of panic and anxiety I had been living with was. To give an example, I once spent two weeks convinced (for no reason) that somebody had stolen my identity, was committing crimes in my name, and that I was going to be arrested and go to jail. I spent an entire day in the office during this period of time looking up how to get a new identity. This is funny to me now but it definitely wasn’t then.
Basically, this is quite a long way of saying that I couldn't function at the level I do now if I hadn't got help. If you're suffering, don't wait.
Things I read
Adjerid, I., Acquisti, A., Telang, R., Padman, R., & Adler-Milstein, J. (2016). The Impact of Privacy Regulation and Technology Incentives: The Case of Health Information Exchanges. Management Science, 62(4), 1042–1063. https://doi.org/10.1287/mnsc.2015.2194
Ashmore, R., Calinescu, R., & Paterson, C. (2019). Assuring the Machine Learning Lifecycle: Desiderata, Methods, and Challenges. ArXiv:1905.04223 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1905.04223
Bizzego, A., Bussola, N., Chierici, M., Maggio, V., Francescatto, M., Cima, L., … Furlanello, C. (2019). Evaluating reproducibility of AI algorithms in digital pathology with DAPPER. PLOS Computational Biology, 15(3), e1006269. https://doi.org/10.1371/journal.pcbi.1006269
Boaz, A., Chambers, M., & Stuttaford, M. (2014). Public participation: more than a method? Comment on “Harnessing the potential to quantify public preferences for healthcare priorities through citizens’ juries”. International Journal of Health Policy and Management, 3(5), 291–293. https://doi.org/10.15171/ijhpm.2014.102
Carmel-Gilfilen, C., & Portillo, M. (2016). Designing With Empathy: Humanizing Narratives for Inspired Healthcare Experiences. HERD: Health Environments Research & Design Journal, 9(2), 130–146. https://doi.org/10.1177/1937586715592633
Cheng, J., Burke, M., & Davis, E. G. (2019). Understanding Perceptions of Problematic Facebook Use: When People Experience Negative Life Impact and a Lack of Control. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems — CHI ’19, 1–13. https://doi.org/10.1145/3290605.3300429
Covert, I., Krishnan, B., Najm, I., Zhan, J., Shore, M., Hixson, J., & Po, M. J. (2019a). Temporal Graph Convolutional Networks for Automatic Seizure Detection. ArXiv:1905.01375 [Cs, Eess, Stat]. Retrieved from http://arxiv.org/abs/1905.01375
Dong, W., Guan, T., Lepri, B., & Qiao, C. (2019). PocketCare: Tracking the Flu with Mobile Phones using Partial Observations of Proximity and Symptoms. ArXiv:1905.02607 [Cs]. https://doi.org/10.1145/3328912
Downes, L. (2009). The laws of disruption: harnessing the new forces that govern life and business in the digital age. New York: Basic Books.
Floridi, L. (2019). Establishing the rules for building trustworthy AI. Nature Machine Intelligence. https://doi.org/10.1038/s42256-019-0055-y
Gadepally, V., Goodwin, J., Kepner, J., Reuther, A., Reynolds, H., Samsi, S., … Martinez, D. (2019). AI Enabling Technologies: A Survey. ArXiv:1905.03592 [Cs]. Retrieved from http://arxiv.org/abs/1905.03592
Huang, Y., Zhang, Z., Wang, N., Li, N., Du, M., Hao, T., & Zhan, J. (2019). A new direction to promote the implementation of artificial intelligence in natural clinical settings. ArXiv:1905.02940 [Cs]. Retrieved from http://arxiv.org/abs/1905.02940
Keyworth, C., Hart, J., Armitage, C. J., & Tully, M. P. (2018). What maximizes the effectiveness and implementation of technology-based interventions to support healthcare professional practice? A systematic literature review. BMC Medical Informatics and Decision Making, 18(1), 93. https://doi.org/10.1186/s12911-018-0661-3
Khan, M., Fernandes, G., Sarawgi, U., Rampey, P., & Maes, P. (2019). PAL: A Wearable Platform for Real-time, Personalized and Context-Aware Health and Cognition Support. ArXiv:1905.01352 [Cs]. Retrieved from http://arxiv.org/abs/1905.01352
Lang, A. (2019). Collaborative Governance in Health and Technology Policy: The Use and Effects of Procedural Policy Instruments. Administration & Society, 51(2), 272–298. https://doi.org/10.1177/0095399716664163
Lennox-Chhugani, N. (2018). A User-Centred Design Approach to Integrated Information Systems — A Perspective. International Journal of Integrated Care, 18(2), 15. https://doi.org/10.5334/ijic.4182
Mathewson, K. W. (2019). A Human-Centered Approach to Interactive Machine Learning. ArXiv:1905.06289 [Cs]. Retrieved from http://arxiv.org/abs/1905.06289
McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic accountability. International and Comparative Law Quarterly, 68(2), 309–343. https://doi.org/10.1017/S0020589319000046
McIntyre-Mills, J. (2010). Participatory Design for Democracy and Wellbeing: Narrowing the Gap Between Service Outcomes and Perceived Needs. Systemic Practice and Action Research, 23(1), 21–45. https://doi.org/10.1007/s11213-009-9145-9
Ngiam, K. Y., & Khor, I. W. (2019). Big data and machine learning algorithms for health-care delivery. The Lancet Oncology, 20(5), e262–e273. https://doi.org/10.1016/S1470-2045(19)30149-4
Nguyen, H. D., Tran, K. P., Zeng, X., Koehl, L., & Tartare, G. (2019). Wearable Sensor Data Based Human Activity Recognition using Machine Learning: A new approach. ArXiv:1905.03809 [Cs, Eess]. Retrieved from http://arxiv.org/abs/1905.03809
Oborn, E., & Barrett, S. (2016). Digital health and citizen engagement: Changing the face of health service delivery. Health Services Management Research, 29(1–2), 16–20. https://doi.org/10.1177/0951484816637749
Pacifico Silva, H., Lehoux, P., Miller, F. A., & Denis, J.-L. (2018). Introducing responsible innovation in health: a policy-oriented framework. Health Research Policy and Systems, 16(1), 90. https://doi.org/10.1186/s12961-018-0362-5
Popkes, A.-L., Overweg, H., Ercole, A., Li, Y., Hernández-Lobato, J. M., Zaykov, Y., & Zhang, C. (2019). Interpretable Outcome Prediction with Sparse Bayesian Neural Networks in Intensive Care. ArXiv:1905.02599 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1905.02599
Reis, J., Santo, P. E., & Melão, N. (2019). Artificial Intelligence in Government Services: A Systematic Literature Review. In Á. Rocha, H. Adeli, L. P. Reis, & S. Costanzo (Eds.), New Knowledge in Information Systems and Technologies (Vol. 930, pp. 241–252). https://doi.org/10.1007/978-3-030-16181-1_23
Salman, S., Payrovnaziri, S. N., Liu, X., & He, Z. (2019). Interpretable Deep Neural Networks for Patient Mortality Prediction: A Consensus-based Approach. ArXiv:1905.05849 [Cs, Stat]. Retrieved from http://arxiv.org/abs/1905.05849
Scott, K., Jessani, N., Qiu, M., & Bennett, S. (2018). Developing more participatory and accountable institutions for health: identifying health system research priorities for the Sustainable Development Goal-era. Health Policy and Planning, 33(9), 975–987. https://doi.org/10.1093/heapol/czy079