Blog: AI.Westminster #10
Government publishes Government Technology Innovation Strategy and a guide to using artificial intelligence in the public sector
What happened: The government has published its Government Technology Innovation Strategy, setting out how it plans to utilise emerging technologies, including AI, in public services and within government. The strategy has three key themes: people, processes, and data and technology:
People means attracting new talent, especially through apprenticeships, and upskilling existing staff, along with potential for civil servant secondments to industry to bring back ‘culture’. The strategy notes a shortage of people able to build data infrastructure and an increasing reliance on third-party contractors.
While the strategy emphasises a coordinated approach and improving the presentation of digital government careers, it misses the elephant in the room: salary. To a degree, an appeal to the virtues of public service can attract talent, but to develop exceptional individuals and prevent losing them to industry later down the line, government needs to offer data scientists salaries competitive with the tech giants, which in machine learning now routinely means six figures for those being tempted away by Silicon Valley.
This is of course a significant challenge in the face of austerity, which has been driving process automation across government. The strategy is intended to guide government departments as they prepare their plans for the Spending Review, where departmental budgets are set for the years ahead. The Institute for Government has a great summary of the role of spending reviews and the importance of the next one.
A spending review, especially one that comes as the current consensus around spending and the role of government is being questioned, provides a perfect opportunity to rebalance the scales across government. So it is important the issue of salary doesn’t go unnoticed: spending now to keep the talent to develop world-leading systems in-house (systems that could even be sold to other governments or the private sector) could end up costing less in the long run.
Processes focuses on improving procurement and scaling up the use of emerging technology. This includes Spark, a new marketplace for the supply of emerging technologies including AI, and increasing the use of challenge and competition-based procurement following on from the GovTech Catalyst. There is a strong emphasis on supporting start-ups and SMEs to provide technology and services to government.
The government, in collaboration with the World Economic Forum, is also developing specific guidelines for AI procurement, due to be piloted by departments in autumn 2019. These guidelines will include advice on ethical considerations and guidance on the responsible adoption of AI throughout the procurement process.
Data and Technology focuses on greater use of existing data by increasing access, structuring the data, and developing a coherent approach to data across departments. It also means interoperable and interchangeable technologies and platforms built on transparent standards to prevent legacy lock-in. These standards are not specific to a given technology, to help prevent the standards becoming legacy themselves. As part of this, DCMS has launched an open call for evidence to inform the National Data Strategy. The deadline for evidence is 14 July.
Guide to Using AI in the Public Sector: Alongside the Technology Innovation Strategy, the Office for AI and the Government Digital Service have published a guide to using AI in the public sector. The guide looks very promising. It includes practical advice on how to manage and utilise AI, including highlighting that there is no one ‘AI technology’ but rather a collection of techniques and tools — an important point for being realistic about the application of AI.
What makes it so encouraging, however, is that it asks users to consider whether AI is the right solution for the problem at hand rather than trying to shoehorn it into projects. Further, it includes an actionable guide to the ethical use of AI created by the Turing Institute. This ethical advice is much more concrete than many of the high-level principles that have come before from governments and industry, making it much clearer for those implementing the technologies how to actually construct systems in an ethical way.
Government announces £18.5 million to boost diversity in AI tech roles and innovation in online training for adults
What happened: The government has announced it will invest an additional £18.5 million over the next three years to increase the diversity of the workforce with data science skills and increase the use of AI in the teaching of skills to adults. This is split between two programmes:
· £13.5 million for up to 2,500 data science and AI conversion courses over the next three years, and 1,000 scholarships for those from underrepresented backgrounds.
· A £5 million Adult Learning Technology Innovation Fund with Nesta to fund technology utilising AI and automation that will improve the quality of online learning for adults. The government has also published a review of the AI and online adult learning market.
Why it matters: There is a clear diversity problem in the field of AI development and computer science more generally. Recent works like Invisible Women by Caroline Criado Perez highlight the importance of including all groups in the design of systems to ensure that those systems consider the needs of all their users. Putting money behind this priority demonstrates government is taking the issue seriously.
Home Office revealed to be algorithmically screening visa applications
What happened: The Financial Times (£) has reported that the Home Office has been using an algorithmic streaming tool to grade all visa applications according to their level of risk. Applications are given a green, amber or red rating and then forwarded to caseworkers for further processing. The Home Office did not provide any details about what factors the algorithm considers but claims it does not stream on the basis of race.
Why it matters: Firstly, a lack of transparency on the factors that go into the decision-making, even just to an independent review body, raises concerns. Those developing the system may not explicitly include race as a factor, but other characteristics can become proxies for race. This system could legitimately speed up immigration decisions, which is clearly a good thing, and it’s understandable the Home Office doesn’t want to share the specific factors, to avoid gaming of the system. But the department needs to be aware that it has a bad track record, especially after Windrush, and appreciate that transparency is necessary to retain public confidence and keep the systems from reinforcing existing bias.
Further, the Home Office has stated that caseworkers make the final decision and are not influenced by the streaming algorithm. However, people are obviously going to use information available to them, even subconsciously, and there is evidence people defer to algorithmic systems. On top of this, we know from reports by the Data Justice Lab and the Bureau of Investigative Journalism that the use of these systems is driven by pressures to reduce costs and increase efficiency in the face of austerity, incentivising caseworkers to make quick decisions. All of this reduces the accountability of these systems and gives us less reason to be confident that any biases are being corrected by human review.
Mayor of London warns that Brexit is distracting government from preparing for impact of AI on society
What happened: Sadiq Khan, Mayor of London, warned that the chaos caused by Brexit means that the government hasn’t been able to properly prepare for the impact of AI on society or to educate the public and have a public debate about how AI should be used. He also highlighted that trust in AI will depend on addressing concerns around ethics and privacy.
Why this matters: The mayor is right to point out that Brexit has hamstrung the government in many ways, drawing away civil servants into the Department for Exiting the EU and preparations for no-deal, which has led to delays in the publication of green and white papers, most notably the Social Care Green Paper. It has also taken up a lot of political will and legislative time.
However, the AI Sector Deal, the Office for AI and the Centre for Data Ethics and Innovation have all been set up in the shadow of Brexit. The Online Harms White Paper, while delayed, has also been published since June 2016. If anything, investment in and the regulation of technology has been one of the bright spots of progress in the last couple of years, perhaps because this is one area that isn’t (for now) as party-politically charged.
Bank of England sets out principles for the governance of AI in finance
What happened: James Proudman, Executive Director for UK Deposit Takers at the Bank of England, gave a speech on the governance of AI in finance. In March, the Bank of England and Financial Conduct Authority surveyed over 200 firms on their adoption of AI and machine learning. The full results will be published later in the year, but some initial findings include:
· 80% are using ML applications in some form, with larger firms further along in deployment
· Barriers to AI deployment currently seem to be mostly internal to firms, rather than stemming from regulation, including: legacy systems and unsuitable IT infrastructure; lack of access to sufficient data; and challenges integrating ML into existing business processes.
· Firms believe AI and ML would lower risks such as in anti-money laundering, KYC and retail credit risk assessment. But some acknowledge that, incorrectly used, AI and ML techniques could give rise to new, complex risk types e.g. flash crashes.
Proudman set out three principles for governance:
1. Boards should prioritise governance of data: what data should be used; how it should be modelled and tested; and whether the outcomes derived from the data are correct.
2. Boards should focus on the oversight of human incentives and accountabilities within AI/ML systems.
3. Boards should consider the skills and controls needed to mitigate the risk of failure in the rapid implementation of AI systems, both at senior level and throughout the organisation.
Defence Secretary suggests a joint command hub for all autonomous military vehicles and robots
What happened: At the Land Warfare conference, the defence secretary reemphasised a commitment to maintain military capabilities through investment in advanced sensors and automated searching, tracking and detection systems. She also suggested that control of all autonomous vehicles and robots used by the military be brought under a single hub in Joint Forces Command, in a similar way to the current use of helicopters.
· The Royal United Services Institute think-tank partners with the Centre for Data Ethics and Innovation to research how algorithmic tools are used by police forces in England and Wales and their potential for biased outcomes for certain individuals or groups. An interim report will be published in autumn 2019, alongside a draft code of practice to be consulted on. The final report will be published in early 2020.
· The Financial Conduct Authority’s Innovation Director discusses how machine learning has been adopted by financial services and by the regulator itself, highlighting that there aren’t ‘off-the-shelf’ technology solutions available for regulators and central banks.