
Blog: Can we trust AI decisions? Organisations need to introduce processes that ensure we can



Artificial intelligence is making headlines by beating human chess and Go champions, and it is quietly entering our lives in the form of virtual voice assistants, car navigation systems, and e-shop purchase recommendations. It provides even more benefits behind the scenes, unseen by the public eye, from helping organisations manage the risks associated with loans to diagnosing medical conditions and even counting pedestrians.

All this has been useful, but it has also been accompanied by new worries about algorithmic bias and personal privacy. Among the new challenges introduced by AI are the risks of unintended discrimination, potentially resulting in unfair decisions, and questions about how much consumers know about AI's involvement in significant or sensitive decisions made about them. Many big questions remain unanswered, and even bigger ones have yet to be asked.

Despite these challenges, AI still presents an unprecedented opportunity to improve productivity, enhance competitiveness, and introduce new products and services. However, organisations need to apply a set of guiding principles to ensure that when AI is used in decision-making, the process is explainable, transparent, and fair. The primary consideration in developing and deploying AI should be to protect the interests of human beings, including their well-being and safety. Only then can trust in AI be promoted.

The technology itself can help achieve some of these goals. Some claim blockchain could provide robust oversight of the integrity of the data used to train AI models. Several IT companies have already announced work on software that can visualise how AI algorithms make decisions and detect unwanted biases. Whether such solutions will prove effective enough to understand, and more importantly to explain, a complex deep-learning algorithm remains to be seen, however.
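To make the data-integrity idea concrete, here is a minimal sketch of the mechanism such proposals rest on: chaining hashes of training records so that any later alteration becomes detectable. This illustrates the tamper-evidence principle only, not a full distributed ledger, and the record fields are hypothetical.

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Chain each training record to the hash of the previous one."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records: list) -> list:
    """One hash per record; editing any record changes every later hash."""
    hashes, prev = [], "genesis"
    for rec in records:
        prev = record_hash(rec, prev)
        hashes.append(prev)
    return hashes

# Hypothetical loan-scoring training records.
training_data = [{"age": 34, "income": 52000, "label": 1},
                 {"age": 51, "income": 38000, "label": 0}]
stored = build_chain(training_data)
assert build_chain(training_data) == stored      # data intact
training_data[0]["label"] = 0                    # silent alteration...
assert build_chain(training_data) != stored      # ...is detected
```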

In any case, organisations should introduce internal governance structures and measures to ensure oversight of their use of AI. Existing governance structures can be adapted: ethical considerations can be introduced as corporate values, and the associated risks can be managed within existing enterprise risk management structures.

A basic approach to managing the risks associated with AI algorithms is to classify decisions by the probability and severity of harm they could cause to the individual they are made about. How probability and severity of harm are defined depends on the context: the harm associated with a wrong diagnosis of a patient's medical condition is very different from that of a wrong purchase recommendation at an e-shop.

The resulting AI risk management model should then suggest the appropriate level of human oversight in the decision-making process. In areas where the probability and/or severity of harm is significant, humans should retain full control and AI should only provide recommendations.
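As a sketch of such a model, and borrowing the common "human-in-command / human-in-the-loop / human-over-the-loop" vocabulary (the tier names, thresholds, and examples below are illustrative assumptions, not a prescribed standard), the mapping from assessed harm to oversight level could look like this:

```python
from enum import Enum

class Oversight(Enum):
    HUMAN_IN_COMMAND = "AI only recommends; a human makes the decision"
    HUMAN_IN_THE_LOOP = "AI proposes alternatives; a human chooses"
    HUMAN_OVER_THE_LOOP = "AI decides; a human monitors and can intervene"

def required_oversight(probability: str, severity: str) -> Oversight:
    """Map assessed probability and severity of harm ('low', 'medium',
    'high') to a level of human oversight; the worse dimension dominates."""
    score = {"low": 0, "medium": 1, "high": 2}
    risk = max(score[probability], score[severity])
    if risk == 2:
        return Oversight.HUMAN_IN_COMMAND       # e.g. medical diagnosis
    if risk == 1:
        return Oversight.HUMAN_IN_THE_LOOP      # e.g. route suggestions
    return Oversight.HUMAN_OVER_THE_LOOP        # e.g. shop recommendations

print(required_oversight("low", "high").value)   # AI only recommends...
```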

In another scenario, AI provides a list of alternative choices and a human decides: a GPS navigation system suggests several routes, the driver chooses one, and the AI then adapts to the driver's decisions along the way, for example around unforeseen road congestion. Perhaps it should stay this way when autonomous cars arrive and the driver is no longer physically directing the car.

It is easy to see that AI algorithms are only as good as the data used to train them, so organisations should ensure they work with quality data. There are many ways to have bad data: datasets can be incomplete, biased, aged, inaccurate, altered, or simply not relevant to the problem being solved.
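As a hedged illustration, the sketch below checks a hypothetical dataset for three of the failure modes just listed: incompleteness, staleness, and implausible values. The column names and thresholds are assumptions for the example, not a general recipe.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, date_col: str,
                   max_age_days: int = 365) -> dict:
    """Flag missing values, stale rows, and impossible ages."""
    today = pd.Timestamp.today()
    age_days = today - pd.to_datetime(df[date_col])
    return {
        "missing_ratio": df.isna().mean().to_dict(),                # incomplete
        "stale_rows": int((age_days > pd.Timedelta(days=max_age_days)).sum()),   # aged
        "impossible_ages": int(((df["age"] < 0) | (df["age"] > 120)).sum()),     # inaccurate
    }

df = pd.DataFrame({"age": [34, -2, 51],
                   "income": [52000, None, 38000],
                   "collected": ["2024-01-10", "2019-06-01", "2024-03-05"]})
print(quality_report(df, date_col="collected"))
```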

Separate datasets should be used for training, validation, and testing: the model is built from the training data, tuned against the validation data, and its final accuracy is measured on the test data. The model should always be tested across different demographic groups to identify possible bias.
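For instance, a minimal scikit-learn sketch on synthetic data (the demographic label and the 60/20/20 split are illustrative assumptions): the model is fitted on the training set, checked against the validation set, and its final accuracy on the held-out test set is reported separately for each group.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
group = rng.choice(["A", "B"], size=1000)    # hypothetical demographic label

# Three disjoint sets: 60% train, 20% validation, 20% test.
X_train, X_rest, y_train, y_rest, g_train, g_rest = train_test_split(
    X, y, group, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test, g_val, g_test = train_test_split(
    X_rest, y_rest, g_rest, test_size=0.5, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))

# Final check on unseen test data, broken down by demographic group.
for g in ("A", "B"):
    mask = g_test == g
    print(f"test accuracy, group {g}:",
          accuracy_score(y_test[mask], model.predict(X_test[mask])))
```

A large gap between the two groups' test accuracies would be exactly the kind of bias this paragraph warns about.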

Many algorithms arrive from other countries and may have been trained on samples that do not fully match the local population. If an airport in Europe, for example, decides to use an algorithm trained in China to scan passengers' faces and compare them against a biometric database of known terrorists, it may be inaccurate when analysing, for instance, the European Roma population.

Some issues become visible only when a solution is applied at scale. An intelligent parking system may sensibly suggest that a driver park in a specific spot, but when the system is widely used and a hundred drivers are all recommended the very same spot, a problem arises.
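A small sketch of this failure and one possible mitigation; the parking spots and the load-aware strategy are hypothetical and only illustrate why recommendations may need to be spread out once many users share the same system.

```python
import random
from collections import Counter

def recommend_naive(free_spots):
    """Every driver is sent to the single 'best' free spot."""
    return free_spots[0]

def recommend_load_aware(free_spots, pending):
    """Prefer spots with the fewest drivers already heading for them,
    breaking ties at random, and remember each recommendation made."""
    least = min(pending[s] for s in free_spots)
    choice = random.choice([s for s in free_spots if pending[s] == least])
    pending[choice] += 1
    return choice

spots = ["P1", "P2", "P3"]
pending = Counter()
print(Counter(recommend_naive(spots) for _ in range(100)))               # 100 x P1
print(Counter(recommend_load_aware(spots, pending) for _ in range(100)))  # ~34/33/33
```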

Finally, organisations should communicate openly and transparently when deploying AI. They should disclose whether AI is used in their products and/or services and explain how it is used to make decisions about individuals. It is also important to consider offering an "opt-out" option for individuals who choose not to be subject to AI-driven decisions.

As AI technologies evolve, so will the related ethical and governance issues. Progress equals technology plus civil society. It is therefore important that these technologies are used in an ethical way; only then can they benefit society overall.


Miroslav Pikus
Chief Technology Officer

Source: Artificial Intelligence on Medium
