Blog: The rise of machine learning and artificial intelligence in fraud detection – The Paypers
As advances in AI, smart technology, and machine learning turn science fiction into fact, a once-fantastical future is drawing near. How will the payments industry harness these mind-blowing opportunities?
Artificial intelligence and machine learning have a wide array of applications: improving customer experience, enabling businesses to fight fraud, driving the creation of personalised shopping and user experiences by analysing multiple data points, and helping businesses stay compliant with an ever-changing regulatory landscape, including KYC and AML requirements. These emerging technologies have also been applied in medicine; popular AI solutions such as IBM’s Watson are actively used in multiple cancer research hospitals, where they operate as a doctor’s assistant.
In this series of articles, however, we will focus mostly on the ways in which these technologies can help fight fraud, manage and mitigate risk, keep companies compliant with AML laws, and combat transaction laundering.
Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. According to Pedro Bizarro, Chief Science Officer at Feedzai, AI augments human intelligence and should provide explanations to avoid erroneous interpretations; its value should be considered in context, as definitive answers do not exist.
The guiding design principles for AI should be transparency, controllability, and automation. Data provenance is also a crucial feature: the user needs to keep track of data in order to be able to reconstruct it, and models should learn from real data and be able to re-learn, without being influenced by or based on previous models. Most importantly, we must develop this tool so that it is human-enabled and human-centric.
According to Forbes, AI needs to be ‘Explainable’ and ‘Understandable’. Explainable AI is the domain of data scientists and AI engineers, the individuals who create and code artificial intelligence algorithms. These specialists aim to develop new algorithms that explain intermediate outcomes or provide reasoning for their solutions.
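To make the idea of explainable AI concrete, here is a minimal, purely illustrative Python sketch of a fraud scorer that exposes its reasoning: a linear model that returns its per-feature contributions alongside the score. The feature names and weights are hypothetical, not taken from any real system.

```python
import math

# Hypothetical per-feature weights of a simple linear fraud model
# (illustrative values only, not from any real system).
WEIGHTS = {
    "amount_zscore": 1.2,      # unusually large transaction amount
    "new_device": 0.8,         # transaction from an unseen device
    "country_mismatch": 1.5,   # IP country differs from card country
}
BIAS = -2.0

def score_with_explanation(features):
    """Return a fraud probability plus the per-feature contributions
    that produced it - the 'reasoning' an explainable model exposes."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return probability, contributions

prob, why = score_with_explanation(
    {"amount_zscore": 2.0, "new_device": 1.0, "country_mismatch": 1.0})
# 'why' ranks the features by how much each pushed the score up
top_reason = max(why, key=why.get)
```

In a production system the explanation layer would sit on top of a trained model, but the principle is the same: the output carries the intermediate quantities that justify it, so an analyst can see why a transaction was flagged.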
Understandable AI is different from explainable AI: it combines the technical expertise of engineers not only with the design and usability knowledge of UI/UX experts, but also with the people-centric design approach of product developers. Since AI-driven solutions need to be developed with ‘user-first’ principles in mind, understandable AI has become the domain of UI/UX designers and product developers, in collaboration with AI engineers and data scientists.
Critical to the understandable AI process are the integration of non-data scientists into the development and design of AI products, and enabling people to be a part of the decision-making process in an AI-driven enterprise.
To begin the journey towards a truly human-machine collaborative model that creates understandable AI outcomes, leaders, governance bodies, and companies must:
develop intuitive user interfaces – by using voice recognition and natural language processing, the technology industry is currently developing AI user interfaces that enable people to interact with intelligent machines simply by talking to them. Encouraging the development of these tools helps democratise AI technologies;
create ethical principles for AI – all major stakeholders in the future of AI need to work together to build principles that embed understandability into technology development;
apply design principles – enterprises should use design-led thinking to examine core ethical questions in context. In addition, they are advised to build a set of value-driven requirements under which the AI will be deployed – including where explanations for decisions are expected;
monitor and audit – the AI solutions used at the enterprise level need to be continually improved through value-driven metrics such as algorithmic accountability, bias, and cybersecurity.
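The ‘monitor and audit’ step above can be sketched with one concrete value-driven metric: comparing a fraud model’s false-positive rate across customer segments, where a large gap between segments is a simple signal of bias. The segment names and the audit log below are hypothetical, for illustration only.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (segment, flagged_as_fraud, actually_fraud).
    Returns, per segment, the share of legitimate transactions that the
    model wrongly flagged - a basic fairness/bias audit metric."""
    flagged = defaultdict(int)   # legitimate transactions that were flagged
    legit = defaultdict(int)     # total legitimate transactions
    for segment, predicted, actual in records:
        if not actual:
            legit[segment] += 1
            if predicted:
                flagged[segment] += 1
    return {s: flagged[s] / legit[s] for s in legit}

# Hypothetical audit log: (segment, model flagged it, truly fraudulent)
audit_log = [
    ("domestic", False, False), ("domestic", True, False),
    ("domestic", False, False), ("domestic", False, False),
    ("cross_border", True, False), ("cross_border", True, False),
    ("cross_border", False, False), ("cross_border", True, True),
]
rates = false_positive_rates(audit_log)
# domestic: 1 of 4 legitimate flagged; cross_border: 2 of 3 flagged
```

An enterprise audit would track metrics like this over time and trigger review when the gap between segments widens.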
When it comes to financial services, artificial intelligence can be applied to specific areas such as financial crime prevention, regulatory compliance, and payments. Successful AI projects rely on deep research and the work of expert developers, applied to specific business problems that arise in multiple different contexts. A critical element of AI systems is the data on which they are trained – success comes from combining innovative AI capabilities with deep domain expertise.
Machine learning is a fundamental concept of AI, which is why the two technologies are often intertwined. We will discuss this topic in more detail in “Machine learning – an approach to fraud detection and protection”.
About Mirela Ciobanu
Mirela Ciobanu is a Senior Editor at The Paypers and has been actively involved in covering digital payments and related topics, especially in the cryptocurrency, online security, and fraud prevention space. She is passionate about finding the latest news on data breaches, machine learning, digital identity, and blockchain, and she is an active advocate of the need to keep our online data and presence protected. Mirela holds a bachelor’s degree in English and a Master’s degree in Marketing.