Some strange alignment of the planets has led to two great articles this week on how to leverage artificial intelligence and machine learning in business. Both make good points, and I feel their lessons apply broadly to advanced data analytics and data science of all types in any organization.
The first is an article in The Wall Street Journal, which publishes an excellent daily newsletter of its AI reporting. Jordan Jacobs, a veteran entrepreneur in the business AI startup space, distills his experience making the case for AI into three key tenets:
- Get buy-in from the top executives and the employees who will use the system. My experience mirrors Jacobs's: there is often both a lack of understanding and a measure of hostility toward anything that shifts how something is done. Anyone helping catalyze change toward a more data-driven approach must understand that there is both ignorance and fear. Dispelling the first doesn't necessarily dispel the second, and neither should be ignored. I've seen too often (and personally experienced) how expertise bias thwarts the empathy necessary for creating change around new approaches and technologies of every kind. Engage people where they are, and work at the pace of trust with everyone involved.
- Identify clearly the business problem to be solved. I've long been ruminating on what I call "the tyranny of could": the tendency to focus on the possibilities of a new technology or approach without a clear understanding of what problem it actually solves. Matt LeMay has an excellent discussion of this in his book Agile for Everybody. If the underlying issue, problem, or challenge isn't well understood and articulated, it's very unlikely that any solution, let alone the most expensive or popular one, will actually have the positive impact we want it to have. We set the conditions for failure if we don't understand the issues standing in the way of success.
- Set metrics to demonstrate the technology's return on investment. Too often, we accept promises of success without checking whether they actually materialize. Even corporations, with their profit motive and fiduciary responsibilities, don't always account for the return on their investment. In the public sector, we have a similar responsibility to the investment of trust and public resources: to account for whether we are receiving a benefit in time, money, or other resources for expenditures on technology such as artificial intelligence and machine learning, but also the Internet of Things, process automation, and other advanced data science techniques. Losing sight of this invites criticism and generates broad resistance to any future efforts at innovation.

The second article poses five questions to ask of any company that claims to use machine learning:
- What is the problem it’s trying to solve?
- How is the company approaching that problem with machine learning?
- How does the company source its training data?
- Does the company have processes for auditing its products?
- Should the company be using machine learning to solve this problem?
I’ll let you read her elaboration of each question rather than try to summarize it here. Suffice it to say, she hits many of the same points as Jordan Jacobs, asking that we understand the problem, the desired outcome, the resources, and how to verify results in the short and long term.
I’m reminded of Arthur C. Clarke’s memorable third law:
> Any sufficiently advanced technology is indistinguishable from magic.
To make technology a tool for the masses, we need to demystify the technology and its uses, pulling back the curtain to show how the trick is done and how it can be done over and over again by anyone who needs it.