Blog: Inside the realm of Artificial Intelligence – A deep dive with thought leader Robbie Allen – WRAL Tech Wire
Editor’s note: WRAL TechWire is teaming with YourLocalStudio.com, a 10-year-old self-described video agency, for what we believe is a riveting, deep, deep dive into the world of Artificial Intelligence. Alexander Ferguson, the CEO and founder of YourLocalStudio, and his team have done a series of in-depth video interviews with thought leaders in the AI field – many of whom are based in the Triangle. It’s part of the studio’s UpTech program. This week’s segment is the third part of an interview with AI thought leader Robbie Allen.
Welcome to the UpTech Report series on artificial intelligence. I’m Alexander Ferguson. This video is part of our deep dive interview series, where we share the wealth of knowledge from one of our experts in the field of artificial intelligence. This is the third part of my conversation with Robbie Allen, CEO of Infinia ML in Durham, NC. Among his other business accomplishments, Robbie holds six patents and has authored eight books. The interview:
Why is there a surge in machine learning? Why does data mean everything? And how is the AI community different from the traditional software arena?
There are really three things that ushered in this machine learning wave that we’re seeing. The first is data, which I mentioned before. And I often say that the big data era really started around 2005 or 2006, when you started hearing the term “big data.” It was coined around that time.
Companies really started to take data collection more seriously. They started thinking about data as a competitive advantage. And so they at least started collecting it. And so that was necessary, because as I’ve mentioned, machine learning is all about data. If you don’t have data you can’t do machine learning. And so at least many companies had the data.
The second is GPUs, which are really a special form of processor. Most computers and laptops have what they refer to as CPUs, or central processing units. GPUs are graphics processing units. They’re the things that will oftentimes power your monitor or power gaming consoles. And, in fact, GPUs became more popular due to the rise of gaming consoles. So I often say that we have gamers to thank, in some part, for some of the advances that we see with machine learning. And then the third is advancements in the algorithms themselves, specifically around deep learning. There’s so much research going on now in the machine learning community, and so many improvements have been made, especially deep-learning-based ones, that now, thanks to the processing and thanks to the data, we can apply more complex versions of machine learning that can do all sorts of interesting things.
I don’t think it’s a big jump to go from really basic statistical techniques to machine learning, at least the more straightforward techniques. In fact, it really depends on what the data looks like, ‘cause the complexity of the data oftentimes dictates the complexity of the algorithms, or the things that you kinda have to keep an eye on. And that’s why deep learning is really the step function of complexity: it’s very difficult to go from basic regression to deep learning without having more formal training or a lot more experience, because there are more things that could go wrong.
And if you don’t have an awareness or an ability to understand where things went wrong or why they went wrong, then you won’t be able to understand or interpret the results that you get. So I would say it’s not too much of a stretch to go to basic machine learning techniques, but once you get to something like deep learning, that requires more of an understanding.
- Why is data so important to machine learning?
Machine learning, as I mentioned, is learning patterns in data. And if the data has garbage in it, or if it’s very noisy, or there are problems, especially ones that you don’t anticipate or can’t see yourself, then the algorithm’s gonna learn that noise, or it’s gonna start making misjudgments because it’s learned patterns that were incorrect. And so if the data’s not in pretty good shape, you’re gonna have problems.
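To make the “garbage in, garbage out” point concrete, here’s a minimal pure-Python sketch (my illustration, not from the interview): a 1-nearest-neighbor classifier will faithfully learn whatever labels it’s given, including a mislabeled point, and then repeat that mistake at prediction time.

```python
# Sketch: a 1-nearest-neighbor classifier "learns" exactly what the
# training labels say -- including garbage labels. The data is invented.

def nearest_neighbor_predict(train, query):
    """Return the label of the training point closest to `query`."""
    return min(train, key=lambda point: abs(point[0] - query))[1]

clean = [(1.0, "low"), (2.0, "low"), (8.0, "high"), (9.0, "high")]
noisy = [(1.0, "low"), (2.0, "high"), (8.0, "high"), (9.0, "high")]  # 2.0 mislabeled

print(nearest_neighbor_predict(clean, 2.5))  # low  -- correct pattern
print(nearest_neighbor_predict(noisy, 2.5))  # high -- the model learned the noise
```

The algorithm is identical in both calls; only the data changed, which is exactly the point.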
- What are some of the challenges with machine learning?
One of the big challenges that many companies face, and I think even companies like mine that are trying to really focus in on a couple of key use cases, is that when we go and talk to companies, from some of the largest companies in the world all the way down to small product companies, everybody has a different use case. And the reason for that is we’re so early on in the machine learning journey that there’s not just one or two applications of it.
In fact, you go into a large company and there’s dozens of applications of it. And so right now, because people are just getting started, no two companies tend to have the exact same need or the exact same priority because machine learning is a real general purpose tool that you can apply to lots of problems. So anyway, there’s that.
But we’re seeing all sorts of interesting use cases for document processing applications, whether that’s contract analysis, where you can automatically read a contract, a legal document, and determine what kind of contract it is, understand what kind of issues are in the contract, and do what they refer to as Q&A, question answering, about the contracts. Does this contract have a change of ownership clause in it? And it could tell you yes or no.
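To show the shape of that yes/no clause question, here is a deliberately naive sketch of my own (not Infinia ML’s approach, which would use machine learning): a keyword baseline that checks a contract for clause phrases. The contract text and phrases are invented.

```python
# Naive baseline for contract "Q&A": does the document mention a
# change-of-ownership clause? Real systems learn this; this just matches text.

def has_clause(contract_text, keywords):
    """Return True if any of the clause keywords appears in the text."""
    text = contract_text.lower()
    return any(kw in text for kw in keywords)

contract = "Upon any change of ownership, the licensee must notify the vendor."
print(has_clause(contract, ["change of ownership", "change of control"]))  # True
```

A machine learning system earns its keep where keywords fail, e.g. clauses phrased in ways nobody anticipated.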
Another one that you’ve probably heard about is resume analysis. Can you automatically scan a resume and then tell me if that person is gonna be potentially a good employee for me or not? And there have been some failed examples of that in the past that have injected bias. And this is a common thing that I get asked about all the time: Are you worried about the bias of the algorithms? And what I tell people is machine learning starts at neutral. There’s nothing biased about the algorithms. What’s biased is the data that it trains on.
And so really, if it’s making a biased decision, it’s just reflecting what it was trained on, likely biases that were built into the data that was collected. And so I think it’s overall a net positive, even with the bias built in, because at least we’re at a point where we can recognize the bias, people are talking about the bias, and we can do something about it. Versus for how many dozens of years, decades, people have been making these decisions and never talked about bias. But guess what? The bias was present; they just didn’t talk about it.
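A tiny, invented example of how a neutral algorithm ends up reflecting biased data: fit a trivial “model” (per-group hire rates) to hypothetical historical hiring records, and the pattern it learns is the historical bias itself.

```python
# Hypothetical hiring history (group, hired) -- invented for illustration.
# The "learning" step below is neutral; the skew comes entirely from the data.
from collections import defaultdict

history = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
           ("B", 0), ("B", 0), ("B", 0), ("B", 1)]

totals = defaultdict(lambda: [0, 0])  # group -> [hires, count]
for group, hired in history:
    totals[group][0] += hired
    totals[group][1] += 1

# The "trained model": historical hire rate per group.
rates = {g: hires / count for g, (hires, count) in totals.items()}
print(rates)  # {'A': 0.75, 'B': 0.25} -- the learned pattern is the bias
```

The upside Allen describes is that, unlike an undocumented human process, a gap like this one is measurable, and therefore correctable.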
We often say, too, that we’re not gonna apply machine learning to every problem. In fact, there’s a lot of low-hanging fruit inside the corporate world where, now that companies have the data, you don’t need the big hammer of machine learning to go solve it.
You can use simple statistical techniques and get a lot of mileage out of them that way. So it’s not always the case that you need machine learning. There is a very large set of statistical techniques that can do all sorts of processing and analysis, whether it’s regression or classification that doesn’t require more complicated techniques.
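For instance, ordinary least-squares regression, a textbook statistical technique, fits in a few lines of plain Python with no machine learning library at all (a sketch of my own, with made-up numbers):

```python
# Closed-form ordinary least squares: fit y = a*x + b to data.
# No training loop, no ML framework -- just classical statistics.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 5.0, 7.1, 9.0]  # roughly y = 2x + 1
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # a ≈ 2, b ≈ 1
```

If a relationship is this simple, the closed-form fit answers the question; machine learning earns its complexity only when patterns like this stop being linear and obvious.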
- How is the AI community different than traditional software?
The great thing about the artificial intelligence community is that it’s academically based, which means most of the great advances that have been made are open and freely available. It’s not really buried in patents, for the most part. And so that’s one of the nice things about it. It’s not like a closed community; it’s very open. I hear more and more about companies trying to file patents. The USPTO recently came down with new guidance on filing AI-oriented patents that will make it a little bit more difficult to do that.
Again, the nice thing is that credibility in the AI space comes with published papers, not published patents. And that’s another fundamental difference from traditional software, where most of the badges of honor were around how many patents you filed. Now it’s how many research papers you’ve published. At my company, we publish a number of papers. Dr. Larry Carin, our chief scientist, is one of the most prolific machine learning researchers in the world.
- Lastly, I asked Robbie Allen for his final thoughts on the age of artificial intelligence.
I think it’s super exciting. I started my career during the dot-com boom and bust, and I was in my early 20s during the late ’90s and early 2000s, and there was something a little sad about the thought that maybe the most exciting point in my career happened when I was 21 years old. But now I would revise that and say that now will potentially be the most exciting time in my career.
This will be the time that one day my grandkids will ask me about. What was it like to be in the early days of the artificial intelligence revolution? And I do believe we’re just at the very beginning of that. So machine learning, while there may be some ups and downs, as there is with any technology, I think overall it’s a long-term winner.
This was just a taste. Stay tuned as we share the full deep dive interviews we had with each of our panel of experts, and as our upcoming episodes focus on specific topics that will transform the way you think about artificial intelligence. All this on UpTech Report’s new series on AI.