Blog: #UXRConf Preview: Meet Carol J Smith
#ama with Carol J Smith on researching for AI
Carol J Smith has an 18-year career in UX, managing teams, collaborating on complex problems, and working across industries and platforms, and she has spent the past four years working to improve AI experiences. She previously led UX research to improve the self-driving vehicle experience at Uber’s Advanced Technologies Group, and before that led design teams at IBM Watson.
Carol has an M.S. in Human-Computer Interaction from DePaul University and has presented over 120 talks and workshops around the world. She is an active UX community organizer.
On June 7th, Carol will take the Main Stage at the Strive conference to give her talk, Dynamic UXR: Ethical Responsibilities and AI. Her talk will cover our responsibilities as user experience researchers and how we can protect our users from the flaws of AI while helping them understand it.
Carol was kind enough to do an AMA with our Slack community. Here are her responses to our community’s questions. 👇
I love keeping up-to-date with tech, but artificial intelligence is one that I have trouble with. Do you have any tips on where to start? — Ali Angco
There are classes you can take online, such as Udacity’s introductory course (www.udacity.com), which covers the basics of AI: machine learning, probabilistic reasoning, robotics, and more. Generally there are not a lot of books that don’t get very technical very quickly.
Here’s a presentation I did recently that includes links to lots of references in the back:
I’d love to hear about your fun field work. Can you share an anecdote where you learned something really interesting that you might not have learned if you hadn’t conducted your research in the field? — Danielle
A favorite is the hospital environment. If you’ve been lucky enough to never be in one, you may think of hospitals as quiet. I was able to show colleagues how busy they can be: alarms and other sounds can be lost in the noise, and there are constant interruptions. I also showed how staff need to prepare before entering isolation rooms (for patient protection). Very little can come in or out without being cleaned with antibacterial wipes, which can harm electronics. These findings directly influenced the products I was working on.
What do you feel are some of the most pressing issues that UX researchers face within our discipline and in tech more generally? — Julia Korte
The biggest issue I run into regularly, and get questions about, can best be called “UX Theater,” a term coined by https://twitter.com/spydergrrl. There are a lot of organizations that prefer to ignore root problems, or do not want to fix the issues that are found. Some just want a rubber stamp on their work. It’s not always easy to tell from the outside what is going on internally.
Related reading: “It has happened to all of us, we keep talking on behalf of the user when crazy deadlines and resource constraints make…” (uxdesign.cc)
Would love to know what you consider the “hard” problems in AI to be. Also, in what ways, if any, is doing research with AI products different from regular field work? — Jarryullah
- People wanting to throw technology at a problem, or worse, just wanting to offer the newest solution, is a huge problem in AI.
- Misunderstanding how AI is built, what AI is capable of, and how much work is required to keep it running, accurate, and helpful is going to become a huge issue.
- Bias in data (all data is biased and we need to recognize that).
From a research perspective, we have to do a lot more questioning about user intentions and making sure there is a match between what is built and how it will be used. Unfortunately, because building AIs takes time, the initial research may not be as helpful.
Creating an AI, even a simple system, often takes weeks to months, and other things have often changed in the meantime. I am very much a proponent of just-in-time/just-enough research, and that can work in this space very nicely. More frequent research to inform needs based on changes in the AI and in the environment is needed. Heavy research (weeks/months) is not typically helpful for most digital solutions.
Most of our work stays the same, but I do think we’ll be needed as these organizations mature, the AIs mature, and more needs to be done to manage the systems. That’s an entirely new ecosystem of people to design for.
All data is biased and we need to recognize that.
What parts of the work that we as designers and UX researchers are doing, will be taken over by AI in the next 5 years and what work should we focus on that can complement AI and not be replaced by it? — Fabian
I haven’t yet seen an AI create an interface or anything close that doesn’t still need major human oversight. To me that means while the boring stuff may be done with an AI, we’ll have more time for more technical problems.
Humans would be needed to understand relevancy and to look at context. We’ll need to figure out new ways of presenting information, organizing large sets of data, etc. Our work becomes more complex and interesting. AI should never be left to work without a human.
AI for automatic transcription and translation of video recordings will replace some jobs in the future. I have wonderful friends whose jobs will be replaced by this work, as will people who work in factories and in most types of warehouse work.
This is where we can help influence companies to ask tough questions about retraining people and helping people get into new careers. This isn’t the first time that jobs have been eliminated by technology (e.g., there are very few farriers around; their jobs were lost very quickly as cars came in). But we can do a better job of helping people adjust, as well as figuring out which jobs should be protected and only done by humans.
AI should never be left to work without a human.
Do you have any insights on building UI for artificial intelligence? — Dandi Feng
Based on the problem you are solving, I’d focus on making sure the users are getting the information they need, and feel that they can trust the system’s data (and report when they see an issue). The most important thing for now is to provide as much transparency as possible/reasonable so that humans using these systems understand both their capabilities and limitations. Content/curation, training, and management are the three areas we need to communicate clearly.
Do you see users’ lack of trust for AI as the major issue right now? And do you think this lack of trust is based on fear or solid evidence? — Dandi Feng
Yes — both. As much as I love sci-fi, it has created and amplified a sense of fear about AI. Clearly, many AIs have been created without attention to human needs and concerns, and without proper oversight. There are reasons to be concerned. We do not yet have the regulations in place that are necessary to keep people safe, and that is an ongoing issue.
On the other hand, the AI community has not done a good job of educating the public about AI, and that is a really important thing that must be done: demystifying it so that people can learn to trust it, and also feel empowered and knowledgeable enough to point out when things are not working correctly.
Related reading: “As AI has been woven inextricably into the fabric of our lives, I keep wondering what the relationship will be between…” (uxplanet.org)
What has been the most unexpected issue/problem space/concept you (or your team) have encountered in the past 2–3 years (specific to AI)? Do you think that the issue is larger than just what you’ve encountered? How are you dealing/have you dealt with it so far? — Morgan B
We’ve known all along how complex humans are, and that many of the things we do (such as driving) are incredibly difficult tasks. We’re not very good at some of these tasks ourselves, and making a great AI to do them is even more challenging. These are all big issues, and regulations will be necessary to keep humans safe; that has become more and more apparent. The more we as UX researchers can be involved and influential in making ethics part of the discussion, getting good regulation in place that protects humans, and asking tough questions, the better.
Last, but definitely not least: one more AMA to go before our conference! Join us on Slack to ask your questions:
- Adam Mansour, Google: May 30 @ 5pm EST
Didn’t get to read the previous AMAs? Don’t worry, they’re right here: