
Machine Behavior and UX Challenges in the AI Era

Bridging the Gap Between Machines and Humans


As an intermezzo: it has been a year since I got off the ground in my first real-world job as a user experience (UX) practitioner, specifically in the field of UX Engineering. If some of you don't recognise what that means, go read this first.

Over that year, I've realised that UX Engineering isn't bound to a particular technology stack; the role is dynamic, which is also how tech giants like Amazon, Google and Microsoft frame their UX Engineer / Design Technologist roles. Take a look at Amazon's role description here if you want to verify it yourself.

In fact, if you work as a UX Engineer, roughly 60% of your time will make you feel a bit like a design researcher as well: spending your time reading, understanding and, the fun part, testing design ideas by mixing in a portion of engineering or technology through code,

or simply, high-fidelity prototyping with code.

My Grab Food Prototyping Project

Okay, enough with the intermezzo and let’s get down to business.

The Black Box Problem in Artificial Intelligence

Artificial intelligence is the buzzword almost everyone has heard of, yet it is hard to digest because of its current inability to map human abstraction in terms of behaviour and ethical frameworks, which leads to bias in our social symbiosis. Speaking of technicalities, I have no doubt that fields such as DRL, NLP and so forth are taking good care of the advancement of AI, but as Kiri Wagstaff said,

Zhou, Jianlong & Chen, Fang, Human and Machine Learning, Springer, 2018, pp. 4–5.

Kiri Wagstaff is a researcher at NASA's Jet Propulsion Laboratory; you can find her video on AI used in Mars exploration here. Flabbergasting!

So what did Wagstaff mean by human users and human factors? At the root level, any breakthrough in AI, however large, that comes with no human understanding of ethics and behaviour will remain just an expert system, or in other words, narrow artificial intelligence. Now, why?

First, take a look at the abstract of Andrej Dameski's paper:

Dameski, Andrej, 'A Comprehensive Ethical Framework for AI Entities: Foundations', in M. Iklé et al. (eds.), AGI 2018, LNAI 10999, Springer Nature Switzerland, 2018, pp. 42–51.

In terms of technicality, there's no doubt that what we have achieved may be just Turing's daily daydream come true, but its inclusivity remains in question. Everyone, even Google, is working to define and connect parts of our humanity to machines, providing a white box for everyone to understand, as suggested by Prof. Antti Oulasvirta in his talk at the University of Tsukuba last year.

All of the issues discussed above are known as the black box problem in artificial intelligence or, in the simplest terms, the unexplainability of AI, which widens the gap in both aspects, technology and humanity, and even extends to the business perspective.

But wait, did I mention a white box?

The White Box Approach to Artificial Intelligence

In the same book in which I referenced Wagstaff, Jianlong Zhou and Fang Chen suggest that a white box approach to AI has four elements that, put in the simplest terms, make AI inclusive, or 'AI for Everyone', as Andrew Ng calls it. Those elements are:

  • Visibility
  • Explainability
  • Transparency
  • Trustworthiness

I'll explain each element in its own section, adding more sophisticated references for a better in-depth understanding. For now, let's go with general summaries and some real-world case studies.

A. Visibility (Case Study: Plant Disease)

Recently, researchers inspired by the success of deep learning in computer vision have applied it to improve the performance of detection systems in various domains, one of which is plant disease.

The question is: why use AI for plant disease, and why does it matter? First, plant diseases cause great damage to agricultural crops by significantly decreasing production; protecting both the quality and the quantity of crops therefore matters to the aforementioned business perspective.

So why not just use humans instead of AI, or simply machines? Here's the case: human reinforcement requires a huge amount of labour time to continuously monitor the data, which is pivotal to identifying the outbreak of certain diseases and preventing them from spreading massively to other plants.

It's not that we can't do it; in fact we can, but inefficiently. This is where AI comes to our aid, using image processing and machine learning to develop the necessary algorithms as the baseline of the AI's learning process. It involves many people: business stakeholders, engineers, scientists and user domain experts.

The aforementioned case study can be explored here. The takeaway is that the visibility element can be translated as a visual representation that lets humans continuously monitor all the necessary data streams: it produces a vivid interface output for us humans to understand.
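To make that pipeline concrete, here is a toy sketch, not a real plant-disease detector: it generates synthetic 'leaf' images in which diseased leaves carry brown lesions, extracts a simple greenness feature, and classifies new leaves with a nearest-centroid rule. The image generator, the feature and the class names are all invented for illustration.

```python
import numpy as np

def make_leaf(diseased: bool, rng: np.random.Generator) -> np.ndarray:
    """Generate a synthetic 8x8 RGB 'leaf'. Diseased leaves get a brown lesion."""
    img = np.zeros((8, 8, 3))
    img[..., 1] = rng.uniform(0.6, 0.9, (8, 8))   # healthy tissue: strong green
    img[..., 0] = rng.uniform(0.0, 0.2, (8, 8))   # a little red
    if diseased:
        r, c = rng.integers(0, 5, 2)              # random 4x4 lesion patch
        img[r:r + 4, c:c + 4, 0] = 0.6            # lesion: red up
        img[r:r + 4, c:c + 4, 1] = 0.3            # lesion: green down
    return img

def greenness(img: np.ndarray) -> float:
    """Hand-crafted feature: mean green minus mean red."""
    return float(img[..., 1].mean() - img[..., 0].mean())

class NearestCentroid:
    """Classify by whichever class centroid the greenness feature is closest to."""
    def fit(self, X, y):
        feats = np.array([greenness(x) for x in X])
        labels = np.array(y)
        self.centroids_ = {lab: feats[labels == lab].mean() for lab in set(y)}
        return self

    def predict(self, X):
        return [min(self.centroids_, key=lambda lab: abs(greenness(x) - self.centroids_[lab]))
                for x in X]

rng = np.random.default_rng(0)
X = [make_leaf(d, rng) for d in [False] * 20 + [True] * 20]
y = ["healthy"] * 20 + ["diseased"] * 20
clf = NearestCentroid().fit(X, y)

test = [make_leaf(False, rng), make_leaf(True, rng)]
print(clf.predict(test))  # ['healthy', 'diseased']
```

A real system would replace the hand-crafted greenness feature with a trained convolutional network, and would surface these per-image scores on a monitoring dashboard, which is exactly where the visibility element lives.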

B. Explainability (Case Study: Human–Machine Interaction)

There are two directions for simulating advanced human–machine interaction in machine learning systems. The first acts at the local level, surfacing the reasoning behind why a particular model decision was made for the current query sample, in a way that aligns with user needs.

In other words, machine usability testing; I believe most UX practitioners are familiar with usability testing. This direction may help provide a good understanding of how the models we've trained behave, and of the supporting factors, so that humans can give consistent and concrete feedback.
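A minimal sketch of that local-level direction, assuming a simple perturbation-based approach (replace one feature at a time with a baseline value and watch the score move). The model, its weights and the feature names below are all hypothetical, invented purely for illustration:

```python
import numpy as np

# Treat the trained model as a black box: we only call predict_score().
WEIGHTS = np.array([2.0, -1.0, 0.5])       # hypothetical learned weights
FEATURES = ["income", "debt", "tenure"]    # hypothetical feature names

def predict_score(x: np.ndarray) -> float:
    return float(WEIGHTS @ x)

def local_explanation(x: np.ndarray, baseline: np.ndarray) -> dict:
    """For one query sample, report how much the score changes when each
    feature is replaced by its baseline value; larger |delta| = more influential."""
    base_score = predict_score(x)
    deltas = {}
    for i, name in enumerate(FEATURES):
        x_pert = x.copy()
        x_pert[i] = baseline[i]
        deltas[name] = base_score - predict_score(x_pert)
    return deltas

x = np.array([3.0, 2.0, 1.0])
print(local_explanation(x, np.zeros(3)))
# {'income': 6.0, 'debt': -2.0, 'tenure': 0.5}
```

The per-sample deltas are exactly the kind of reasoning output a UX Engineer could surface in an interface, so that users can see why this decision was made for this sample rather than a global, abstract justification.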

Consistency in this case refers to human labelling: not the bad kind of labelling we learned about in sociology, but the labelling of certain objects or, as Don Norman famously put it, the design of everyday things.

Here's one of our experiments in simulating a digital shopping experience with AI, which simulates human–machine interaction. It also draws on the visibility element, since we implemented computer vision too.

C. Transparency and Trustworthiness (Case Study: Domain Users)

This part refers back to the black box problem mentioned earlier in this piece: a fundamental challenge in AI advancement is how to ensure that domain users, or common people, understand what's under the hood of the solutions delivered to them, driven by certain algorithms.

In short, how do you know the algorithm is inclusive? Who built it? And, most importantly, who controls it?

This discussion branches into a deeper philosophical understanding of ethics, which drives behavioural changes underpinned by the cognitive processes each individual brain uses when handling specific information.

The Rise of Machine Behavior: Pros and Cons

Recently, researchers at the MIT Media Lab have been calling for a new field of research known as machine behavior, which takes the study of artificial intelligence beyond the computer science and engineering world into biology, economics, politics, psychology and other behavioural and social sciences.

To my understanding, this is a breakthrough: it sees AI systems as actors that make decisions and, beyond that, take actions autonomously, with their own behavioural patterns and ecologies, which emphasises the four elements Zhou and Chen advised, discussed in the previous sections.

Furthermore, this field offers a great opportunity to unite all scholars studying machine behavior under a single banner, to recognise common goals as well as complementarities. But, fairly speaking, every form of scientific advancement also brings cons: novel behaviors.

Let me provide an example. Say, for instance, a hypothetical self-driving car is sold as being the safest on the market. One of the factors that makes it safer is that it knows when a big truck pulls up along its left side and automatically moves itself three inches to the right while still remaining in its own lane.

But what if a cyclist or motorcycle happens to be pulling up on the right at the same time and is thus killed because of this safety feature?

UX Challenges

These particular issues affect not only engineering but also the design perspective. Specifically, I'm referring to computational design thinking, which concerns how to synthesise an algorithmic design by decomposing it into multiple micro-instances, or possibilities, based on an abstraction of our behaviour.

To decode our complex behaviours, Payne and Howes (2013) suggested that designers should acknowledge several components: first, ecology, which is strongly related to how a certain environment affects certain behaviour, as well as the symbiotic relationships between individuals or groups.

Then mechanism, which can be translated as the behavioural patterns we exhibit when facing certain phenomena, including the disruption of AI technologies; and lastly utility, which is foremost since it is the reactive factor arising from mechanism, referring to Actor-Network Theory (ANT).

Okay, I know this article is lengthy enough, so I'll continue explaining computational thinking and design in future parts; it is pivotal to reshaping UX understanding in the realm of Artificial Intelligence.

Thank you for reading.

Connect with me on Twitter @cordova or Instagram @cordovansyah for further discussion!

Source: Artificial Intelligence on Medium
