Blog: Artificial Intelligence (AI) in Employment
This post is the final project for Professor Darakhshan Mir's Computer and Society class.
Many companies apply artificial intelligence in employee recruiting, hiring, and assessment. A 2017 Deloitte study found that more than 33% of company respondents already used AI in the interviewing process. However, because AI in recruitment has only emerged recently, there are still many ethical concerns about its application, both about the computing technologies themselves and about the exercise of professional responsibilities. For instance, although such algorithms reduce human labor and produce reliable results most of the time, they can also make unfair decisions that affect individual employees significantly.
In this blog post, we are going to look at some cases related to AI and employment and touch upon the possible ethical issues introduced by this kind of artificial intelligence.
As shown by the video above, there are many different products that combine AI and employment. For example, HireVue is a digital recruiting company that offers video interviewing with assessments: an algorithm scores candidates based on the speech they give in an interview video. Wade & Wendy is an AI company specializing in conversational chatbots for the hiring process. We can see that many different kinds of AI technologies now support hiring and employment.
However, with the widespread application of AI in employment, there are also cases of bias in the algorithms. For example, as shown by Figure 2 above, Ibrahim Diallo was unfairly fired by an algorithm because of a mistake by his boss: Ibrahim's direct manager had been laid off and was working from home during a transitional period, and during that time the ex-boss stopped ticking the box confirming that Ibrahim's contract was still in force. This missed step triggered the automated termination process, which revoked all of Ibrahim's company access. This case shows that the potential inaccuracy and lack of transparency of some employee assessment algorithms can affect individual employees significantly.
In another case, Amazon's machine learning specialists found that their recruiting engine magnified gender bias, as shown by Figure 3 above. Amazon's computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Because of male dominance across that period, most of the training data came from men. As a result, Amazon's system taught itself that male candidates were preferable: it penalized resumes that included the word "women." For instance, it downgraded graduates of all-women's colleges. Amazon edited the programs to make those particular terms neutral, but such a change cannot guarantee that the machines will not be discriminatory in other ways.
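The mechanism here is easy to demonstrate. The following is a minimal Python sketch with made-up numbers, not Amazon's actual system: if past human decisions rejected resumes containing a particular keyword more often, then a model that simply matches those past decisions reproduces the same skew, even though the keyword says nothing about qualification.

```python
import random

random.seed(0)

# Toy historical data: each resume is (has_keyword, was_hired).
# The hypothetical past decisions hired keyword-free resumes far more
# often, so the keyword becomes correlated with rejection.
history = []
for _ in range(1000):
    has_keyword = random.random() < 0.3       # ~30% of resumes contain the keyword
    hire_prob = 0.2 if has_keyword else 0.6   # biased past decisions
    history.append((has_keyword, random.random() < hire_prob))

def hire_rate(keyword_status):
    """A naive 'model': score a new resume by the historical hire rate
    of past resumes with the same keyword status."""
    group = [hired for kw, hired in history if kw == keyword_status]
    return sum(group) / len(group)

print(f"score with keyword:    {hire_rate(True):.2f}")
print(f"score without keyword: {hire_rate(False):.2f}")
```

Nothing in this "model" mentions gender, yet it systematically downgrades one group, which is exactly the pattern Amazon's engineers observed.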
As the previous section shows, the problems with applying AI in employment not only affect individual employees significantly but also have a negative impact on society. For example, according to the ACM Code of Ethics and Professional Conduct, computing professionals should make the public good the primary consideration. Given that 38% of Americans will be looking for a new job this year, making decisions that are subject to discrimination (Section 1.4 of the Code) could have a strong negative impact on the public, and it works against the public's pursuit of a good life, as introduced in An Introduction to Data Ethics.
Discrimination in an AI-centered hiring process may have different causes. The article "Hiring by Algorithm" suggests that the cause may be a fault in the machine itself: for example, the choice of training examples provided to a data-fitting procedure can make a profound difference in the decision rules it produces. The problem can also be bias in the process: although the role of the human is obscured in the decision making, the training data (the past decisions that the algorithm is fitted to match as closely as possible) is still produced by humans.
Since the problem has different causes, there are different possible solutions. On the technological side, the article "Quantifying Program Bias" introduces an inventive algorithm for evaluating whether a decision-making program is biased. It also introduces a data mining process that modifies the data so that algorithms trained on it are more likely to make non-discriminatory decisions.
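The paper's actual method uses probabilistic program verification, which is beyond the scope of this post, but the kind of quantity such tools check is closely related to the "four-fifths rule" disparate impact ratio used in US employment law. Here is a minimal Python sketch with hypothetical audit data (the numbers and the `is_protected` labels are invented for illustration):

```python
def selection_rate(outcomes):
    """Fraction of favorable outcomes (1 = hired, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(hired, is_protected):
    """Ratio of the protected group's hire rate to everyone else's.
    By the conventional four-fifths rule, a ratio below 0.8 is a
    red flag for possible discrimination."""
    protected = [h for h, p in zip(hired, is_protected) if p]
    others    = [h for h, p in zip(hired, is_protected) if not p]
    return selection_rate(protected) / selection_rate(others)

# Hypothetical audit of ten decisions made by a hiring algorithm.
hired        = [1, 0, 0, 0, 1, 1, 1, 0, 1, 1]
is_protected = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

ratio = disparate_impact_ratio(hired, is_protected)
print(f"disparate impact ratio: {ratio:.2f}")  # well below 0.8, so this audit flags the program
```

A real auditing tool would of course work over far more decisions and control for legitimate job-related factors, but the basic idea of comparing selection rates across groups is the same.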
As a legal solution, the article "Hiring by Algorithm" argues that law should be used as part of an anti-discriminatory framework to ensure that hiring policies, as aided by advancements in computing, do not inadvertently exclude well-qualified members of protected classes.
The fast development of AI techniques has driven the widespread application of AI in employment, helping to reduce human labor and produce reliable results. However, AI in employment also has many problems, including potential inaccuracy, lack of transparency, and discrimination. The algorithms can make unfair decisions that significantly affect individuals and also have a negative impact on society. Potential solutions include technological ones, which use algorithms to evaluate whether a decision-making program is biased, and legal ones, which use law as part of an anti-discriminatory framework.
Acm.org. (2018). ACM Code of Ethics and Professional Conduct. [online] Available at: http://www.acm.org/about-acm/acm-code-of-ethics-and-professional-conduct [Accessed 21 Mar. 2019].
Ajunwa, I. & Venkatasubramanian, S. (2016). Hiring by Algorithm: Predicting and Preventing Disparate Impact. SSRN Electronic Journal. doi:10.2139/ssrn.2746078.
Albarghouthi, A., D’Antoni, L., Drews, S., & Nori, A. (2017). Quantifying Program Bias. Retrieved March 21, 2019, from https://arxiv.org/pdf/1702.05437.pdf.
CBC News. Could your next job interview be with a machine? How AI could change hiring. Retrieved from https://youtu.be/gPzIF89E1Dg
Fired by an algorithm, and no one can figure out why. (2018, June 20). Retrieved March 21, 2019, from https://boingboing.net/2018/06/20/the-computer-says-youre-dead.html?_ga=2.263479244.1574229553.1553033472-1058922381.1537828952
HireVue. (n.d.). Hiring Intelligence | Assessment & Video Interview Software. Retrieved March 21, 2019, from https://www.hirevue.com/
Kobie, N. (2017, October 04). Who do you blame when an algorithm gets you fired? Retrieved March 21, 2019, from https://www.wired.co.uk/article/make-algorithms-accountable
Koller, D., & Friedman, N. (2012). Probabilistic graphical models principles and techniques. Cambridge, MA: MIT Press.
Miller, C. C. (2015, June 25). Can an Algorithm Hire Better Than a Human? Retrieved March 21, 2019, from https://www.nytimes.com/2015/06/26/upshot/can-an-algorithm-hire-better-than-a-human.html
Reuters. (2018, October 10). Amazon’s job-recruiting engine discriminated against women. Retrieved March 21, 2019, from https://nypost.com/2018/10/10/amazons-job-recruiting-engine-discriminated-against-women/
Riley, T. (2018, March 13). Get ready, this year your next job interview may be with an A.I. robot. Retrieved March 21, 2019, from https://www.cnbc.com/2018/03/13/ai-job-recruiting-tools-offered-by-hirevue-mya-other-start-ups.html
Snyder, B. (2019, January 10). Adina Sterling: How will artificial intelligence change hiring? Retrieved March 21, 2019, from https://engineering.stanford.edu/magazine/article/adina-sterling-how-will-artificial-intelligence-change-hiring?utm_source=YouTube&utm_medium=video&utm_campaign=TheFutureofEverything&utm_content=hiring010719
Stirling, M. Artificial Intelligence and Recruiting, Retrieved from:
Vallor, Shannon, and William J. Rewak. An Introduction to Data Ethics.
Weiss, S. (2018, October 17). Your next job interview might be with a robot. Retrieved March 21, 2019, from https://nypost.com/2018/10/17/your-next-job-interview-might-be-with-a-robot/