by Carmen Fernández and Alberto Fernández (Universidad Rey Juan Carlos)

Artificial Intelligence (AI) applications may have different ethical and legal implications depending on the domain. One application of AI is analysis of video-interviews during the recruitment process. There are pros and cons to using AI in this context, and potential ethical and legal consequences for candidates, companies and states. There is a deficit of regulation of these systems, and a need for external and neutral auditing of the types of analysis made in interviews. We propose a multi-agent system architecture for further control and neutral auditing to guarantee a fair, inclusive and accurate AI and to reduce the potential for discrimination, for example on the basis of race or gender, in the job market.

Image analysis in human resources: pros and cons
There has been a recent trend towards video-interview analysis in HR departments. Traditionally, AI played no more than an assistant role in HR, e.g. resume and CV scanning, but lately, applications such as HireVue [L1], Montage [L2], SparkHire [L3] and WePow [L4] have been changing how recruitment is carried out. An AI-based video-interview system can be programmed to check, during an interview, features such as age, lighting, tone of voice, cadence, keywords used (the substance of the conversation), mood, behaviour (eccentric and animated, or quiet, calm and not talkative), eye contact and, above all, emotions. The AI targets the specific traits, such as customer orientation, that employers want in their teams.
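To make this concrete, the record such a system produces per candidate might look like the following minimal Python sketch; the class and field names are our own illustrative assumptions, not the schema of any of the products listed above.

  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class InterviewAnalysis:
      """Hypothetical per-candidate output of a video-interview analyser.

      Field names are illustrative assumptions, not the schema of any of
      the commercial products mentioned above.
      """
      candidate_id: str
      estimated_age: int          # legally sensitive in many jurisdictions
      eye_contact_ratio: float    # fraction of the interview with camera eye contact
      speech_cadence_wpm: float   # cadence, measured in words per minute
      dominant_emotion: str       # e.g. the output of an emotion classifier
      keywords_matched: List[str] = field(default_factory=list)  # substance of conversation

Several of these fields (estimated age, inferred emotion) are precisely the attributes whose use is legally restricted or ethically contested, which is what motivates the auditing architecture described below.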

AI has produced benefits for HR so far, including:

  • Reduction of interview time per candidate, and thus of overall recruiting time.
  • Customised candidate experience and customised questions and answers.
  • Attention to detail (eye-contact time, emotions, intonation and body language) and the absence of interviewer bias (based on physical appearance, tattoos, etc.).

But there are several problems that accompany the use of these technologies:

  • Candidates are unfamiliar with video-interview analysis (for example, with lighting and camera settings), which could affect their overall performance.
  • Gender and racial bias: machine learning algorithms have traditionally been trained predominantly on data from white people.
  • Imprecision of the technology: classifiers trained with biased datasets. For instance, the Affectiva [L5] human-emotion dataset was fed with data from Super Bowl viewers and could presumably carry a cultural bias (a minimal auditing check is sketched after this list).
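As a minimal illustration of the kind of bias check an external auditor could run over screening outcomes, the sketch below computes a demographic-parity gap; the group labels and the 10-point threshold are assumptions for the sketch, not an established standard.

  from collections import defaultdict
  from typing import Iterable, Tuple

  def demographic_parity_gap(decisions: Iterable[Tuple[str, bool]]) -> float:
      """Gap between the highest and lowest screening pass rate across groups.

      `decisions` is an iterable of (group_label, passed_screening) pairs.
      A large gap is a red flag that the model treats groups unequally.
      """
      passed, total = defaultdict(int), defaultdict(int)
      for group, ok in decisions:
          total[group] += 1
          passed[group] += int(ok)
      rates = [passed[g] / total[g] for g in total]
      return max(rates) - min(rates)

  # Flag the process if pass rates differ by more than 10 percentage points.
  outcomes = [("group_a", True), ("group_a", True), ("group_b", True), ("group_b", False)]
  if demographic_parity_gap(outcomes) > 0.10:
      print("Potential bias detected: escalate to the external auditor")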

Controversial characteristics
We studied several potentially controversial characteristics, among them facial symmetry, race, gender and sexual orientation in voice and image recordings. The problem of racial bias in AI is not new: Fu et al. [1], for example, survey the difficulty of detecting mixed race under poor lighting conditions. MIT researchers have acknowledged racial bias in learning algorithms trained mainly on data from white people.

As an illustration of the advances in recognising sexual orientation in both images and sound, one study [2] required ethical supervision due to the opaque, invasive nature of the research and its use of real user data from dating applications. The study argues that there is a relationship between homosexuality and exposure to particular concentrations of hormones in the womb, and that sexual orientation can be inferred from morphological features (for example, the jawline and forehead).

Ethical and legal aspects of AI
Whilst the use of AI in this context may have its benefits, it also strips away aspects of humanity, reducing a candidate to a set of descriptors. The automation of HR processes could have ethical and legal implications that cannot be ignored. In some countries, companies are not allowed to ask a candidate’s age during recruitment. United States legislation has traditionally been particularly protective against racial discrimination in the workplace (the Civil Rights Act of 1964 forbids “improperly classifying or segregating employees by race”). And yet, even while these regulations exist to reduce discrimination, enterprises are given more and more freedom to customise their systems. We conclude that it is risky to adopt AI in recruiting blindly.

Multi-agent system architecture
At CETINIA (URJC) we are working on a multi-agent system architecture for auditing (Figure 1). The core of the architecture comprises three different parties that must collaborate: (i) the recruiter/company, (ii) an external auditor, and (iii) government/authorities.

Figure 1: Multi-agent system architecture for auditing.

An Interview design agent, based at the company’s central headquarters, is responsible for designing a general interview. The Interview auditing agent is based in the company’s branches and adapts the general interview format to the regional scenario of the country where the recruiting is taking place. The Selection process agent can cancel the process if controversies arise, or return a list of candidates to the central office if the process is fair. It is also capable of running checks with authorities and auditors.
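A minimal Python sketch of how these three roles could be wired together follows; the class names mirror the roles in Figure 1, but the interfaces and the cancellation rule are our own assumptions, not an existing implementation.

  from dataclasses import dataclass
  from typing import List

  @dataclass
  class Interview:
      features: List[str]        # e.g. ["keywords", "eye_contact"]
      country: str = ""          # filled in when a branch localises the template

  class InterviewDesignAgent:
      """Central headquarters: drafts the general interview template."""
      def design(self, features: List[str]) -> Interview:
          return Interview(features=features)

  class InterviewAuditingAgent:
      """Company branch: adapts the template to the local jurisdiction."""
      def localise(self, interview: Interview, country: str) -> Interview:
          interview.country = country
          return interview

  class SelectionProcessAgent:
      """Runs the process; cancels it on controversy, else returns a shortlist."""
      CONTROVERSIAL = {"age", "race"}     # illustrative rule only

      def run(self, interview: Interview, candidates: List[str]) -> List[str]:
          if self.CONTROVERSIAL & set(interview.features):
              raise RuntimeError("Process cancelled: controversial feature detected")
          return candidates               # shortlist returned to the central office

  # Typical flow: headquarters designs, a branch localises, the process runs.
  hq, branch, process = InterviewDesignAgent(), InterviewAuditingAgent(), SelectionProcessAgent()
  template = branch.localise(hq.design(["keywords", "eye_contact"]), "ES")
  shortlist = process.run(template, ["candidate_1", "candidate_2"])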

If the features analysed in the recruiting process break any law, or if the process contravenes basic civil rights, the Selection process agent asks for the approval of the Labour Law Agent or the Ethical Agent as necessary. If the recruiting process handles a candidate’s personal information, it requires the candidate’s approval for data handling. If a company is recruiting in another country, it needs to register with the Authorities agent.
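This approval workflow could be expressed as a rule-based gate along the following lines; the per-country table of banned features and the function signature are assumptions made for the sketch, with the real rules maintained by the legal experts behind the Labour Law Agent.

  from typing import List, Set

  # Illustrative per-jurisdiction rules; a real table would be maintained by
  # legal experts behind the Labour Law Agent, not hard-coded like this.
  BANNED_FEATURES = {
      "US": {"race", "age"},   # e.g. Civil Rights Act constraints
      "ES": {"age"},
  }

  def approve_process(features: List[str], country: str,
                      candidate_consent: bool,
                      recruiting_abroad: bool,
                      registered_with_authorities: bool) -> bool:
      """Return True only if every check described in the text passes."""
      banned: Set[str] = BANNED_FEATURES.get(country, set())
      if banned & set(features):
          return False   # needs Labour Law Agent / Ethical Agent sign-off first
      if not candidate_consent:
          return False   # personal data handling requires the candidate's approval
      if recruiting_abroad and not registered_with_authorities:
          return False   # foreign recruiting requires the Authorities agent
      return True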

Links:
[L1] https://www.hirevue.com/
[L2] https://www.montagetalent.com/
[L3] https://www.sparkhire.com/
[L4] https://www.wepow.com/es/
[L5] https://www.affectiva.com/

References:
[1] S. Fu, H. He, Z.-G. Hou: “Learning Race from Face: A Survey”, IEEE Trans. Pattern Anal. Mach. Intell. 36(12): 2483-2509, 2014.
[2] Y. Wang, M. Kosinski: “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images”, Journal of Personality and Social Psychology 114(2): 246-257, 2018.

Please contact:
Carmen Fernández
Universidad Rey Juan Carlos, Spain
