---
layout: default
title: "Banking on Artificial Intelligence: In Hiring Drive, Bots Are Calling the Shots Now"
description: "An Economic Times report on algorithmic video interview systems deployed by Axis Bank and Bajaj Allianz to screen thousands of job candidates, featuring Sunil Abraham's warning about homogenization of emotional expression and concerns over racial and neurological bias in facial recognition technology."
categories: [Artificial Intelligence, Media mentions]
date: 2019-06-04
source: "The Economic Times"
authors: ["Anjali Venugopalan"]
permalink: /media/banking-on-artificial-intelligence-in-hiring-drive-bots-are-calling-the-shots-now/
created: 2025-12-16
---

**Banking on Artificial Intelligence: In Hiring Drive, Bots Are Calling the Shots Now** is a news report published in *The Economic Times* on 4 June 2019, written by Anjali Venugopalan. The article documents the adoption of AI-powered video assessment platforms by major Indian employers to analyze candidates' facial expressions, voice tone and emotional states during recruitment, featuring Sunil Abraham's critique of algorithmic homogenization alongside evidence of racial bias in facial recognition technology from MIT researcher Joy Buolamwini.

## Contents

1. [Article Details](#article-details)
2. [Full Text](#full-text)
3. [Context and Background](#context-and-background)
4. [External Link](#external-link)

## Article Details
- 📰 **Published in:** The Economic Times
- ✍️ **Author:** Anjali Venugopalan
- 📅 **Date:** 4 June 2019
- 📄 **Type:** News Report
- 📰 **Newspaper Link:** Read Online
## Full Text

**Synopsis:** Algorithms analyse expressions and tone to check for traits such as confidence and anger in video interviews.

NEW DELHI: The future of hiring is already upon us.

Algorithms are analysing people's expressions and tone of voice to check for traits such as "confidence" and "happiness" during video interviews.

The robotic video assessment software is then used to hire candidates — customer service operators and assistant vice presidents alike — though the process comes with its own set of problems.

Axis Bank used algorithm-based video interviews — along with aptitude tests — to hire around 2,000 customer service officers from a pool of more than 40,000 applicants this year, said Rajkamal Vempati, HR head of the private sector bank, adding it could standardise and scale up the process of hiring.

HR managers only gave offer letters, he said.

Nirmal Singh, CEO of Wheebox, a division of PeopleStrong which carried out the hiring, said it trained the face-indexing software — sourced from Microsoft — using around 50,000 candidates who had applied to Axis Bank in 2017. The software picked up emotional states such as "nervousness" and "happiness" based on eye movements, expressions and tone of voice and marked the candidates, Singh said. Scores from candidates who were shortlisted were used to come up with the "cutoff" for these traits.

Insurance provider Bajaj Allianz has hired more than 1,600 people, including underwriters and assistant vice presidents, with the help of robotic video assessments that analysed behaviour, said Vikramjeet Singh, chief HR officer, adding it could help reduce human bias.

CONCERNS OVER SOFTWARE'S BIASES

Talview, a Palo Alto-headquartered company with operations in Singapore and the United States, provided the assessment for the insurer.

The software, sourced from Microsoft and IBM, can analyse states such as "anger" and "happiness" from expressions, "confidence" from voice tone and traits like "ability to work in a team" and "decisiveness" from text analysis, according to Rajeev Menon, chief product officer, Talview.

Candidates may be able to beat questionnaires by giving expected answers to questions like "Can you work in a team?", but video assessments pick up on subtleties in expression and vocabulary, and cannot be gamed, Menon said.

Be that as it may, Amazon.com scrapped its artificial intelligence-based recruiting system after finding it was biased against women, according to an October 2018 report by Reuters. The system had been trained on historical data in which far more men than women had made it into the company.

"If you can fool a human, you can fool a computer," said Sunil Abraham, executive director of Centre for Internet and Society.

Recruitment algorithms could "homogenise the emotional economy" by forcing people to act a certain way, he said.

Since the software is based on expressions and tone of voice, it could disadvantage less expressive people, like those who are autistic, said Wheebox's Singh.

Facial recognition systems from companies such as IBM, Microsoft and Amazon misidentified the gender of dark-skinned women as often as one in three times (error rates of 20-35%), a 2018 study by MIT researcher Joy Buolamwini found. For white males, the error rate was 0.8%.

VIDEO ASSESSMENTS

Facial recognition has nothing to do with video analytics, Wheebox's Singh said. The two are, however, closely linked, said Animashree Anandkumar, professor of computing and mathematical science at California Institute of Technology.

She said such software was "deeply problematic", as it could correlate wrong factors (like gender or skin colour) and show that as the cause for success.

It is possible dark-skinned people would be disadvantaged, said Menon of Talview. The company uses facial expression as just one input among many and gives it a low weightage, he said.

The software they use is only 39% accurate, and will improve with more data, said Ridhima Gauba, co-founder of Interview Air, a Navi Mumbai-based company that provides a similar service to companies and colleges.

Companies also say video assessments are a risky business.

Bajaj Allianz does not use video assessments for recruitments beyond middle management.

It is "important to see a person physically" when hiring for senior positions, said Asha Sharma, manager (corporate HR) of Everest Industries.

The company, however, uses pre-recorded video interviews — where the computer asks questions — to hire juniors from campuses, she said.

{% include back-to-top.html %}

## Context and Background

This 2019 report documented the early deployment of emotion-recognition AI in Indian corporate hiring, revealing how algorithms trained on facial expressions and vocal patterns had become gatekeepers for thousands of job opportunities. Axis Bank's use of Wheebox's Microsoft-sourced software to screen 40,000 applicants represented automation at unprecedented scale, with HR personnel reduced to distributing offer letters after algorithmic vetting. The system purported to detect "nervousness" and "happiness" from eye movements whilst establishing personality cutoffs derived from previously successful candidates.

Sunil Abraham's warning about "homogenising the emotional economy" identified a subtle yet profound social risk: algorithmic recruitment systems imposing narrow templates of acceptable emotional expression that candidates would feel compelled to mimic. This went beyond traditional concerns about employment discrimination to encompass behavioral conformity pressures, where divergence from algorithmically-approved emotional patterns could foreclose economic opportunity regardless of actual job competence.

Joy Buolamwini's MIT research findings, showing 20-35% error rates for dark-skinned women versus 0.8% for white males in commercial facial recognition systems from IBM, Microsoft and Amazon, exposed technical inadequacies with discriminatory impacts. Talview's acknowledgment that darker-skinned candidates "would be disadvantaged" whilst claiming low weightage for facial analysis illustrated the defensive postures vendors adopted when confronted with bias evidence. Interview Air's admission of 39% accuracy underscored how experimental technology with coin-flip reliability was nonetheless being deployed for consequential hiring decisions.
The industry's attempt to distinguish "video analytics" from "facial recognition" appeared semantic given functional overlaps acknowledged by Caltech professor Animashree Anandkumar, who termed such systems "deeply problematic" for correlating protected characteristics like race or gender with employment success. Amazon's 2018 scrapping of its AI recruitment tool after discovering gender bias demonstrated how training data reflecting historical discrimination inevitably reproduced those patterns, yet Indian firms continued adopting similar approaches. Companies' restriction of algorithmic hiring to junior and middle-management roles, requiring face-to-face interviews for senior positions, suggested implicit acknowledgment that automated systems lacked nuance for complex evaluations whilst remaining acceptable for bulk processing of less powerful workers.

## External Link

- Read on The Economic Times