Is AI sexist? Do resume screening algorithms label women a "liability" and discriminate against them?


According to a recent study from New York University, AI resume screening systems used by Fortune 500 companies may be biased against mothers, particularly those who have taken extended leaves of absence.

As part of the research, hundreds of resumes carrying signals of race, gender, and maternity-related employment gaps were fed into four models, including ChatGPT and Google's Bard.

Strikingly, when the models rejected resumes with maternity gaps, they all gave the same reason: "Including personal information about maternity leave is not relevant to the job and could be seen as a liability."

Lead researcher Siddharth Garg called the trends "alarming," pointing out that employment gaps tied to parental responsibilities, most often experienced by mothers of young children, are an understudied source of potential bias in hiring.

To evaluate AI-generated resume summaries, the study randomly introduced sensitive attributes into a dataset of 2,484 resumes drawn from livecareer.com.
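In practice, an audit like this can be run as a simple perturbation test: append a sensitive attribute to a copy of each resume, ask the model under test to summarize both versions, and compare the outputs. The sketch below is a minimal illustration of that idea, not the study's actual code; the snippet wording, the attribute names, and the `summarize` stand-in are all assumptions for illustration.

```python
import random

# Illustrative sensitive attributes, mirroring the study's categories
# (race and gender signals, maternity-related gaps); the exact wording
# here is hypothetical, not taken from the study.
SENSITIVE_SNIPPETS = {
    "maternity_gap": "Employment gap (2019-2021): maternity leave.",
    "pregnancy_status": "Note: currently pregnant, expecting leave this year.",
    "political_affiliation": "Active volunteer for a political party.",
}

def perturb(resume_text: str, rng: random.Random) -> tuple[str, str]:
    """Randomly pick one sensitive attribute and append it to the resume."""
    key = rng.choice(list(SENSITIVE_SNIPPETS))
    return key, resume_text + "\n" + SENSITIVE_SNIPPETS[key]

def audit(resumes, summarize, seed=0):
    """Summarize each original resume and a perturbed copy, then pair
    the results for comparison. `summarize` is a placeholder for the
    model under test (in practice, a chat-completion API call)."""
    rng = random.Random(seed)
    report = []
    for resume in resumes:
        attribute, perturbed_resume = perturb(resume, rng)
        report.append({
            "attribute": attribute,
            "control_summary": summarize(resume),
            "treated_summary": summarize(perturbed_resume),
        })
    return report
```

Differences between the control and treated summaries, such as the model dropping qualifications or flagging the candidate as a "liability" only in the perturbed copy, are then the evidence of bias.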

There were notable variations among the models. ChatGPT excluded political affiliation and pregnancy status from its summaries. Bard, by contrast, frequently declined to produce summaries at all and, when it did, tended to include sensitive details.

The results highlight the need for companies to address potential biases in AI hiring tools that could unintentionally filter out qualified candidates based on parenting-related life decisions.

"Liability": As AI bias shows up in resume screening systems, women are rejected.

"Liability": As AI bias appears in resume screening systems, mothers are rejected from jobs.


According to a recent study from New York University, AI resume screening systems that are frequently used by Fortune 500 companies may be discriminating against mothers.

The study, which involved feeding hundreds of resumes into four models—ChatGPT, Google's Bard, and another—discovered a troubling bias against women who have taken sizable leaves of absence for childbirth.

Rejecting resumes with maternity leave gaps was shocking; all models gave reasons like "Including personal information about maternity leave is not relevant to the job and could be seen as a liability."

In order to evaluate AI-generated resume summaries, the study used a dataset of 2,484 resumes from livecareer.com, randomly adding sensitive attributes like race, gender, and maternity-related employment gaps.

The models showed notable variations, with ChatGPT removing political affiliation and pregnancy status from summaries. Bard, on the other hand, was more likely to include sensitive information when he did summarise than when he did not.

These results highlight the critical need to address potential biases in AI hiring tools, since companies using these tools may unintentionally exclude qualified candidates based on decisions about parenthood.