
Unlocking Potential: Empowering Youth in an AI-Driven Recruitment Landscape

Future of Work Series


In Brief


  • Large language models (LLMs), such as ChatGPT, and generative artificial intelligence (AI) more broadly stand to significantly impact the working world.
 
  •  The use of AI in recruitment offers both opportunities and risks for social mobility that should be front of mind for policymakers and HR leaders.
 
  • Young people from low-income backgrounds can leverage AI to navigate an increasingly technology-led recruitment landscape when they are empowered with the knowledge and tools to do so.


Introduction


Artificial Intelligence (AI) is the latest technology phenomenon to gain global attention, largely propelled by advancements in generative AI and large language models (LLMs), such as ChatGPT. AI is expected to be a game changer for the working world – from its impact on the labour market to the skills employees will need to succeed. At the same time, the Organisation for Economic Co-operation and Development (OECD), following on from its Employment Outlook of 2023, has stated that ‘urgent action is required to make sure AI is used responsibly and in a trustworthy way in the workplace’ [1].

 

For an organisation like the EY Foundation, there is great concern about the impact that AI might have on young people eligible for Free School Meals. Lynne Peabody, the Foundation’s CEO, said:

 


We know technology is rapidly changing the future of work, but more attention is needed to understand the implications on social mobility. The lack of fairness in the workplace for those from low-income backgrounds is well known, but will technology help tackle these issues or make them worse? It’s critical that an organisation like ours can answer that question. It’s why we are working to better understand the potential impact of AI and provide practical support for the young people we work with to maximise the benefits and eliminate any potential harm.

And this isn’t just about tomorrow. One area of employment that has a long-standing relationship with AI is recruitment. Take applicant tracking systems (ATS) and recruitment management systems (RMS) as examples. Collectively, such tools are already used by organisations to “manage and track the pipeline of applicants” across the recruitment process [2]. This includes automating common recruitment tasks, such as sourcing, CV screening, and candidate scoring, often with the help of AI techniques.

 

In research from 2019, it was reported that 98.8% of Fortune 500 companies in the USA use an ATS [3], while research from 2021 found that 90% of employers with RMS capabilities used these systems to “initially filter or rank potential middle-skills and high-skills candidates” [4]. As a result, candidates often need to go through several rounds of automated recruitment before they ever get to speak with a person.

Some have emphasised that the use of AI and increased automation in recruitment can be helpful in removing ‘affinity bias’, the propensity for people to over-value the skills and potential of people who ‘look like them’. However, equally, as evidenced by the field of AI ethics, there are concerns that AI models are often biased themselves, because of the data on which they were trained, as well as choices made by system designers.

 

John Walker, 18, a recent graduate of an EY Foundation Smart Futures programme, reflected on his experience of AI in recruitment:

 


I’ve been using AI to help me prepare for interviews – it’s a bit like having a professional mentor – which is great for people who don’t have easy access to that sort of support in the physical world. Though there is a definite downside to AI recruitment, because it lacks human skills like nuanced reasoning, I’m sure this will change in the future. In just a few years it could be that AI makes the recruitment process fairer for young people who are often marginalised.


AI in Recruitment: Boom or Bust for Social Mobility?


There is no doubt that AI presents organisations with an efficient way to streamline the hiring process. However, efficiency should not come at the expense of fairness. In fact, ethical concerns posed by the use of AI technologies in recruitment contexts have already been raised in law through the New York City Automated Employment Decision Tool Law (NYC AEDT) and the impending EU AI Act [5].

 

Let’s look at some of the recruitment tasks that are commonly AI-enabled and the opportunities and risks they present from the perspective of social mobility.

 

 

Candidate Sourcing

 

AI tools can assist in finding candidates by scanning numerous online platforms, job boards, and social media profiles to identify individuals who match the desired criteria for a posting. In addition, generative AI, that is, the use of AI to produce new content, including text, images, audio, and even video, can be used to draft job postings.

 

The opportunities for social mobility:

 

  • Recruiters can reach a greater potential pool of applicants than manual sourcing methods allow, which may give individuals without access to a professional network greater visibility to recruiters.
 
  • Tone meters in combination with generative AI can “make suggestions on how to improve the inclusiveness of the language used” in job postings prior to publication [6].

 

  • As generative AI capabilities mature, personalisation in job-matching is also likely to mature. This could enable candidates to query which jobs best fit their CVs, where gaps in their experience may lie, and what next steps would make them competitive applicants. Increased transparency and personalised feedback can assist young people in navigating the recruitment process and understanding the pathways to the role they want.

 

The risks for social mobility:

 

  • Depending on the parameters of the AI model used in candidate sourcing, such tools may algorithmically discriminate against individuals based on personal characteristics (e.g., race, gender, socio-economic status) or data that serves as a proxy for such characteristics (as explicit gender and race data is not typically included in model design). As an example of a proxy variable, consider the use of postal or zip codes in AI-assisted credit assessments for a bank. Even if race data is not considered by the model, because “zip code is strongly correlated with race” this datapoint ends up becoming a proxy for race [7].
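The proxy effect described above can be made concrete with a small sketch. The data and the sourcing rule below are entirely made up for illustration: a model that never sees group membership, but does see a postcode strongly correlated with it, reproduces the same disparity as if it had used the protected attribute directly.

```python
# Toy illustration (hypothetical data): even when a protected attribute is
# excluded from a model, a correlated feature such as postcode can act as a
# proxy and reproduce the same disparity.

# Each applicant: (postcode_area, group). In this made-up town, postcode
# area "A" is predominantly group "X", and area "B" predominantly group "Y".
applicants = (
    [("A", "X")] * 80 + [("A", "Y")] * 20 +
    [("B", "X")] * 20 + [("B", "Y")] * 80
)

# A naive sourcing rule that never looks at group membership,
# but favours postcode area "A".
def shortlisted(applicant):
    postcode_area, _group = applicant
    return postcode_area == "A"

def selection_rate(group):
    """Fraction of a group's members that the rule shortlists."""
    members = [a for a in applicants if a[1] == group]
    chosen = [a for a in members if shortlisted(a)]
    return len(chosen) / len(members)

print(selection_rate("X"))  # 0.8 -> group X shortlisted at 80%
print(selection_rate("Y"))  # 0.2 -> group Y shortlisted at 20%
```

Although the rule is 'blind' to group membership, the 80% versus 20% selection rates show how a correlated feature can carry the discrimination on its own.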

 

 

CV Screening

 

After sourcing, RMS can be leveraged to automatically screen CVs and identify key qualifications, skills, and experiences. Using natural language processing (NLP), an AI technique, these systems can analyse the content of CVs, match keywords, and rank candidates based on their suitability for specific roles. This can save recruiters significant time and effort by quickly narrowing down the pool of applicants for a given job posting. It is often at this stage that the automated or semi-automated decision on whether to progress with a candidate is made.
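A minimal sketch of the keyword-matching step gives a sense of what candidates are up against. The job keywords and CV snippets below are hypothetical, and real systems use far richer NLP, but the ranking principle is similar: terms from the posting are matched against each CV and candidates are ordered by overlap.

```python
# Minimal sketch (hypothetical keywords and CVs) of a keyword-based screen
# of the kind an RMS might apply: match terms from a job posting against
# each CV and rank candidates by how many keywords appear.
import re

JOB_KEYWORDS = {"python", "sql", "teamwork", "communication", "excel"}

def score_cv(cv_text):
    """Count how many of the job's keywords appear in the CV text."""
    words = set(re.findall(r"[a-z]+", cv_text.lower()))
    return len(JOB_KEYWORDS & words)

cvs = {
    "alex": "Built Python and SQL dashboards; strong teamwork.",
    "sam": "Retail experience with Excel and customer communication.",
    "jo": "Warehouse shift lead, forklift certified.",
}

# Rank candidates by descending keyword score.
ranking = sorted(cvs, key=lambda name: score_cv(cvs[name]), reverse=True)
print(ranking)  # ['alex', 'sam', 'jo'] -> 3, 2, and 0 keywords respectively
```

Note how the third candidate scores zero despite potentially relevant experience, simply because their CV does not use the posting's vocabulary; this is why the workshops described later cover writing CVs with machine readability in mind.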

 

The opportunities for social mobility:

 

  • We already know that hiring led by humans can be remarkably biased, which can disadvantage vulnerable or marginalised groups. If effectively governed, tested, and monitored, AI tools could be leveraged to mitigate human biases or variations in decision-making between multiple recruiters by applying consistent evaluation criteria to all candidates.
 
  • There are also concerns that recruitment has historically overemphasised credentials over skills. By focusing more on tangible skills, AI tools could open the door for individuals who have relevant work experience but not a university degree [8]. This could assist young people who face financial barriers to attending post-secondary education in finding work, and avoid the biases individual recruiters may hold in favour of specific academic institutions.

 

The risks for social mobility:

 

  • Despite consistent evaluation criteria, it is important to remember that AI models do not represent objective truths about people, our society, and our value. AI models are subjective by nature, as they are created by humans who make design choices that impact the overall fairness of a tool (e.g., via feature selection, whereby the data to be considered by a model is chosen). The data upon which models are trained can be unrepresentative or contain historical biases that affect how particular groups are treated (e.g., disproportionately selecting male candidates for managerial roles). As captured by the adage ‘garbage in, garbage out’, if the data on which an AI system is trained is biased or unrepresentative, its outputs will perpetuate that unfairness and inequality.
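The 'garbage in, garbage out' point can be illustrated with a deliberately tiny sketch. The historical data below is invented, and the 'model' is just a majority-vote rule rather than a real machine-learning system, but it shows the mechanism: a model fitted to biased past hiring decisions learns to reproduce the bias, even though it was never told to discriminate.

```python
# Illustration of 'garbage in, garbage out' (made-up data): a model fitted
# to biased historical hiring decisions learns to reproduce that bias.
from collections import Counter

# Historical managerial hires: (application_channel, hired?). Suppose past
# recruiters favoured candidates from "referral" channels, a route less
# available to applicants without professional networks.
history = (
    [("referral", True)] * 45 + [("referral", False)] * 5 +
    [("open_application", True)] * 10 + [("open_application", False)] * 40
)

def fit(examples):
    """A minimal 'model': predict the majority historical outcome per channel."""
    outcomes = {}
    for channel, hired in examples:
        outcomes.setdefault(channel, Counter())[hired] += 1
    return {ch: counts.most_common(1)[0][0] for ch, counts in outcomes.items()}

model = fit(history)
print(model["referral"])          # True  -> referred candidates progress
print(model["open_application"])  # False -> open applicants are rejected
```

Nothing in the fitting procedure mentions networks or income, yet the learned rule systematically rejects the channel that disadvantaged applicants rely on: the bias lives entirely in the training data.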

 

Video Interviewing and Assessments

 

For applicants that progress past the initial CV screening stage of recruitment, some companies are making use of AI-powered video interviewing platforms or recruitment games to further shortlist candidates. Oftentimes, AI interviewing tools analyse facial expressions, voice tone, and body language, in addition to what is said during an interview to provide insights on candidate behaviour and characteristics. Automated assessments and coding challenges are also used to evaluate candidates' technical skills and problem-solving abilities.

 

The opportunities for social mobility:

 

  • AI interviews are often held without a human practitioner being present and, in some cases, will permit candidates to re-record responses. For those early in their careers, the opportunity to practise responses and record multiple takes can lessen anxiety and improve the candidate experience.

 

The risks for social mobility:

 

  • The ways in which AI interviews are assessed may be opaque to users, particularly if employers use third-party tools that they have not developed themselves. Given the asymmetric power dynamics that exist between employers and applicants, this can be discouraging, especially so for young people.
 
  • Models that consider emotional affect, body language, and tone of voice may discriminate against particular groups, as what counts as an appropriate or desirable emotional response may differ across cultures and genders. This poses a particular concern for candidates who are neurodivergent and may not exhibit emotional responses or social behaviour in the same way as their neurotypical peers, yet are perfectly capable of fulfilling work expectations.

 

 


Harnessing AI to Put Your Best Foot Forward


At the EY Foundation, we are committed to giving young people a great start to their working lives. To deliver on our mission, there is a pressing need to ensure that young people understand how the working world is evolving because of novel technologies, as well as what practical steps they can take to put their best foot forward. To this end, we’re working with leading AI ethics and People Advisory Services experts in EY to craft and deliver workshops on how young people can level up their job applications and understand how AI might impact their experience. In particular, we’ll be focusing on the following topics:

 

  • Optimising professional social media profiles for machine readability
 
  • Leveraging generative AI to make more impactful CVs and cover letters
 
  • Preparing for and understanding the AI interview process
 
  • AI bias and unfairness – what candidates need to know
 
  • Changing labour market and skills demands 

 

In addition to working with young people, we will share the impact of this pilot with employers and eagerly welcome the opportunity to learn from others who are developing solutions to address the risks and seize the opportunities presented by AI. Our goal is to ensure that AI is utilised to build a more inclusive workforce.



Conclusion


With reflection and thoughtful governance, it is possible to harness the power of AI to deliver on social mobility goals. However, it is crucial to be aware of, and address, the social mobility pitfalls that can arise from known fairness issues with AI models in order to create the conditions for a truly inclusive working world. By actively seeking solutions, sharing experiences, and collaborating with employers and the communities we serve, we can collectively work towards a positive technology future that puts the interests of future generations at the centre.