What Is It Like Going Through An AI Job Interview?
The job market has lately become a brutal playground. The "over 100 applicants" label on almost every LinkedIn job post hints at just how competitive it has become. As hard as it is for a candidate to secure a role, the flood of applications also makes it tough for recruiters to screen and hire the right people. Automation in hiring is not new; resume screening tools have been helping recruiters filter candidates based on their CVs for years. That said, recruiters are now adopting more sophisticated automation, and AI interviewing is the newest kid on the block.
Recruiters are using AI for the initial round of interviews to screen more potential candidates with minimal use of resources. While it sounds like a no-brainer for recruiters, it can be a dehumanizing experience for candidates. Moreover, outsourcing interviewing to an AI also means tasking it with judging real people. Given that AI isn't perfect and some chatbots can hallucinate during regular conversations, should we trust it to decide whether candidates are suitable for a role?
What to expect in an AI interview
After an initial resume screening, potential candidates are asked to schedule a virtual interview. Recruiters usually inform the candidates if it is an AI interview. The actual interview experience largely depends on the tool used by the recruiter, but most of them do a good job of hosting the candidate and asking relevant questions. For instance, an AI interview for a data analyst role by StartHunt AI included generic questions about the candidate's background and a few technical questions about confidence intervals and their real-world applications.
Many AI bots ask candidates to turn on their video during the interview, and some of them may analyze the video for head movements and overall body language. Candidates mostly talk to an AI-generated voice in a virtual meeting, which, in theory, is similar to a meeting with someone whose video is turned off. Some tools also try to emulate human behavior, like pausing or complimenting the candidate's responses, to give the false impression that the artificial intelligence is sentient, but the experience is far more mechanical than a conventional human interaction. A poster on Reddit who underwent such an interview said it was like talking to HAL 9000.
AI interview tools like Apriora maintain a live transcription of the conversation so the candidate can keep track of their responses. Others, however, may not be as transparent, leaving candidates guessing whether the AI received their answers correctly. Most AI tools also ask follow-up questions based on the candidate's responses, but the line of questioning isn't as nuanced as that of a human interviewer.
The AI interview system has its flaws
AI interviews could widen the gaps in the hiring process. A major cause for concern lies in the fundamental process of training an AI model. AI models are essentially taught specific tasks by being trained on large datasets, and models trained on biased data can amplify those biases in real-world applications. A recent study by Carnegie Mellon University found that female candidates were shown fewer ads for high-paying jobs, reflecting the existing gender imbalance in the job market. Flawed AI models can introduce these biases into the hiring process, and AI may discriminate based on race, gender, or age.
Additionally, interviewing with an AI can feel like a one-sided conversation. The lack of human touch can make the process feel dehumanizing for job seekers and may erode their trust in the company. While the technology is still nascent, it hasn't been received entirely positively by job seekers. According to a 2024 survey by Newsweek, 43% of job seekers reported feeling uncomfortable about being interviewed by AI bots, while 26% were unsure. For now, removing the human touch even from the initial stages of hiring places a significant level of trust in AI systems, a trust that artificial intelligence may not yet be ready to justify.