Commentary: Why I as a recruiter can’t ignore ChatGPT anymore
Responses from the likes of ChatGPT are ultra-realistic and scarily indistinguishable at times, but specific contexts and feelings are only understood by another human, says Kerry Consulting’s Agnes Yee.
SINGAPORE: ChatGPT, the newfangled robot helper on the tip of everybody’s tongue, has stirred up quite a debate in the recruitment industry.
Many recruiters I have spoken to are hesitant to adopt generative artificial intelligence (AI) tools such as ChatGPT, and I understand that. I am hesitant, too.
Viewpoints vary in both substance and intensity. There seem to be two doors - one labelled “passionate advocate”, the other “staunch disapprover”.
The debate is omnipresent, seeping into water cooler conversations at every opportunity. We are all curious as to what this new technology can do for us, and how it might be exploited and used against us.
NO BEHIND-THE-SCENES WIZARDRY WITH GENERATIVE AI
Perhaps without realising it, recruiters are already using AI and enjoying the benefits of it. Take LinkedIn for example, where its recruiter-facing product will automatically suggest top candidates based on keywords or other parameters we feed it. There are countless examples of AI doing background work for busy talent acquisition professionals, letting them focus on what is important.
We are seeing more conversations around ChatGPT and generative AI specifically because of how in-your-face they are. Perhaps ignorance has been bliss in the past?
There is no behind-the-scenes wizardry with generative AI. No shielding from the sausage-making process. You are watching these models produce responses in real time and the work that they produce can be marked as yours, should you choose to use it.
Office-wide debates that have flared up are rarely about whether we should use AI in recruitment. I think that most have acknowledged that AI will play a role in our profession. The question now circles around the way we use generative AI. What is considered appropriate? What is ethical? These questions are understandably divisive in a profession built on genuine human connection.
IF TEMPLATES ARE ACCEPTABLE, WHY NOT AI-GENERATED WORK
The daily grind of a recruiter involves a significant amount of background work: updating digital Rolodexes, drafting response documents for prospective clients, managing people, budgeting and accounting, and the list goes on.
Administrative work in recruitment is often quite binary. It requires a lot of numbers and words, but little feeling or strategic thought. The consensus among the many recruiters I’ve spoken to is that generative AI has a place in cutting through endless spreadsheets and Word documents.
Another potential application of generative AI is in the realm of job description writing. This area, while certainly murkier than “pure numbers” work, is still administrative by nature. You can write an outline that includes the right information and the likes of ChatGPT will help you to format and display it. Job description writing is a serious drain on a recruiter’s time (we write thousands every year) and so having an AI assistant produce them is, in theory, going to be a significant help.
Here’s the issue: With a job description you are getting into candidate- and employer-facing materials.
Will the employer you’re advertising on behalf of be comfortable with an AI-generated job description? Won’t they expect the human touch a recruiter is assumed to provide? Oftentimes, they won’t mind; they want a problem solved and how you solve it is irrelevant. But for some, it will be of critical importance.
When we step into AI response generation, the waters become noticeably greyer. I hear you exclaim: “Recruiters use automation and template responses all the time, why would generative AI be different?”
The answer has to do with awareness. When a candidate receives news - good or bad - via a template response, they will most often be able to surmise that the response was born of an automated process. At times, the automated response will even tell on itself in the footer of an email. These responses are generally expected and accepted; a part of digital business that is mutually understood. The end reader is aware of what they are dealing with.
DATA PRIVACY AND DECEIT
Generative AI is wholly different. The way that many seek to use it is effectively to ghostwrite, so the personalised nature of response generation makes for tricky business. An AI tool will never truly know the person you are responding to.
To use ChatGPT to generate responses to candidates or clients, you would first need to feed it their correspondence, which might be considered a breach of data trust. Recruiters also need to consider whether the person on the receiving end of an AI response would be comfortable knowing it was written by generative AI, while under the impression that the recruiter themselves wrote it.
So, is there a level of deceit to response generation? Recruiters need to think about how they will handle this issue.
Each case will be different, but I imagine that soon we will see many recruiters develop AI communications policies that are made available (and clear) to recruitment process outsourcing stakeholders. Whether recruitment firms elect to use AI to create responses is one thing; if and how they disclose this information to business partners is another.
WHERE GENERATIVE AI FALLS SHORT
There are many technical pitfalls when it comes to generative AI. For a start, ChatGPT’s knowledge is currently limited to information and data from before late 2021, meaning it can’t answer questions about current events. Additionally, generative AI can struggle with unique or complex job requirements.
Generative AI bots learn based on the information they are fed. This makes way for biases to pervade algorithms, and, in an age of increasing diversity, makes leaning on AI for candidate evaluation unacceptable.
I do not trust that AI will ever be able to evaluate a candidate list with total objectivity and an understanding of context. It is a sensitive area that many humans still struggle with - so how can we trust a digital assistant with it?
Many believe that generative AI cannot replicate the human connection and personal touch that a recruiter brings to the process. I believe that too. You can push technology as far as possible; have it digest and parse billions of nodes of information, but at the end of the day it will still fail to fully emulate a human.
Sure, responses from the likes of ChatGPT are ultra-realistic and scarily indistinguishable at times, but specific contexts and feelings are only understood by another human.
HEALTHY SCEPTICISM FITS THE BILL
We question whether we will lose touch with the human-centric nature of our business. What we should be looking for is the right balance, whereby AI handles background work, allowing recruiters to do what they do best.
In fact, many of us have already struck that balance without realising it; generative AI tools are just new components to add to the mix.
The debate on AI will rage on for sure. Generative AI has not yet found a comfortable place in the recruitment process, nor in any industry for that matter. It is new and shiny. It is also untested and unreliable.
For myself, a healthy dose of scepticism and a positive outlook are in order.
OpenAI, the company behind ChatGPT, recently launched a paid version of the AI chatbot called ChatGPT Plus. But it’s still early days yet, and perhaps when something more concrete (and updated) is available I will add it to my arsenal.
I am sure that generative AI will help recruiters like me in the future, but I am even more certain that it will never replace my chosen profession. Recruitment will always need a human touch.
Agnes Yee is Executive Director of Kerry Consulting.