At HackerEarth, we take pride in building robust proctoring features for our tech assessments.
The tech teams we work with want to hire candidates with the right skills for the job, and it helps no one if candidates can ace tests by plagiarizing answers. HackerEarth Assessments has always offered robust proctoring settings to ensure that our assessments help users find the right skill match every single time. To build on that, last year we launched Smart Browser, our anti-ChatGPT feature.
In case you missed the launch announcement: Smart Browser is a feature that requires candidates to attempt a test inside a dedicated HackerEarth desktop application, with stricter proctoring than our browser-based test environment provides. Smart Browser prevents a range of candidate actions that could compromise test integrity.
A year after the launch of this feature, we wanted to understand its impact on the take-home assessments sent to candidates. We decided to compare solvability between assessments proctored with Smart Browser and those without.
One way to check a test's integrity is to look at its solvability. If a coding test's solvability is too high, candidates find it easy to crack and almost anyone can pass. Creating the perfect coding assessment means finding the right solvability, neither too high nor too low. According to expert estimates, a solvability of 10-20% is considered ideal, though this can vary with the difficulty level chosen by the recruiting team and the number of candidates taking the test.
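As a rough illustration, and not HackerEarth's internal formula, solvability can be read as the share of attempting candidates who fully solved a question. The Python function below is a minimal sketch; its name and inputs are hypothetical:

    def solvability(num_solved: int, num_attempted: int) -> float:
        """Percentage of attempting candidates who solved the question.

        Illustrative only; the production metric may weight partial
        scores, test cases, or question difficulty differently.
        """
        if num_attempted == 0:
            return 0.0
        return 100.0 * num_solved / num_attempted

    # Example: 14 of 90 candidates solve a question -> ~15.6%,
    # inside the 10-20% band described as ideal above.
    print(f"{solvability(14, 90):.1f}%")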
Now, Smart Browser helps users set up a high-security proctoring environment, which makes it difficult for candidates to use any unfair practices while taking assessments, so that only genuinely skilled candidates solve the questions.
This brings us to the following observations:
Some of our users chose not to enable the Smart Browser feature for their assessments, leaving candidates free to use an LLM to answer questions. We found that solvability varies noticeably by question type in this scenario. The table below shows the solvability of different question types in assessments without Smart Browser.
This is still a difficult assessment for candidates to solve, thanks to HackerEarth's rich question library. But without Smart Browser, there is still a chance of candidates using unfair practices or ChatGPT for plagiarism, which is unfair to those who attempt the test honestly.
After enabling the Smart Browser feature on these same assessments, we found that solvability decreased significantly, both for individual question types and on average. The table below shows the solvability of different question types after implementing Smart Browser.
This clearly demonstrates that enabling the Smart Browser feature for assessments decreases solvability and gives you, as a recruiter, a far more genuine and serious pool of candidates: those who solved the assessment without any external help.
The table below shows the decrease in solvability when Smart Browser is used, compared to assessments where it is not.
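A quick note on reading such a decrease: a drop in solvability can be expressed in percentage points or as a relative change, and the two are easy to confuse. The sketch below uses made-up numbers, not the figures from our tables:

    def solvability_drop(without_sb: float, with_sb: float) -> tuple[float, float]:
        """Return (percentage-point drop, relative % drop) between two
        solvability figures, both given as percentages. Illustrative only."""
        point_drop = without_sb - with_sb
        relative_drop = 100.0 * point_drop / without_sb if without_sb else 0.0
        return point_drop, relative_drop

    # Hypothetical: 28% solvability without Smart Browser, 16% with it.
    points, relative = solvability_drop(28.0, 16.0)
    print(f"Drop: {points:.0f} percentage points (~{relative:.0f}% relative)")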
LLMs like ChatGPT are making it easier for candidates to write code for take-home tech assignments. While most LLMs can currently handle basic coding tasks, they are getting better at building complex code. This raises the question: could AI eventually solve any coding challenge?
Tech recruiting teams have two options here:
Forbid the use of AI in coding tests completely: This is ideal for large-scale hiring where efficiency is key. HackerEarth can detect ChatGPT use and eliminate candidates who rely on it. This leaves only those who completed the test independently.
Embrace AI in coding tests: This is better for hiring a small number of highly technical roles. Many experienced developers use ChatGPT to write or analyze complex code. Allowing such candidates to use AI during tests broadens the scope of skill assessment. Think of it like writers using spell checkers. We don’t penalize them for using AI tools. We judge them on research, analytical skills, and creativity – qualities AI can’t replicate. Similarly, there are instances where we can accept AI use in coding tests for specific roles.
The data above clearly shows that the solvability of coding questions decreases significantly, and their effective difficulty rises, when HackerEarth's Smart Browser is used for proctoring. Tech recruiters may want to employ this feature in assessments where the primary objective is to evaluate a candidate's core programming skills, such as syntax familiarity, problem-solving ability without external assistance, and code efficiency.
Similarly, they may want to allow the use of LLMs in scenarios where the primary focus is on assessing problem-solving skills, creativity, and the ability to communicate effectively about code.
We leave the final decision on using Smart Browser to you, but we recommend considering it to attract a pool of genuine candidates who can clear assessments without external help, and to make your company's assessment process more transparent and reliable.
Head over here to check the Smart Browser out for yourself! Or write to us at support@hackerearth.com to know more.