Talent Assessment

How HackerEarth’s Smart Browser Has Increased the Integrity of Assessments in the Age of AI

At HackerEarth, we take pride in building robust proctoring features for our tech assessments.

The tech teams we work with want to hire candidates with the right skills for the job, and it helps no one if candidates can ace tests by plagiarizing answers. HackerEarth Assessments has always offered robust proctoring settings to ensure that our assessments help users find the right skill match every single time. To build on that, we launched our anti-ChatGPT feature, Smart Browser, last year.

In case you missed the launch announcement, Smart Browser is a feature that allows candidates to attempt a test only inside a HackerEarth desktop application, which enforces stricter proctoring than our browser test environment. Smart Browser prevents the following candidate actions (a rough sketch of how such restrictions can be enforced in a desktop shell follows the list):

  • Screen sharing the test window
  • Keeping other applications open during the test
  • Resizing the test window
  • Using multiple monitors during the test
  • Taking screenshots of the test window
  • Recording the test window
  • Using restricted keystrokes, including:
    • All function keys and key combinations such as:
      • F1, F5 + Alt, etc.
      • Alt + Tab
      • Ctrl + Alt + Delete
      • Ctrl + V
      • Ctrl + C
    • OS super keys and combos involving them, e.g., the Windows key, the Mac Command key, Windows key + C
  • Viewing OS notifications
  • Running the test window within a Virtual Machine
  • Using browser developer tools
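
For the curious, here is a minimal, purely illustrative sketch of how a locked-down desktop test shell could enforce a few of these restrictions. It assumes a hypothetical Electron-based app and is not HackerEarth’s actual implementation; the test URL and the exact shortcut list are placeholders.

    import { app, BrowserWindow, globalShortcut, screen } from 'electron';

    app.whenReady().then(() => {
      // Refuse to start the test if more than one display is attached.
      if (screen.getAllDisplays().length > 1) {
        console.error('Multiple monitors detected. Disconnect extra displays to continue.');
        app.quit();
        return;
      }

      const win = new BrowserWindow({
        fullscreen: true, // take over the whole screen
        resizable: false, // block resizing of the test window
        kiosk: true,      // kiosk mode hides window chrome and keeps focus on the test
      });

      // Exclude the window contents from screenshots and screen recording
      // (supported on Windows and macOS).
      win.setContentProtection(true);

      // Swallow common copy/paste and help shortcuts for the duration of the test.
      // OS-reserved combos such as Ctrl + Alt + Delete cannot be intercepted from user space.
      for (const combo of ['CommandOrControl+C', 'CommandOrControl+V', 'F1', 'F5']) {
        globalShortcut.register(combo, () => {
          // Intentionally do nothing: the keystroke never reaches another application.
        });
      }

      win.loadURL('https://example.com/test'); // placeholder URL for the hosted test

      app.on('will-quit', () => globalShortcut.unregisterAll());
    });

Checks such as screen-share, virtual-machine, and developer-tools detection require platform-specific probing that goes beyond a short sketch like this.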

A year after launch, we wanted to understand the impact of Smart Browser on the take-home assignments sent to candidates. We decided to look at the difference in solvability between assessments where Smart Browser was used for proctoring and those where it was not.

What the data from Smart Browser shows us

One way to check a test’s integrity is to look at its solvability. If a coding test scores too high on solvability, candidates find it easy to crack and almost anyone can pass the assessment. Creating the right coding assessment means finding the right level of solvability, neither too high nor too low. According to expert estimates, a solvability of 10-20% is ideal, though this can vary with the difficulty level chosen by the recruiting team and the number of candidates taking the test.
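
As a concrete illustration of the metric, here is a small sketch that assumes solvability is simply the share of candidates who fully solve a question out of everyone who attempted it (our working definition for the example, not an official HackerEarth formula):

    // Working assumption: solvability = solved / attempted, expressed as a percentage.
    function solvabilityPercent(solved: number, attempted: number): number {
      return attempted === 0 ? 0 : (solved / attempted) * 100;
    }

    // Example: 14 of 100 candidates solve a question -> 14%, inside the 10-20%
    // band treated above as a healthy target for a well-calibrated assessment.
    console.log(solvabilityPercent(14, 100)); // 14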

Smart Browser lets users set up a strictly proctored environment that makes it difficult for candidates to use unfair means while taking an assessment, so only genuinely skilled candidates end up solving the questions.

This brings us to the following observations:

Scenario A:

Some of our users chose not to implement the Smart Browser feature while conducting their assessments, leaving candidates free to use an LLM to answer questions. Solvability varies by question type in this scenario. The table below shows the solvability of different question types in assessments conducted without Smart Browser.

Solvability of tech assessments without the use of Smart Browser

Thanks to HackerEarth’s rich question library, this is still a difficult assessment for candidates to solve. But without Smart Browser, there remains a chance that candidates use unfair means or ChatGPT to plagiarize answers, which makes the process unfair for candidates who are genuine in their attempts.

Scenario B:

After implementing the Smart Browser feature on these same assessments, we found that the solvability of each question type, as well as the average solvability, decreased significantly. The table below shows the solvability of different question types after implementing Smart Browser.

Solvability of tech assessments with the use of Smart Browser

This clearly demonstrates that implementing the Smart Browser feature helps decrease solvability and gives you, as a recruiter, a much more genuine and serious pool of candidates who were able to solve the assessment without any external help.

The table below shows the decrease in solvability when Smart Browser is used, compared to assessments where it is not implemented.

Overall decrease in the solvability of tech assessments when Smart Browser is used

Should you implement the Smart Browser for your next assessment?

LLMs like ChatGPT are making it easier for candidates to write code for take-home tech assignments. While most LLMs can currently handle basic coding tasks, they are getting better at building complex code. This raises the question: could AI eventually solve any coding challenge?

Tech recruiting teams have two options here:

Forbid the use of AI in coding tests completely: This is ideal for large-scale hiring where efficiency is key. HackerEarth can detect ChatGPT use and eliminate candidates who rely on it. This leaves only those who completed the test independently.

Embrace AI in coding tests: This is better when hiring for a small number of highly technical roles. Many experienced developers use ChatGPT to write or analyze complex code. Allowing such candidates to use AI during tests broadens the scope of the skill assessment. Think of it like writers using spell checkers: we don’t penalize them for using AI tools; we judge them on research, analytical skills, and creativity – qualities AI can’t replicate. Similarly, there are instances where AI use in coding tests is acceptable for specific roles.

The data above clearly shows that when HackerEarth’s Smart Browser is used for proctoring, coding questions become effectively more difficult and their solvability drops significantly. Tech recruiters may want to employ this feature in assessments where the primary objective is to evaluate a candidate’s core programming skills, such as syntax familiarity, problem-solving ability without external assistance, and code efficiency.

Similarly, they may want to allow the use of LLMs in scenarios where the primary focus is on assessing problem-solving skills, creativity, and the ability to communicate effectively about code.

We leave the final decision on using Smart Browser up to you, but we recommend considering it to attract a pool of genuine candidates who can clear assessments without external help and to make your company’s assessment process more transparent and reliable.

Head over here to check out Smart Browser for yourself! Or write to us at support@hackerearth.com to learn more.

 

Shivam Gupta
