Subjective Match on HackerEarth Assessments: Make Technical Screening Smarter
In tech or coding assessments, subjective questions are open-ended questions that require the candidate to provide a more detailed or nuanced response than a simple yes or no answer. These questions are often used to assess the candidate’s understanding of a particular concept, their ability to think critically, and their problem-solving skills.
Let’s be honest — subjective questions are an integral part of the technical screening process, but they are really hard to evaluate. There is no standardized format or set of guidelines for subjective questions in tech or coding assessments. This can make it difficult for recruiters to compare responses across different candidates and assessments.
Evaluating subjective questions requires a significant amount of effort. Recruiters need to carefully read and analyze each response, which can be time-intensive, especially when they have to evaluate a large number of candidates.
Delays in evaluation create a domino effect, pushing back every subsequent step and throwing the time-to-hire metric into a tizzy! Candidates don’t get timely updates about their interview status, which also hurts the candidate experience your recruiting team is trying to maintain.
The good news is that you can avoid this chaos, thanks to HackerEarth’s newly introduced Subjective Match feature.
Enter: Subjective Match, a smarter evaluation method for assessments
There are three methods you can use to evaluate subjective questions:
Method #1: AI evaluation
Our AI evaluation method (earlier known as the auto-evaluation method) uses ChatGPT and HackerEarth’s proprietary AI models to evaluate a candidate’s answers automatically. The prerequisite is that recruiting teams need to provide a base answer before sending the tests to candidates. HackerEarth’s AI will compare this base answer to the candidate’s submission and evaluate its accuracy.
There is also an option to compare the expected answer with the candidate’s submission. To do this, simply enable the View Difference option.
Here’s an example of how our AI evaluates the differences between the expected answer for a question and the candidate’s version.
The screengrab above highlights in red the sentences from the expected answer that the candidate did not include in their response.
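If you’re curious what a comparison like this boils down to conceptually, here is a minimal Python sketch. It is not HackerEarth’s actual implementation, just an illustration of the idea: split both answers into sentences and flag the expected-answer sentences that never appear in the candidate’s submission (the ones marked in red above). The sample answers are made up for the example.

```python
import re

def missing_sentences(expected: str, candidate: str) -> list[str]:
    """Return the sentences from the expected answer that do not appear
    in the candidate's submission (a rough stand-in for the red
    highlights in the View Difference screengrab)."""
    # Naive sentence split on ., ! or ? followed by whitespace.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", expected) if s.strip()]
    candidate_lower = candidate.lower()
    return [s for s in sentences if s.lower() not in candidate_lower]

# Hypothetical answers, purely for illustration.
expected = "A REST API is stateless. Each request carries all the context the server needs."
candidate = "A REST API is stateless."
print(missing_sentences(expected, candidate))
# ['Each request carries all the context the server needs.']
```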
This evaluation method is best suited for long, text-based answers, and we recommend that you do not use it for numerical strings.
Also read: 4 Ways HackerEarth Flags the Use of ChatGPT in Hiring Assessments
Method #2: Keyword evaluation
The keyword evaluation method lets admins define the specific keywords that should appear in the answer. If the candidate’s submission includes the exact keyword, they’ll be scored accordingly.
Things you need to know while using the keyword evaluation method:
- Each keyword can be a maximum of 30 characters long.
- At least one keyword must be provided for the evaluation to run.
- You can add a maximum of 15 keywords.
- At least one keyword’s score must be equal to the maximum score of the question.
Here’s how the keyword score is allocated (a short code sketch of this logic follows the example below):
- Organize the keyword options in descending order based on their scores.
- Using AI, verify whether the keyword is present in the candidate’s response at least once.
- If the keyword is found, allocate its associated score as the question’s score.
- If the keyword is not found, repeat these steps for the next highest-scoring keyword that the admin has set up.
Note: The verification done here is case insensitive.
This evaluation method is especially useful for questions related to data analytics (MS Excel), mathematical and numerical problems, or fill-in-the-blank questions.
You can use this for roles like data scientist, financial analyst, market analyst, and business analyst, where there can be multiple valid outcomes, each with a different impact.
For example, while working on a data set, candidates may arrive at different conclusions, and you can assign a different score to each one.
In the image below, for instance, if the output is 14, the candidate gets a 100% score. If the output is 9, which is close but not exact, the candidate gets an 80% score. For any output other than the ones listed, the candidate gets a score of zero.
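To make the fallback logic above concrete, here is a minimal Python sketch of keyword scoring under these rules. It only illustrates the ordering described in the criteria, not HackerEarth’s actual matching (which is AI-assisted); the keywords, scores, and answers are assumptions based on the example above, with the question worth 10 points.

```python
def keyword_score(answer: str, keyword_scores: dict[str, float]) -> float:
    """Check keywords from highest to lowest score and return the score
    of the first keyword found in the answer (case-insensitive).
    Returns 0 if none of the keywords appear."""
    answer_lower = answer.lower()
    for keyword, score in sorted(keyword_scores.items(), key=lambda kv: kv[1], reverse=True):
        if keyword.lower() in answer_lower:
            return score
    return 0.0

# Question worth 10 points: "14" earns the full score, "9" earns 80%.
scores = {"14": 10.0, "9": 8.0}
print(keyword_score("The pivot table total comes out to 14", scores))  # 10.0 (100%)
print(keyword_score("I calculated the total as 9", scores))            # 8.0  (80%)
print(keyword_score("The answer is 12", scores))                       # 0.0
```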
Method #3: Manual evaluation
If you’d rather skip the AI and use your personal judgment to evaluate candidate submissions, we have made that option available to you as well! You can manually compare the candidate’s submission against the base answer you added while setting up the assessment.
Note: The base answer will also be present in the candidate’s report to make the comparison easier.
Witness a smoother evaluation experience with Subjective Match
For recruiters and hiring managers, our Subjective Match feature will change the way you evaluate candidate submissions.
Not only will it make the screening process seamless, but it will also cut the time and effort spent manually checking every submission. And if you’ve only tried our AI method so far, we recommend exploring the keyword evaluation method too and seeing the difference for yourself.
Until next time, happy hiring!