Cheating and AI FAQ
The introduction and increasing use of AI have made the topic of cheating more relevant than ever. Questions have been raised regarding the validity and utility of psychometric assessments in a world where candidates' proficiency with AI tools increases every day. These questions also come up frequently in meetings with clients and prospects, and this FAQ presents Assessio's position and answers regarding cheating with AI in our assessments.*
*As developments in AI are rapid and difficult to predict, bear in mind that some of these answers may be outdated by the time you read this document. Creation date: January 23rd, 2026.
Can candidates use AI to "tailor" their MAP results to a role and ensure a high match score?
We have run a series of tests to see if this is possible, but so far the use of AI has yielded poor results. In theory it could be possible, but in practice it is quite hard to do. While an AI can be instructed to answer in a way that is indicative of certain personality traits, engineering an ideal profile for a role is a different question entirely. To successfully engineer a specific match score, an AI would need access to a lot of information, including, but not limited to: norm groups, scoring logic, item loadings, details about our curvilinear competency scoring, and the weights for every competency. Some of this information is available, but much of it is not. In practical terms, the probability of this happening is low. Overall, the use of AI in MAP is not something we are too concerned about for now.
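To illustrate why this is hard, consider the following toy calculation. All numbers, weights, and the curvilinear penalty below are invented for illustration only; they are not Assessio's actual norms, weights, or scoring logic.

```python
# Toy illustration: why engineering a specific match score requires
# internal scoring details. All values here are invented examples,
# NOT Assessio's actual norms, weights, or scoring logic.

def toy_match_score(raw_scores, norms, weights, ideal):
    """Compute a hypothetical role-match score in [0, 100]."""
    total, weight_sum = 0.0, 0.0
    for comp, raw in raw_scores.items():
        mean, sd = norms[comp]          # hidden norm-group parameters
        z = (raw - mean) / sd           # standardize against the norm group
        # Hypothetical curvilinear scoring: distance from the role's
        # ideal level is penalized, so "maximum" is not always "best".
        fit = max(0.0, 1.0 - abs(z - ideal[comp]) / 3.0)
        total += weights[comp] * fit    # hidden per-competency weight
        weight_sum += weights[comp]
    return 100.0 * total / weight_sum

# Without knowing the norms, weights, and ideal levels, an AI answering
# "as high as possible" can land far from the intended profile:
norms = {"drive": (25.0, 5.0), "stability": (30.0, 6.0)}
weights = {"drive": 2.0, "stability": 1.0}
ideal = {"drive": 0.5, "stability": -0.5}      # moderate levels are ideal here

maxed_out = {"drive": 40.0, "stability": 48.0}  # "tailored" extreme answers
moderate = {"drive": 28.0, "stability": 27.0}   # sincere, moderate answers
print(toy_match_score(maxed_out, norms, weights, ideal))  # low match
print(toy_match_score(moderate, norms, weights, ideal))   # high match
```

In this invented example, maximizing every answer produces a far lower match score than answering moderately, because the hypothetical scoring penalizes distance from a hidden ideal level rather than rewarding extremes.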
Can AI be used to cheat on a cognitive ability assessment?
Yes and no. This largely depends on the cognitive ability assessment in question. While AI has proved quite successful at providing correct answers to verbal and numerical items, it has been less so for abstract, figure-based items. Text and numbers are easier for commercially available AI models to process, and it is often easier for candidates to copy and paste these kinds of items into an AI. Non-verbal, non-numerical figures (like matrices) are harder for an AI to process, and the practical actions required to cheat are more difficult for candidates to carry out. This means two things: the AI will have a higher margin of error, and candidates would struggle to succeed within a strict time limit. For Assessio's psychometric assessments, this means that parts of Logics are susceptible to AI cheating, but Matrigma is not. This, in part, forms the basis of our recommendation to use Logics primarily for late-stage testing, where the incentive to cheat is lower. Matrigma is the optimal choice for early testing, given its resilience to cheating, but can also be used in later stages if the insights align with the needs of the client.
How useful is AI in cheating on a values assessment?
This is perhaps the type of assessment that merits the least concern, mainly for two reasons. First, we don't recommend values assessments as a strict selection criterion, but rather as a complementary insight alongside measures of potential (personality, cognitive ability).
Secondly, there really are no right or wrong answers when it comes to values, so the incentives to cheat are lower than for any other psychometric assessment. Taken together, using AI to complete a values assessment simply doesn't make much sense for candidates, and it is not something organizations should be concerned about.
Does Assessio have any indication of how common the use of AI in assessment completion is, and what measures are you taking to prevent it?
Yes, this is something we have looked into from multiple angles. We have run a number of analyses comparing human and AI test completion – primarily for cognitive ability assessments, where the incentives to cheat are typically higher. From this data, we have identified a series of patterns that differentiate how people respond from how AI responds.
While this is complicated, one important finding is that humans usually need more time to respond to complex items, whereas an AI does not. This, in conjunction with other score anomalies, can highlight artificial response styles that are extremely rare or implausible to expect from a human respondent. For now, we use this to monitor general tendencies in response styles on the platform, and from what we can conclude so far, this does not seem to be a common occurrence. There seems to be little reason for concern right now, but we are keeping tabs on the situation and will take action when we deem it necessary.

Another way we monitor the situation is through the distribution of scores. If attempts to cheat using AI had increased significantly, and if those attempts were successful, we would expect to see significant changes in the distribution of scores. For GMA assessments, this would show up as an increase in high scores and in the general mean compared to pre-2022 data. For now, this does not appear to be the case, but we are monitoring it closely.
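The two monitoring ideas described above – implausibly fast correct answers and shifts in the score distribution – can be sketched in simplified form. The thresholds, data, and function names below are invented for illustration; this is not Assessio's actual detection pipeline.

```python
# Toy sketch of two monitoring ideas: response-time anomalies and
# score-distribution shifts. All thresholds and data are invented;
# this is NOT Assessio's actual detection pipeline.
from statistics import mean

def flag_implausible_sessions(sessions, min_seconds_per_hard_item=4.0):
    """Flag sessions where hard items are answered correctly but
    faster than a plausible human floor."""
    flagged = []
    for session in sessions:
        fast_correct = sum(
            1 for item in session["items"]
            if item["difficulty"] == "hard"
            and item["correct"]
            and item["seconds"] < min_seconds_per_hard_item
        )
        if fast_correct >= 3:  # several anomalies, not one lucky guess
            flagged.append(session["id"])
    return flagged

def mean_shift(baseline_scores, current_scores):
    """Difference in mean score versus a pre-AI baseline; a large
    positive shift could indicate widespread successful cheating."""
    return mean(current_scores) - mean(baseline_scores)

# Invented example data:
sessions = [
    {"id": "A", "items": [
        {"difficulty": "hard", "correct": True, "seconds": 1.2},
        {"difficulty": "hard", "correct": True, "seconds": 1.5},
        {"difficulty": "hard", "correct": True, "seconds": 0.9},
    ]},
    {"id": "B", "items": [
        {"difficulty": "hard", "correct": True, "seconds": 38.0},
        {"difficulty": "hard", "correct": False, "seconds": 51.0},
        {"difficulty": "hard", "correct": True, "seconds": 44.0},
    ]},
]
print(flag_implausible_sessions(sessions))    # session A looks artificial
print(mean_shift([20, 22, 24, 21], [21, 23, 22, 22]))  # small shift
```

In this sketch, session A is flagged because it answers several hard items correctly in under a second or two, while session B shows a plausible human pace, and the near-zero mean shift mirrors the reassuring distribution data described above.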
What can we (the company) do to prevent cheating?
If cheating (with or without AI) is to be prevented completely, the most certain way is to conduct on-site testing. This might be needed in some cases. However, it puts strain on both the company and the candidate, and lighter measures can also help, such as communicating to candidates the importance and benefits of answering sincerely. When a company asks candidates to take various tests, it's not about seeing who can perform best on random parameters. Candidates are evaluated against criteria that indicate whether they can meet the demands of, and will thrive in, a specific role. In other words, it's not just about asking "How high can you jump?" but rather assessing whether you can jump as high as the position requires, and whether this is a good match for both the company and the candidate.
We know that the incentives to cheat are higher early in a recruitment process and when candidates don't think they can meet the demands. This is why it can be beneficial to use more "cheating-proof" assessments early in a selection process (e.g. figure-based assessments like Matrigma).
How can you tell if someone cheated on an assessment?
As mentioned, some response styles are extremely rare or implausible to expect from a human respondent and could indicate cheating. However, it is important to know that you can never be sure someone cheated just from looking at patterns or (high) scores. Our assessments are developed so that all scores are obtainable, and even though some scores or combinations are rare, they still occur. When in doubt about the authenticity of scores, we recommend verifying the information through interviews, case exercises, reference checks, or the like.