Competency-based interviewing
Unstructured interviews invite bias and “first-impression” errors. Structured, competency-based interviews focus on past, observable behavior and are consistently among the best predictors of future behavior when tied to a clear requirement profile. This is a step-by-step guide to conducting a competency-based interview.
Why CBI (and what “good” looks like)
Goal: predict future behavior by collecting verifiable evidence of past behavior against a defined requirement profile.
What “good” looks like:
- Questions map 1:1 to lens competencies and are asked consistently across candidates.
- Interviewers probe for S-B-R (Situation → Behavior → Result) until the action and impact are clear.
- Ratings are behavior-anchored, not “vibes.”
- Panel decisions are mechanically combined with other insights (assessments, work samples) using a rubric.
Payoff: higher predictive validity, cleaner notes, less bias, easier calibration.
Design the interview from the lens (before you meet anyone)
- Lock the profile: choose the active lens (and weights) that reflect the real job context.
- Select 4–6 competencies to interview (role-critical + one potential risk area).
- Draft your guide: 2 core questions (+1 reserve) per competency. Keep wording behavior-neutral (no leading). See the sketch after this list.
- Assign roles:
  - Lead interviewer drives flow and timekeeping.
  - Observer/scribe captures S-B-R facts and preliminary ratings.
- Decide which probes you’ll reuse (see below).
- Define anchors: what does a 1 / 2 / 3 / 4 look like for each competency in this context? (Use the job’s success examples.)
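A minimal sketch of what that guide could look like as structured data, so wording and order stay identical across candidates. The competencies and questions below are illustrative, not taken from any particular lens:

```python
# Hypothetical interview guide kept as plain data so every candidate gets
# the same questions in the same order. Competency names and questions
# are examples only, not drawn from a specific lens.
GUIDE = {
    "Problem Solving / Strategic": {
        "core": [
            "Tell me about a complex problem you framed differently than others.",
            "Describe a decision you made with incomplete information.",
        ],
        "reserve": "Tell me about a time your analysis changed a plan.",
    },
    "Collaboration": {
        "core": [
            "Tell me about a time you resolved a conflict with a peer.",
            "Describe how you brought a skeptical stakeholder on board.",
        ],
        "reserve": "Give me an example of feedback you acted on.",
    },
}

# Print the guide in interview order.
for competency, block in GUIDE.items():
    print(competency)
    for question in block["core"]:
        print("  core:   ", question)
    print("  reserve:", block["reserve"])
```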
Run the interview (structure & timing)
60-minute template
- 0–3 min: Welcome, agenda, time check.
- 3–8 min: Role context from the hiring manager; confirm scope & constraints.
- 8–45 min: CBI block (4–6 competencies; ~6–8 min each).
- 45–55 min: Candidate questions (watch for situational awareness).
- 55–60 min: Close, decision timeline.
Ground rules you say out loud
- “We’ll focus on specific examples. If I interrupt, it’s to keep us on time or clarify what you did.”
- “I’ll ask for another example on some questions to see repeatability.”
Ask great CBI questions (and make them harder to “game”)
- Start broad: “Tell me about a time you [competency behavior]…”
- Then pin it down: “When was this? Who was involved? What was your part?”
- Force specifics: dates, names, artifacts (deck, PRD, incident ticket), numbers (lead time, revenue, NPS).
- Ask for a second, different example if the first one is borderline or old.
Example (Problem Solving / Strategic)
- “Tell me about a complex problem you framed differently than others and what changed as a result.”
- Probes: “What options did you consider, and why were they rejected?” “How did you measure success?”
Probes you can reuse (copy/paste card)
- “What made this hard?”
- “Walk me through the sequence: first, then, finally.”
- “What did you do (not the team)?”
- “What would you do differently?”
- “What evidence shows it worked (or didn’t)?”
- “Give me another example in a different context.”
Note-taking that stands up in calibration (and legally)
- Write facts, not adjectives: “Created 90-day plan; weekly cadence; reduced rework from 18% → 7%.”
- Tag each note S / B / R so evidence is reviewable (see the sketch after this list).
- Keep out protected-class or irrelevant personal info (family, health, unrelated hobbies, etc.).
- One line per risk/mitigation if something concerns you (e.g., “very thorough → risk of slow calls; asked how they timebox decisions”).
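A minimal sketch of notes kept as tagged records, assuming a simple home-grown structure; the field names are hypothetical, not an Assessio Platform schema:

```python
from dataclasses import dataclass

@dataclass
class EvidenceNote:
    """One verifiable fact from the interview, tagged for review.

    Field names are illustrative, not a platform schema.
    """
    competency: str  # lens competency the evidence supports
    tag: str         # "S" (situation), "B" (behavior), or "R" (result)
    fact: str        # a fact, not an adjective

notes = [
    EvidenceNote("Operative", "S", "Team rework rate was 18%"),
    EvidenceNote("Operative", "B", "Created 90-day plan; ran weekly cadence"),
    EvidenceNote("Operative", "R", "Rework reduced from 18% to 7%"),
]

# Group evidence by competency for the debrief.
for note in sorted(notes, key=lambda n: n.competency):
    print(f"[{note.competency}] {note.tag}: {note.fact}")
```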
Rating & decision making
Behavior-anchored 1–4 (example)
- 1 – Insufficient: vague/generic; no ownership.
- 2 – Basic: one example; limited scope or unclear impact.
- 3 – Strong: multiple solid examples; clear personal impact.
- 4 – Exceptional: repeated in high-stakes contexts; measurable outcomes; teaches/influences others.
Combine scores mechanically:
- Weight per the lens (e.g., Strategic 30%, Operative 20%, etc.); a worked sketch follows this list.
- Add structured evidence from assessments/work samples; avoid “late-stage gut feel.”
- Decide “hire / hold / no hire” with a short justification referencing competencies + evidence.
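A minimal sketch of the mechanical combination, reusing the example weights above; the remaining competency names, weights, and ratings are made up for illustration:

```python
# Hypothetical lens weights (the 30%/20% figures echo the example above;
# the other competencies and weights are invented for illustration).
WEIGHTS = {"Strategic": 0.30, "Operative": 0.20, "Collaboration": 0.25, "Drive": 0.25}

# Panel ratings on the behavior-anchored 1-4 scale.
RATINGS = {"Strategic": 3, "Operative": 4, "Collaboration": 3, "Drive": 2}

def combined_score(weights: dict[str, float], ratings: dict[str, int]) -> float:
    """Weighted average of competency ratings, per the lens weights."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1.0")
    return sum(weights[c] * ratings[c] for c in weights)

print(f"Combined interview score: {combined_score(WEIGHTS, RATINGS):.2f} / 4")
# -> Combined interview score: 2.95 / 4
```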
Bias controls (practical, not preachy)
- Same guide for everyone.
- Silence the halo: rate each competency independently.
- Debrief in order (competency by competency), not “overall impressions.”
- Blind anchors: agree on what a “3” looks like before seeing candidates.
- Time discipline: equal time per candidate; same number of follow-ups.
- In Analyze View, try a name/score-hidden pass before the full reveal when comparing.
Red flags & how to verify fairly
- Only “we” language → ask, “What did you do specifically?”
- No recent examples → ask for one from the last 12 months.
- Foggy outcome → ask for artifacts or metrics, not opinions.
- Story inflation → request a second example; triangulate with references/work samples.
Remote interviewing specifics
- Send the agenda & guide topics in advance; clarify time zones.
- Ask for screen-share artifacts (roadmap, ticket, deck) when appropriate.
- Ensure a bandwidth plan B (phone dial-in).
- Still keep the 80/20 talk ratio: mute yourself when not probing; use short, crisp prompts.
Candidate experience (CX) that helps your brand
- Open with “what to expect” and decision timing.
- Keep clear transitions between topics; it reduces anxiety and improves evidence quality.
- Offer brief role clarity and constraints (budget, team size) so examples can be relevant.
- Close with next steps and who to contact.
Calibrate your panel (30 minutes well spent)
- Before the loop: review the lens, agree on anchors, and rehearse 1–2 sample ratings.
- After the loop: debrief by competency, not by candidate. Capture rationale in notes (use Report Studio “Notes” if you want everything in one place).
Fit with Assessio Platform (where this plugs in)
- Before interviews: pick/confirm the lens; use Candidate Summary to spot strengths/risks to probe.
- During/after: record evidence in Report Studio → Notes; if you want a stakeholder view, generate a Match or Match with Extremes template and add your edit.
- Comparisons: use Analyze View; start with insights only (hide names/scores), then reveal.
- Onboarding handover: convert key risks into Directional goal → Actions → Checks.
One-page checklist (you can paste into your wiki)
- Active lens & weights confirmed
- 4–6 competencies selected; 2 (+1 reserve) questions each
- Probes list ready (S-B-R, evidence, “another example”)
- Anchors for 1–4 agreed
- Roles assigned (lead / scribe)
- Agenda sent; bias controls set (same guide/order/time)
- Notes captured as S-B-R facts
- Scores combined mechanically with other signals
- Decision & rationale recorded; onboarding risks → goals