Assessments · 2026-04-20 · 4 min read

How to Run Competency Assessments That Produce Data You Can Trust

Josh Friedman

The assessment is where competency frameworks meet reality. It's the moment you stop talking about what skills people should have and start measuring what they actually do have. And it's the step most organizations either skip or execute so poorly that the data is useless.

A competency assessment measures an individual's demonstrated capability in specific skills against a defined proficiency scale. Not their potential. Not their training history. What they can do right now, at what level, validated through structured evaluation.

When done well, assessments produce the most valuable data in your entire HR stack. When done poorly, they produce noise that erodes trust and wastes everyone's time.

Why Most Assessments Fail

Self-assessment alone is unreliable

People are bad at rating their own skills. The Dunning-Kruger effect is real: the least competent tend to overrate themselves, while the most competent tend to underrate. Kruger and Dunning's study, published in the Journal of Personality and Social Psychology, found that self-assessments correlated poorly with objective performance measures, with the bottom quartile overestimating their ability by roughly 50 percentile points.

Annual cadence is too slow

Running assessments once a year means working with stale data for 11 months. Skills develop. Roles change. People join and leave. By the time you act on annual assessment data, it's describing a workforce that no longer exists.

Disconnected from everything

The assessment happens. Data gets collected. Then nothing changes. It doesn't feed into development plans, career conversations, or workforce decisions. When people see that assessments don't lead to action, they stop taking them seriously — and data quality collapses.

The Dual Assessment Model

The most reliable method: combine self-assessment with manager assessment. Both parties rate the employee against the same competency framework, using the same proficiency scale.

Why both?

  • Self-assessment captures skills managers don't directly observe — technical depth, problem-solving approaches, collaboration with other teams
  • Manager assessment provides an external calibration check — correcting for blind spots and overconfidence
  • The gap between self and manager ratings is itself valuable data — large discrepancies indicate either a calibration problem or a visibility issue

How it works in practice:

  1. Employee rates themselves on 8-12 competencies using a 5-level proficiency scale
  2. Manager independently rates the same employee on the same competencies
  3. System highlights discrepancies (where ratings differ by 2+ levels)
  4. Manager and employee discuss discrepancies in a brief calibration conversation
  5. Final ratings are recorded and feed into gap analysis
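The discrepancy check in step 3 is easy to automate. Here is a minimal sketch, assuming ratings on a 1-5 scale stored as competency-to-level mappings; the competency names, data shapes, and 2-level threshold are illustrative, not a SkillsDB API:

```python
# Illustrative sketch: flag competencies where self and manager
# ratings differ by 2 or more levels on a 1-5 proficiency scale.

DISCREPANCY_THRESHOLD = 2  # assumption: matches the "2+ levels" rule above

def flag_discrepancies(self_ratings: dict, manager_ratings: dict,
                       threshold: int = DISCREPANCY_THRESHOLD) -> list:
    """Return (competency, self, manager) tuples that need a calibration talk."""
    flags = []
    for competency, self_score in self_ratings.items():
        manager_score = manager_ratings.get(competency)
        if manager_score is None:
            continue  # manager didn't rate this competency; skip it
        if abs(self_score - manager_score) >= threshold:
            flags.append((competency, self_score, manager_score))
    return flags

self_ratings = {"Python": 4, "SQL": 3, "Stakeholder comms": 5}
manager_ratings = {"Python": 4, "SQL": 1, "Stakeholder comms": 3}
print(flag_discrepancies(self_ratings, manager_ratings))
# [('SQL', 3, 1), ('Stakeholder comms', 5, 3)]
```

Both flagged rows go to the step-4 conversation; the unflagged ones can be recorded as-is.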

The entire process takes about 15 minutes per person for self-assessment and 10-15 minutes per direct report for managers. For a team of 8, that's roughly 2 hours of total manager time per quarter, a fraction of what most organizations spend on annual reviews that produce less useful data.

Running the Assessment: Step by Step

Step 1: Prepare the framework

Before anyone rates anything, ensure the competency framework is clear:

  • Each competency has a definition that everyone interprets the same way
  • Each proficiency level has behavioral indicators specific to the role family
  • Managers have been calibrated on what each level looks like

Step 2: Communicate the purpose

Assessments fail when people think they're being judged rather than developed. Frame the assessment as a development tool:

  • "This measures where you are so we can build a targeted development plan"
  • "There's no pass or fail — every rating is a data point for growth"
  • "Your self-assessment matters as much as your manager's"

Step 3: Run dual assessments

Give both parties a defined window (1-2 weeks). Don't let it drag — urgency produces higher quality responses. Late assessments tend to be hasty.

Step 4: Review discrepancies

Flag any competency where self and manager ratings differ by 2 or more levels. These need a conversation. Sometimes the employee undervalues a skill. Sometimes the manager isn't seeing work that happens outside their view. Either way, the conversation is valuable.

Step 5: Generate gap data

With assessment data in, gap analysis becomes automatic: for each person, calculate the distance between their assessed proficiency and their role's requirements. Aggregate by team. Aggregate by organization. The gaps tell you where to invest.
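That calculation can be sketched in a few lines. This assumes final ratings and role requirements are simple competency-to-level mappings on the same 1-5 scale; the names and data shapes are illustrative, not a SkillsDB schema:

```python
# Illustrative gap analysis: distance between assessed proficiency and
# the role's required level, aggregated per person and per team.

def person_gaps(assessed: dict, required: dict) -> dict:
    """Positive values = shortfall vs. the role requirement; 0 = at or above."""
    return {c: max(required[c] - assessed.get(c, 0), 0) for c in required}

def team_gaps(team: dict, required: dict) -> dict:
    """Sum each competency's shortfall across the team to see where to invest."""
    totals = {c: 0 for c in required}
    for assessed in team.values():
        for competency, gap in person_gaps(assessed, required).items():
            totals[competency] += gap
    return totals

required = {"Python": 4, "SQL": 3, "Data modeling": 3}
team = {
    "Ana": {"Python": 4, "SQL": 2, "Data modeling": 1},
    "Ben": {"Python": 3, "SQL": 3, "Data modeling": 2},
}
print(team_gaps(team, required))
# {'Python': 1, 'SQL': 1, 'Data modeling': 3}
```

The same aggregation run over every team gives the org-level view; the largest totals mark the competencies worth investing in first.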

Step 6: Connect to development

Every significant gap should map to a learning plan within two weeks of assessment completion. Assessments that don't lead to action lose credibility fast. When employees see that their assessment data directly informs their development opportunities, engagement with future assessments improves.

Assessment Frequency

Quarterly is the sweet spot for most organizations. It's frequent enough to track real skill development, infrequent enough to avoid assessment fatigue.

Monthly makes sense for fast-moving teams (early-stage startups, rapid-growth departments) or for new hires in their first 6 months.

Annual is too slow. By the time you act on the data, it's describing a different workforce. Reserve annual cadence only as a fallback if quarterly isn't organizationally feasible.

Measuring Assessment Quality

Track these signals to know if your assessments are producing trustworthy data:

  • Completion rate: Target 90%+. Below 80% means the process is too burdensome or people don't see the value.
  • Self-manager correlation: Some gap is expected and healthy. Massive systematic divergence (everyone rates themselves 2 levels higher than their manager) means the scale isn't calibrated.
  • Score movement: If scores never change quarter to quarter, either development isn't happening or people are auto-filling their previous answers. Both are problems.
  • Action rate: What percentage of identified gaps result in a development action within 30 days? If it's below 50%, the assessment-to-action loop is broken.
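Three of these signals reduce to simple ratios and averages. A minimal sketch, where the record shapes and the example numbers are assumptions for illustration:

```python
# Illustrative quality signals for one assessment cycle.

def completion_rate(invited: int, completed: int) -> float:
    """Share of invited participants who finished; target is 0.90+."""
    return completed / invited if invited else 0.0

def mean_self_manager_gap(pairs: list) -> float:
    """Mean of (self - manager) across rating pairs. A large positive
    value signals systematic self-overrating, i.e. a calibration problem."""
    return sum(s - m for s, m in pairs) / len(pairs) if pairs else 0.0

def action_rate(gaps_identified: int, actions_within_30_days: int) -> float:
    """Share of identified gaps with a development action inside 30 days."""
    return actions_within_30_days / gaps_identified if gaps_identified else 0.0

print(completion_rate(invited=40, completed=37))        # 0.925, above the 90% target
print(mean_self_manager_gap([(4, 3), (5, 3), (3, 3)]))  # 1.0, leaning toward overrating
print(action_rate(gaps_identified=20, actions_within_30_days=9))  # 0.45, loop is broken
```

Score movement is the one signal that needs history rather than a single cycle: compare each person's ratings to the previous quarter and watch for suspiciously identical answers.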

FAQ

What is a competency assessment?

A competency assessment measures an individual's demonstrated proficiency in specific skills against a defined scale. It captures what someone can actually do today — not their training history or potential — using structured evaluation methods like dual self-and-manager assessment.

What is the best method for assessing employee competencies?

Dual assessment (self + manager) produces the most reliable data. Self-assessment captures skills managers don't directly observe. Manager assessment provides external calibration. The gap between the two is itself valuable diagnostic data.

How often should competency assessments be conducted?

Quarterly is optimal — frequent enough to track development, infrequent enough to avoid fatigue. Each assessment takes about 15 minutes for self-assessment and 10-15 minutes per direct report for managers.

How do you ensure competency assessments are fair and consistent?

Three practices: use behavioral indicators (specific, observable descriptions of each proficiency level), calibrate managers before assessments (ensure consistent interpretation of the scale), and use dual assessment to reduce individual bias.

What do you do with competency assessment data?

Connect it to gap analysis (identify where people fall short of role requirements), learning plans (target development at specific gaps), career pathing (show progress toward next-role readiness), and workforce planning (aggregate team and org-level capability data).

Ready to make skills visible?

See how SkillsDB puts your workforce data to work.

Book a Demo