How to Run an AI Skills Gap Assessment for Your Organization
Framework for assessing your team's AI readiness. Identify skill gaps, prioritize training needs, and build a development plan that connects to business outcomes.
Most companies that come to us for AI training have already decided they need it. The problem is they skip the step that determines whether the training actually works: figuring out where their people stand right now.
Without a skills gap assessment, you are guessing. You might put your analysts through a prompt engineering workshop when what they actually need is data literacy fundamentals. You waste budget, you waste time, and you lose the trust of employees who sat through training that was not relevant to their work.
A skills gap assessment gives you the baseline. It tells you who knows what, where the gaps are, and what to prioritize. It also gives you documentation you can use for grant funding applications, which increasingly require evidence of specific training needs.
Why Assess Before You Train
Training without assessment leads to predictable problems.
You overspend. Generic AI training for an entire organization is expensive and inefficient. When you assess first, you target investment at the gaps that matter most to your business outcomes.
You miss the real gaps. We have worked with companies where leadership was convinced the problem was technical -- their people did not know how to use AI tools. The actual gap was strategic: the team could use ChatGPT just fine, but nobody knew how to evaluate whether an AI solution was the right approach for a given business problem.
You cannot measure progress. If you do not know where your team was before training, you have no way to demonstrate that the training worked. This matters for internal ROI conversations and for grant reporting.
The Four Skill Levels Framework
We use a four-level framework to categorize AI skills. This is not about job titles or seniority -- it is about the depth of understanding someone needs as AI becomes part of the workflow.
| Level | Label | Description | Can This Person... |
|---|---|---|---|
| 1 | Awareness | Understands what AI is, what it can and cannot do, and why it matters to the business. | Explain AI concepts in plain language. Identify where AI might apply. Recognize AI hype vs. reality. |
| 2 | Literacy | Can evaluate AI tools and proposals. Understands data requirements, risks, and ethical considerations. | Assess vendor claims. Ask the right questions about data and privacy. Make informed decisions about AI investments. |
| 3 | Proficiency | Uses AI tools effectively in daily work. Can design prompts, interpret outputs, and integrate AI into existing processes. | Build effective prompts. Validate AI outputs. Automate repetitive tasks. Use AI-assisted analytics tools. |
| 4 | Mastery | Builds, customizes, and deploys AI solutions. Understands model selection, fine-tuning, and system integration. | Develop custom AI applications. Evaluate and select models. Build data pipelines. Integrate AI into production systems. |
Not everyone needs to reach the same level. The goal is to get each person to the level their role requires, not to make everyone an AI expert.
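The framework reduces to a simple ordered scale, which makes it easy to work with programmatically once you start comparing assessed levels against role requirements. A minimal sketch (the names here are just the labels from the table above, not part of any formal spec):

```python
from enum import IntEnum

class SkillLevel(IntEnum):
    """The four-level framework as an ordered scale."""
    AWARENESS = 1
    LITERACY = 2
    PROFICIENCY = 3
    MASTERY = 4

def meets_requirement(current: SkillLevel, required: SkillLevel) -> bool:
    """A person meets the bar when their assessed level is at
    or above what their role requires."""
    return current >= required

# A manager assessed at Literacy, in a role that requires Proficiency:
print(meets_requirement(SkillLevel.LITERACY, SkillLevel.PROFICIENCY))  # False
```

Because the levels are ordered, "meets requirement" is a single comparison -- there is no need to enumerate every valid combination.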
How to Assess: Four Methods That Work
No single assessment method gives you the full picture. We use a combination of four approaches.
Surveys give you broad coverage quickly. We ask people to rate their comfort across specific AI competencies -- not just "how much do you know about AI" but "can you write a prompt that constrains an LLM's output format" and "can you identify when an AI recommendation is based on biased data." Self-assessments have a known limitation (people are bad at rating themselves), so we combine survey data with the other three methods.
Structured interviews are fifteen-minute conversations with a sample of employees across roles. These reveal context that surveys miss. Someone might rate themselves low because they do not know the terminology, but they have been using AI tools under a different name for months. The reverse happens too.
Observation means watching people work. How do they interact with existing tools? Where do they get stuck? This is especially valuable for identifying the difference between someone who can technically use a tool and someone who uses it well.
Task-based evaluation gives people a realistic task and measures their approach. For someone who should be at Proficiency, that might be: "Use an AI tool to categorize these customer complaints and identify the top three themes." We look at process -- do they validate the output, do they question the categories, do they recognize when the tool gets something wrong.
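Since no single method is reliable on its own, the four signals have to be combined into one estimate per person. A rough sketch of how that might look -- the weights below are illustrative assumptions (self-report down-weighted relative to observed performance), not a published methodology:

```python
# Illustrative weights: surveys are cheap but least reliable,
# task-based evaluation is the strongest signal.
METHOD_WEIGHTS = {"survey": 0.15, "interview": 0.25, "observation": 0.25, "task": 0.35}

def composite_level(scores: dict[str, float]) -> float:
    """Weighted average of per-method level estimates (each on the 1-4 scale).
    Methods that were not run are skipped and the remaining weights renormalized."""
    available = {m: s for m, s in scores.items() if m in METHOD_WEIGHTS}
    total_weight = sum(METHOD_WEIGHTS[m] for m in available)
    return sum(METHOD_WEIGHTS[m] * s for m, s in available.items()) / total_weight

# Someone who self-rates at 3 but scores 2 on the task lands closer to 2:
print(composite_level({"survey": 3.0, "task": 2.0}))  # 2.3
```

Renormalizing over the methods actually used means a quick survey-plus-task pass and a full four-method assessment produce comparable numbers.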
Our free module works as a quick version of this. It gives people a hands-on AI task and lets them see where they stand.
Mapping Skills to Roles
The assessment only becomes useful when you map the required skill level to each role. Here is the pattern we see most often:
- Executive leadership needs Literacy at minimum. They need to evaluate proposals, understand risks, and make investment decisions. A CEO who cannot distinguish between a reasonable AI project and a vendor selling hype is a liability.
- Middle managers need Literacy plus early Proficiency. They translate strategy into action and manage people using AI tools daily.
- Analysts, marketers, and knowledge workers need Proficiency. They use AI tools every day for data analysis, content creation, and process automation. They need to validate outputs and know the limits.
- Developers and technical staff need Proficiency moving toward Mastery. They build integrations, maintain systems, and customize AI tools for your use cases.
- Frontline staff need Awareness at minimum, with targeted Proficiency for any AI tools they use directly.
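The role-to-level pattern above is just a lookup table, and the gap for any individual is the distance between where the role needs them and where the assessment placed them. A minimal sketch (role names and minimum levels taken from the list above; treat them as a starting point, not a prescription):

```python
# Minimum required level per role, on the 1-4 scale
# (1 = Awareness, 2 = Literacy, 3 = Proficiency, 4 = Mastery).
ROLE_REQUIREMENTS = {
    "executive": 2,        # Literacy at minimum
    "middle_manager": 2,   # Literacy, moving into Proficiency
    "knowledge_worker": 3, # Proficiency
    "developer": 3,        # Proficiency, moving toward Mastery
    "frontline": 1,        # Awareness at minimum
}

def skill_gap(role: str, assessed_level: int) -> int:
    """Levels still to close; 0 means the person already meets the role's bar."""
    return max(0, ROLE_REQUIREMENTS[role] - assessed_level)

print(skill_gap("knowledge_worker", 1))  # 2 -- Awareness to Proficiency
print(skill_gap("frontline", 3))         # 0 -- already above the bar
```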
Common Patterns We Find
After running these assessments across dozens of organizations, certain patterns repeat.
Leaders overestimate team readiness. They see people using ChatGPT and assume the organization has AI literacy. Using a chatbot is not the same as understanding how to evaluate AI for business decisions.
Tool knowledge does not equal strategic thinking. Someone proficient with a specific AI tool may still lack the ability to think critically about when and whether to use AI at all. Teams that generate impressive outputs with AI tools often cannot explain why they chose that approach over a simpler alternative.
The biggest gaps are in the middle. Most organizations have a few people with strong technical skills and a broad base with basic awareness. The gap is in Literacy and early Proficiency -- the ability to evaluate, apply, and integrate AI thoughtfully. This is where targeted training has the highest impact.
Confidence does not correlate with competence. Some of the most capable people rate themselves lowest. Some of the least capable rate themselves highest because they do not know what they do not know. This is why you cannot rely on surveys alone.
Turning the Assessment into a Training Plan
Once you have the data, build a training plan by working backward from business impact.
Start with strategic priorities. Which business outcomes matter most in the next 12 months? Those priorities determine which skill gaps to close first.
Map gaps to priorities. If your top priority is operational efficiency and your operations team is at Awareness when they need Proficiency, that is your highest-priority training investment.
Design training in stages. Do not try to move everyone from Awareness to Mastery in a single program. Move them one level at a time with practical application between stages. Our training programs are structured this way -- each level builds on the previous one with hands-on work between sessions.
Set measurable outcomes. "Improve AI skills" is not a goal. "Enable the analytics team to independently build and validate AI-assisted reports within 90 days" is a goal. You run the assessment again after training to measure progress.
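Working backward from business impact can be made concrete with a simple scoring pass: rank each gap by how big it is, how strategic the outcome it blocks, and how many people it affects. The scoring formula below is an illustrative assumption, not a fixed rule:

```python
def prioritize(gaps: list[dict]) -> list[dict]:
    """Rank training investments: bigger gaps tied to higher-priority
    business outcomes and affecting more people come first."""
    return sorted(
        gaps,
        key=lambda g: g["gap"] * g["priority"] * g["headcount"],
        reverse=True,
    )

plan = prioritize([
    # gap = levels to close, priority = strategic weight (1-3), headcount = team size
    {"team": "operations", "gap": 2, "priority": 3, "headcount": 12},
    {"team": "marketing",  "gap": 1, "priority": 2, "headcount": 8},
])
print(plan[0]["team"])  # operations (score 72 vs. 16) trains first
```

Even a crude score like this forces the conversation the article describes: the operations team's two-level gap against a top strategic priority outranks a smaller gap elsewhere.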
Supporting Grant Applications
If you are a Michigan employer, the Going PRO Talent Fund and similar grant programs can cover a significant portion of your training costs. But these programs require evidence that you have identified specific skill gaps tied to business needs.
A formal skills gap assessment gives you exactly what grant applications ask for: documentation of current skill levels, identification of specific gaps, a training plan that addresses those gaps, and a method for measuring outcomes. Instead of writing "our team needs AI training," you write "our assessment identified that 73% of our analysts are at Awareness level for AI-assisted data analysis, while their roles require Proficiency." That specificity is the difference between applications that get funded and applications that get rejected.
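The kind of figure a grant application wants ("73% of our analysts are at Awareness level") falls straight out of the assessment data. A small sketch with made-up assessment results, just to show the shape of the calculation:

```python
from collections import Counter

def level_distribution(assessed_levels: list[int]) -> dict[int, float]:
    """Share of people at each skill level -- the number
    grant applications ask you to document."""
    counts = Counter(assessed_levels)
    n = len(assessed_levels)
    return {level: counts[level] / n for level in sorted(counts)}

# Illustrative results for an 11-person analyst team (1 = Awareness, etc.)
analysts = [1, 1, 1, 2, 1, 3, 1, 1, 2, 1, 1]
dist = level_distribution(analysts)
print(round(dist[1] * 100))  # 73 -- percent of analysts at Awareness
```

Re-running the same calculation after training gives you the before/after comparison that both internal ROI conversations and grant reporting require.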
Getting Started
Start with a simple self-assessment. Have each team member rate themselves against the four skill levels for the AI competencies most relevant to their role. This takes 15 minutes and gives you a rough picture of where you stand.
For the full picture -- one you can use for strategic planning and grant applications -- you need a structured assessment with interviews and task-based evaluation. That is what we do in our AI consulting engagements. We run the assessment, deliver a detailed report with role-by-role gap analysis, and build a training plan that connects directly to your business priorities.
The companies that get the most value from AI training are the ones that know exactly what they need before they start.
About the Author
Founder & Principal Consultant
Josh helps SMBs implement AI and analytics that drive measurable outcomes. With experience building data products and scaling analytics infrastructure, he focuses on practical, cost-effective solutions that deliver ROI within months, not years.