Learn what really stuck from your workshop or training

After a workshop or training, "everyone nodded" doesn't mean everyone understood. Multiple-choice quizzes test recognition, not real comprehension, and they hide misconceptions.

Lenswyse uses AI judging to evaluate open-ended answers in seconds, so participants can explain concepts in their own words instead of picking from a list. Every response is scored on the same three transparent lenses (correctness, completeness, clarity), revealing what people actually understood.

Free plan available. Start a learning check in minutes.

When to use this in your team or workshop

  • During a workshop to check understanding before moving on
  • At the end of a training session to see what participants actually retained
  • After explaining a process or policy to confirm shared understanding
  • In onboarding to ensure new team members grasp key concepts
  • As an original icebreaker that gets people thinking, writing, and discussing from the start

How Lenswyse works in this scenario

1. Host sets the challenge

You create an open-ended question that tests understanding, like "In your own words, describe the 3 key steps of [topic]" or "Give one example of how you'll apply [concept] in your work." You can add your own context about the material you've covered, your organization's processes, or specific case studies so the AI understands what participants should know when evaluating answers. Unlike multiple-choice quizzes, open questions reveal how people think, not just whether they guessed right.
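
If you're curious what such a challenge amounts to technically, it is essentially a small structured definition: the question, optional context, and the lenses to score against. Here is a minimal sketch in Python (the Challenge class and its field names are illustrative assumptions, not Lenswyse's actual data model):

    from dataclasses import dataclass

    @dataclass
    class Challenge:
        # The open-ended question participants will answer.
        question: str
        # Background material the AI should assume participants know.
        context: str = ""
        # Evaluation lenses applied to every answer.
        lenses: tuple = ("correctness", "completeness", "clarity")

    challenge = Challenge(
        question="In your own words, describe the 3 key steps of our release process.",
        context="Our release process: code review, staging deploy, production rollout.",
    )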

2. Participants answer on their own devices

Everyone writes their response on their device. This gives them time to think and formulate their answer, and it ensures you're testing actual understanding, not just quick recall.

3. AI evaluates answers using lenses

The AI applies the same lenses (correctness, completeness, clarity) to every answer and evaluates the whole group in seconds, regardless of size. Because the criteria are consistent and transparent, you get an objective view of understanding across the group.
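
Conceptually, lens-based judging like this can be prototyped by sending each answer to a language model along with the question, context, and a fixed rubric, then parsing one score per lens from the reply. A rough sketch under those assumptions (call_llm is a placeholder stub and the JSON score format is invented for illustration; this is not Lenswyse's actual API):

    import json

    LENSES = ("correctness", "completeness", "clarity")

    def call_llm(prompt: str) -> str:
        # Placeholder: wire in your model provider of choice here.
        # Returns a canned reply so the sketch runs end to end.
        return json.dumps({"correctness": 8, "completeness": 6, "clarity": 9})

    def judge(question: str, context: str, answer: str) -> dict:
        # The same rubric is sent for every answer, so each participant
        # is scored against identical criteria.
        prompt = (
            f"Question: {question}\n"
            f"Context: {context}\n"
            f"Answer: {answer}\n"
            f"Score the answer 0-10 on each of: {', '.join(LENSES)}. "
            'Reply with JSON only, e.g. {"correctness": 7, ...}.'
        )
        return json.loads(call_llm(prompt))

    scores = judge("Describe our release process.",
                   "Review, staging deploy, production rollout.",
                   "First we review the code, then deploy to staging.")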

4. Group sees rankings and discusses

Results are displayed ranked by the AI's evaluation, with scores and short AI feedback. You can ask "Why did this answer score high on correctness but low on completeness?" and turn the evaluation into a real learning moment. Review top answers to highlight correct understanding, address common misconceptions, and clarify any points that multiple people missed or misunderstood.
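
The ranking itself is then a simple aggregation over the per-lens scores. A small sketch, assuming equal weighting of the three lenses (a simplification for illustration; the participant names and scores are made up):

    # Per-participant lens scores, e.g. produced by a judging step like the one above.
    results = {
        "Ana":  {"correctness": 9, "completeness": 7, "clarity": 8},
        "Ben":  {"correctness": 6, "completeness": 9, "clarity": 7},
        "Cleo": {"correctness": 8, "completeness": 5, "clarity": 9},
    }

    # Average the lenses per participant and sort descending for the leaderboard.
    ranked = sorted(results.items(),
                    key=lambda kv: sum(kv[1].values()) / len(kv[1]),
                    reverse=True)

    for name, lens_scores in ranked:
        avg = sum(lens_scores.values()) / len(lens_scores)
        print(f"{name}: {avg:.1f}  {lens_scores}")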

[Example challenge card from the app: title "Design Thinking"; context: "Solve the problem by understanding users first, then defining what needs fixing, and finally proposing solutions you could test."; question: "How would you approach improving the user experience of our mobile app?"]

Example challenges you can run

  • "Explain our customer support process talking like Yoda."

    Lighthearted, tongue-in-cheek icebreaker. Tests understanding while lowering pressure (correctness, clarity).

  • "What are the three main goals of our new travel expense policy?"

    Fact-based question. Checks whether participants understood and can recall key information (correctness, completeness).

  • "How would you apply Design Thinking to improve our internal onboarding process?"

    Design Thinking applied to a real company context. Tests structured thinking and practical application (correctness, completeness, clarity).

  • "Come up with one creative way we could reduce meeting fatigue in our team."

    Creativity-focused question. Reveals how original the ideas are and how clearly and completely they're expressed (clarity, completeness).

Why AI judging makes this better than classic quizzes

AI evaluates answers fast: All answers are scored in seconds, regardless of group size, so you get immediate insight into what participants understood without manually reviewing responses one by one.

Open questions instead of multiple choice: AI judging makes rich, open-ended answers practical instead of rigid quizzes. When someone misunderstands a concept, their written explanation shows exactly where the confusion lies.

Consistent, transparent criteria: The AI applies the same lenses (correctness, completeness, clarity) to every answer, giving you an objective view of understanding across the group.

Deeper discussion, not just a scoreboard: Scores and short AI feedback give you a starting point for reflection. Asking why an answer scored high on one lens but low on another turns the evaluation into a real learning moment.

Built-in documentation: All answers and scores are captured automatically. Useful for workshop reports, follow-ups, and tracking progress over time.

Try this in your next session

Ready to check what your participants really learned? Start an interactive learning check with Lenswyse and get real insight into what stuck. Perfect for workshops, trainings, and team activities.
