Goal: Implement a model of adaptive assessment in Numbas, for:
- diagnostic tests
- assessment for learning
The DIAGNOSYS model:
- A knowledge graph of topics, linked by dependency.
- One question per node on the graph.
- Classify each node as "passed" or "failed".
- After a question is answered, propagate the result through the graph: a pass also marks its prerequisite topics passed; a fail also marks the topics that depend on it failed.
- Learning objectives are subsets of the nodes, e.g. "Algebra", "Calculus".
- Use student's qualifications to estimate starting point.
- Limited "lives" for retrying a question.
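The pass/fail propagation over the dependency graph can be sketched as follows. This is a minimal sketch, not the actual implementation; the topic names and the direction of inference (pass implies prerequisites passed, fail implies dependents failed) are taken from the notes above.

```python
from collections import defaultdict

class KnowledgeGraph:
    """Topics linked by dependency; one question per node."""

    def __init__(self, edges):
        # edges: (prerequisite, dependent) pairs
        self.depends_on = defaultdict(set)  # node -> its prerequisites
        self.leads_to = defaultdict(set)    # node -> topics that build on it
        self.state = {}                     # node -> "passed" / "failed"
        for pre, dep in edges:
            self.depends_on[dep].add(pre)
            self.leads_to[pre].add(dep)

    def _closure(self, node, neighbours):
        # All nodes reachable from `node` along `neighbours`, including itself.
        seen, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(neighbours[n])
        return seen

    def answer(self, node, correct):
        # A correct answer implies every prerequisite is known;
        # a wrong answer implies everything built on this topic is not.
        if correct:
            for n in self._closure(node, self.depends_on):
                self.state[n] = "passed"
        else:
            for n in self._closure(node, self.leads_to):
                self.state[n] = "failed"
```

For example, with arithmetic → algebra → calculus, a correct answer on the algebra question also marks arithmetic passed, while a wrong answer on algebra would also mark calculus failed.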
Inspired by Duolingo.
- Roughly linear.
- Each topic has several questions.
- All questions must be answered correctly.
- Failed questions are put back on the end of a queue.
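The Duolingo-style loop can be sketched as a queue where failed questions go to the back for another attempt. The "lives" rule (each failure costs one life, and the session ends when they run out) is an assumption borrowed from the note above, not a description of Duolingo's exact mechanics.

```python
from collections import deque

def run_session(questions, answer_fn, lives=3):
    """Ask every question until it is answered correctly.
    A failed question is re-queued at the back and costs one life."""
    queue = deque(questions)
    while queue and lives > 0:
        q = queue.popleft()
        if answer_fn(q):
            continue          # answered correctly: done with this question
        lives -= 1
        queue.append(q)       # retry later
    return lives > 0          # True = every question eventually answered
```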
Talk by Mo Jebara at EAMS 2018
- Inner loop: immediate question feedback
- Middle loop: pick a question within a topic
- Outer loop: pick a topic
- Update P(pass topic) after each answer.
- Stop asking questions when confidence is high.
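The "update P(pass topic) after each answer, stop when confident" step can be sketched as a single Bayes update per answer. The slip and guess likelihoods (0.9 and 0.2) and the 0.95 stopping threshold are illustrative assumptions, not values from the talk.

```python
def update_pass_probability(p, correct,
                            p_correct_if_mastered=0.9,
                            p_correct_if_not=0.2):
    """One Bayes step: P(topic mastered | this answer).
    The two likelihoods model slips and lucky guesses (assumed values)."""
    if correct:
        num = p * p_correct_if_mastered
        den = num + (1 - p) * p_correct_if_not
    else:
        num = p * (1 - p_correct_if_mastered)
        den = num + (1 - p) * (1 - p_correct_if_not)
    return num / den

def confident(p, threshold=0.95):
    # Stop asking questions once we are confident either way.
    return p >= threshold or p <= 1 - threshold
```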
Computerised adaptive testing:
- Estimate the student's knowledge level on a linear scale.
- Move up or down based on answers.
- Ask N questions, chosen based on student's level.
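The move-up/move-down rule is essentially a staircase procedure, sketched below. The 0–10 scale, step size, and question count are illustrative assumptions; `answer_fn` stands in for asking a question pitched at the current level.

```python
def estimate_level(answer_fn, start=5, lo=0, hi=10, n_questions=8, step=1):
    """Staircase sketch: ask a question at the current level,
    move up after a correct answer and down after a wrong one."""
    level = start
    for _ in range(n_questions):
        correct = answer_fn(level)
        level = min(hi, level + step) if correct else max(lo, level - step)
    return level
```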
In Numbas:
- The exam author defines topics and learning objectives.
- Topics have "depends on" / "leads to" relations.
- One question group per topic.
- Controlled by a diagnostic algorithm.
- Some built-in, can extend or write your own.
(See the documentation)
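Numbas's built-in diagnostic algorithms are scripted in its own notation (see the documentation); as a language-agnostic sketch of the shape a pluggable algorithm takes, here is an entirely hypothetical interface with the simplest possible strategy, walking the topics in order. None of these names come from the real Numbas API.

```python
class LinearDiagnostic:
    """Hypothetical diagnostic-algorithm sketch (not the Numbas API):
    ask each topic's questions in order, regardless of the answers."""

    def __init__(self, topics):
        # topics: {topic_name: [question, ...]}, one question group per topic
        self.queue = [(t, q) for t, qs in topics.items() for q in qs]
        self.i = 0

    def next_question(self):
        # Returns (topic, question), or None when the exam is over.
        if self.i >= len(self.queue):
            return None
        return self.queue[self.i]

    def submit(self, correct):
        # A cleverer algorithm would branch on `correct`; this one just advances.
        self.i += 1
```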
I've reimplemented DIAGNOSYS in Numbas.
numbas.mathcentre.ac.uk/exam/22135/diagnosys
Need to write lots of questions.
Must think hard about the model of knowledge, and the relations between topics.
To do:
- Use DIAGNOSYS on students
- Improve the editing interface: more easily configurable settings
- Implement some other models
Your input is very welcome!

