
v3_evaluate_tutor_questions


Evaluates a set of tutor questions to determine their effectiveness and relevance for educational purposes. This tool is useful for educators seeking to enhance their question quality and alignment with learning objectives.

Prompt Text

(system message not captured in this extraction)

{questions}

Evaluation Results

1/28/2026

Overall Score: 1.95/5 (average across all 3 models)
Best Performing Model: openai:gpt-5-mini, 2.13/5 (low confidence)

Rank  Model                         Score      adh  cla  com  In  Out    Cost
1     openai:gpt-5-mini             2.13/5.00  1.2  4.3  0.9  70  2,763  $0.0055
2     anthropic:claude-3-5-haiku    1.89/5.00  0.9  4.1  0.6  80  324    $0.0014
3     google:gemini-2.5-flash-lite  1.82/5.00  0.9  4.1  0.5  50  288    $0.0001
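The overall score appears to be the plain mean of the three per-model scores; a quick sanity check (the unweighted averaging method is an assumption, not stated on the page):

```python
# Model names and scores are copied from the results above.
scores = {
    "openai:gpt-5-mini": 2.13,
    "anthropic:claude-3-5-haiku": 1.89,
    "google:gemini-2.5-flash-lite": 1.82,
}

# Unweighted mean across the three models (assumed aggregation method).
overall = sum(scores.values()) / len(scores)
print(f"{overall:.2f}/5")  # prints "1.95/5", matching the reported Overall Score
```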