Prompt performance, not guesswork
Every prompt tested across the same models. Scored by independent AI judges.
Evaluating across gpt-5-mini, claude-3-5-haiku, and gemini-2.5-flash-lite
Agents
All scores are aggregated using multi-judge consensus (GPT-4o Mini + Claude 3 Haiku).
7 prompts found
| Category   | Prompt                    | Best Model            | Overall | Winner |
|------------|---------------------------|-----------------------|---------|--------|
| Extraction | assumption-checker        | claude-3-5-haiku      | 2.8     | 4.7    |
| Extraction | react-chat-json-korean    | gpt-5-mini            | 2.6     | 3.0    |
| Extraction | coruja                    | gemini-2.5-flash-lite | 2.6     | 3.4    |
| Extraction | superagent                | gemini-2.5-flash-lite | 1.9     | 2.2    |
| Extraction | react-agent-template      | gpt-5-mini            | 1.7     | 2.0    |
| Extraction | ciudadela-lyra-v0_querier | claude-3-5-haiku      | 1.6     | 2.5    |
| Extraction | react                     | gemini-2.5-flash-lite | 1.5     | 2.7    |
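The multi-judge consensus described above can be sketched as follows. This is a minimal illustrative sketch, assuming each judge returns an independent numeric score and the consensus is their mean; the function name and per-judge weights are hypothetical, not the site's actual implementation.

```python
from statistics import mean

def consensus_score(judge_scores: dict[str, float]) -> float:
    """Aggregate per-judge scores into a single consensus value.

    judge_scores maps a judge model name (e.g. the GPT-4o Mini and
    Claude 3 Haiku judges mentioned above) to its score for one prompt.
    The consensus here is the unweighted mean, rounded to one decimal
    to match the displayed Overall/Winner scores.
    """
    return round(mean(judge_scores.values()), 1)

# Illustrative scores only; not taken from the leaderboard data.
scores = {"gpt-4o-mini": 2.9, "claude-3-haiku": 2.7}
print(consensus_score(scores))  # 2.8
```

A weighted mean or a disagreement penalty would be drop-in variations on the same aggregation step.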
