Prompt performance,
not guesswork
Every prompt is tested across the same set of models and scored by independent AI judges.
Evaluating across models including GPT-5 Mini, Claude 3.5 Haiku, and Gemini 2.5 Flash Lite
All scores are aggregated using multi-judge consensus (GPT-4o Mini + Claude 3 Haiku).
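The multi-judge consensus above can be sketched as a simple aggregation over per-judge scores. This is a minimal illustration, not the site's documented method: the averaging rule, the `consensus_score` function, and the sample score values are all assumptions; only the judge names (GPT-4o Mini, Claude 3 Haiku) come from the page.

```python
from statistics import mean

def consensus_score(judge_scores: dict[str, float]) -> float:
    """Aggregate per-judge scores into one consensus score.

    Assumption: consensus is a plain mean across judges, rounded
    to one decimal to match the leaderboard's score format.
    """
    return round(mean(judge_scores.values()), 1)

# Hypothetical per-judge scores for one prompt/model pair:
scores = {"gpt-4o-mini": 2.6, "claude-3-haiku": 3.0}
consensus_score(scores)  # → 2.8
```

A weighted mean or outlier-trimmed aggregation would drop in at the same point if the judges are not weighted equally.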
5 prompts found
Category        Prompt                         Best Model              Overall  Winner
Extraction      assumption-checker             claude-3-5-haiku        2.8      4.7
Extraction      chat-langchain-rephrase        gemini-2.5-flash-lite   2.6      2.9
Extraction      rephrase                       gpt-5-mini              2.1      2.6
Summarization   librarian_guide                claude-3-5-haiku        2.0      2.4
Extraction      conversation-title-generator   gpt-5-mini              1.6      1.9
