Prompt performance,
not guesswork
Every prompt tested across the same models. Scored by independent AI judges.
Evaluating across gpt-5-mini, claude-3-5-haiku, and gemini-2.5-flash-lite
Prompt collection: anthropic-cookbook
All scores are aggregated using multi-judge consensus (GPT-4o Mini + Claude 3 Haiku).
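The multi-judge consensus above can be sketched in a few lines. This is a minimal illustration, not the site's actual implementation: it assumes each judge independently assigns a 1-5 score and that "consensus" means an unweighted mean; the judge names mirror the judges listed above, and the score values are made up.

```python
from statistics import mean

def consensus(judge_scores: dict[str, float]) -> float:
    """Aggregate independent judge scores into one consensus score.

    Uses a simple unweighted mean, one plausible consensus rule;
    rounded to one decimal to match the leaderboard's score format.
    """
    return round(mean(judge_scores.values()), 1)

# Illustrative per-judge scores for a single prompt/model pair.
scores = {
    "gpt-4o-mini": 4.5,
    "claude-3-haiku": 4.7,
}
print(consensus(scores))  # 4.6
```

Using more than one judge dampens any single model's scoring bias; a weighted mean or a median would be drop-in alternatives to `mean` here.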
9 prompts found

| Category | Prompt | Best Model | Overall | Winner |
|---|---|---|---|---|
| Summarization | Legal Document Summarizer | gemini-2.5-flash-lite | 4.6 | 4.9 |
| Extraction | RAG Query Answering | gpt-5-mini | 4.0 | 4.1 |
| Summarization | Guided Legal Summary Generator | gemini-2.5-flash-lite | 3.4 | 4.8 |
| Extraction | Citation Extraction Agent | claude-3-5-haiku | 2.9 | 3.7 |
| Classification | Customer Support Ticket Classifier | gpt-5-mini | 2.7 | 3.3 |
| Summarization | Long Document Sublease Summarizer | gemini-2.5-flash-lite | 2.7 | 4.4 |
| Extraction | Text-to-SQL with Chain-of-Thought | claude-3-5-haiku | 2.3 | 2.6 |
| Extraction | Text-to-SQL Converter | gpt-5-mini | 2.3 | 3.4 |
| Extraction | Text-to-SQL with Few-Shot Examples | gpt-5-mini | 2.2 | 2.9 |
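One plausible reading of each card's numbers, sketched below: "Overall" is the mean consensus score across all evaluated models, and "Winner" is the best model's own score. That interpretation is an assumption, as are the score values; the model names come from the leaderboard.

```python
from statistics import mean

def summarize(model_scores: dict[str, float]) -> tuple[str, float, float]:
    """Derive (best_model, overall, winner) from per-model consensus scores.

    Assumes "Overall" = mean across models and "Winner" = the top
    model's score -- one plausible reading of the leaderboard columns.
    """
    best_model = max(model_scores, key=model_scores.get)
    overall = round(mean(model_scores.values()), 1)
    return best_model, overall, model_scores[best_model]

# Illustrative per-model consensus scores for a single prompt.
scores = {
    "gpt-5-mini": 4.0,
    "claude-3-5-haiku": 3.8,
    "gemini-2.5-flash-lite": 4.9,
}
print(summarize(scores))  # ('gemini-2.5-flash-lite', 4.2, 4.9)
```

A large gap between Winner and Overall (as in Guided Legal Summary Generator, 4.8 vs 3.4) would then signal a prompt that works well on one model but transfers poorly to the others.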
