Prompt performance, not guesswork
Every prompt is tested across the same models and scored by independent AI judges.
Evaluating across gpt-5-mini, claude-3-5-haiku, and gemini-2.5-flash-lite
All scores are aggregated using multi-judge consensus (GPT-4o Mini + Claude 3 Haiku).
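The aggregation described above can be sketched as a simple average of per-judge scores. This is a minimal illustration, not the site's actual implementation; the `consensus_score` helper, the judge names, and the 0–5 score scale are assumptions for the example.

```python
from statistics import mean

def consensus_score(judge_scores: dict[str, float]) -> float:
    """Hypothetical helper: average per-judge scores into one consensus score.

    judge_scores maps judge name -> score (assumed 0-5 scale),
    rounded to one decimal place as shown on the leaderboard.
    """
    return round(mean(judge_scores.values()), 1)

# Example: two judges scoring one prompt/model pair
scores = {"gpt-4o-mini": 3.8, "claude-3-haiku": 3.6}
print(consensus_score(scores))  # 3.7
```

Other consensus schemes (e.g. weighting judges or taking the median) would slot into the same shape.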
8 prompts found
| Category      | Prompt                   | Best Model            | Overall | Winner |
|---------------|--------------------------|-----------------------|---------|--------|
| Extraction    | rag-prompt-llama3        | gemini-2.5-flash-lite | 3.7     | 3.9    |
| Extraction    | rag-prompt               | gemini-2.5-flash-lite | 3.4     | 3.8    |
| Extraction    | rag-prompt-llama         | gpt-5-mini            | 3.3     | 3.6    |
| Extraction    | rag-prompt-mistral       | gpt-5-mini            | 3.3     | 3.8    |
| Extraction    | rag-answer-hallucination | gemini-2.5-flash-lite | 2.7     | 3.7    |
| Summarization | reduce-prompt            | claude-3-5-haiku      | 2.4     | 2.9    |
| Extraction    | text-to-sql              | gpt-5-mini            | 2.3     | 2.6    |
| Extraction    | map-prompt               | claude-3-5-haiku      | 2.0     | 2.3    |
