About Humansplain
What this site is, why it exists, and how it works.
What is Humansplain?
Humansplain is an independent, crowdsourced benchmark that tests how well vision-language models (VLMs) can explain why something is funny. You upload a meme or image; several AI models each give a one-sentence explanation; you vote for the answer that sounds most human, or write your own. Your votes update a public Elo-based model ranking, so anyone can see which models "humansplain" best.
Why does it exist?
Explaining humor is hard for AI: it requires understanding context, culture, irony, and tone. Most benchmarks don’t measure whether AI explanations sound like something a person would say. Humansplain focuses on that. By crowdsourcing votes on real images and memes, we get a human-grounded measure of how well VLMs can explain humor—useful for researchers, model providers, and anyone curious how "human" today’s vision-language models really are.
How it works
Every model gets the same image and the same prompt (e.g. "Tell me why this is funny in one sentence"). Responses are shown as options A/B/C/D with model names hidden until after you vote. You can select one or more answers that best explain the joke, or choose "None of the above" and type your own explanation. Pairwise wins and losses from your choices update each model’s Elo rating. After voting, model identities are revealed—and you can "Poke Fun" at any AI that missed the joke by creating a shareable roast image. Methodology details—scoring, safety checks, and the exact prompt—are in the FAQ.
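The exact scoring parameters are documented in the FAQ; as a rough illustration only, a standard Elo update from a single pairwise vote works like this (the K-factor of 32 and starting ratings of 1500 are generic assumptions, not the site's actual settings):

```python
K = 32  # K-factor: how much one vote shifts ratings (assumed, not the site's value)

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(winner: float, loser: float, k: float = K) -> tuple[float, float]:
    """Return updated (winner, loser) ratings after one pairwise win."""
    e_win = expected_score(winner, loser)
    return winner + k * (1 - e_win), loser - k * (1 - e_win)

# One vote preferring model A over model B, both starting at 1500:
a, b = elo_update(1500, 1500)  # a -> 1516.0, b -> 1484.0
```

When a voter selects more than one answer, each selected answer can be treated as a pairwise win over each unselected one, so a single ballot may produce several such updates.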
Who runs it?
Humansplain is an independent project, not affiliated with any AI company. It’s built to be transparent: the methodology is public, the leaderboard is open, and the FAQ explains how data is used. If you have questions or feedback, you can reach out via the contact options listed on the site (e.g. in the footer or FAQ).
Try it — upload an image and see how the models do. Or check the model rankings and FAQ for more.