---
dataset_info:
  features:
    - name: pun
      dtype: string
    - name: prefix
      dtype: string
    - name: definition
      dtype: string
    - name: answer
      sequence: string
    - name: phonetic
      dtype: int64
    - name: realistic
      dtype: int64
    - name: typology
      sequence: string
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: main
      num_bytes: 49417
      num_examples: 350
    - name: contaminated
      num_bytes: 2642
      num_examples: 20
    - name: few_shot
      num_bytes: 1382
      num_examples: 10
  download_size: 37114
  dataset_size: 53441
configs:
  - config_name: default
    data_files:
      - split: main
        path: data/main-*
      - split: contaminated
        path: data/contaminated-*
      - split: few_shot
        path: data/few_shot-*
license: mit
task_categories:
  - question-answering
language:
  - en
---

# Phunny: A Humor-Based QA Benchmark for Evaluating LLM Generalization

Welcome to Phunny, a humor-based question answering (QA) benchmark designed to evaluate the reasoning and generalization abilities of large language models (LLMs) through structured puns.

This repository accompanies our ACL 2025 main track paper:
"What do you call a dog that is incontrovertibly true? Dogma: Testing LLM Generalization through Humor"

To reproduce our experiments, the code is available on GitHub.

## Overview

Phunny consists of 350 novel, manually curated structured puns, created through a two-stage process: creative human design followed by automated contamination checks to ensure novelty.

All puns follow the same structure:

What do you call a X that Y? XZ

- X is a prefix (a subword of XZ)
- Y is a natural language definition of the answer XZ
- XZ is the pun answer (it starts with the prefix X) and is meant to be humorous

For example:

What do you call a dog that is incontrovertibly true? Dogma
→ “Dog” (X) + “dogma” (XZ), where “dogma” means a set of incontrovertible truths.

We define three tasks to evaluate different aspects of LLM capabilities:

- **Pun Comprehension**: Can an LLM distinguish between coherent and nonsensical puns?

- **Pun Resolution**: Can an LLM infer the correct punchline based on the question? (A minimal prompt sketch follows this list.)

- **Pun Generation**: Can an LLM produce novel Phunny-style puns? We test this in two modes:

  - Free: unconstrained generation
  - Constrained: generation based on a provided prefix X
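To make the Resolution setup concrete, here is a minimal sketch of how a question can be assembled from a record's fields. This is not the paper's exact prompt; the field names follow the schema below and the template follows the pun structure described in the Overview.

```python
# A minimal sketch (not the paper's exact prompt) of building a Pun Resolution
# query from a dataset record. Field names (`prefix`, `definition`) follow the
# schema listed under "Data Fields".

def resolution_question(example: dict) -> str:
    """Format a Phunny record as a 'What do you call a X that Y?' question."""
    return f"What do you call a {example['prefix']} that {example['definition']}?"

# Illustrative record (values are assumptions, not copied from the dataset).
record = {"prefix": "dog", "definition": "is incontrovertibly true"}
print(resolution_question(record))
# -> What do you call a dog that is incontrovertibly true?
```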

## Data Fields

- `pun`: the complete pun (question and answer)
- `prefix`: the subject of the question/pun
- `definition`: the meaning of the question/pun
- `answer`: the punchline
- `phonetic`: whether the punchline starts with the same pronunciation as the prefix
- `realistic`: whether the pun itself is realistic
- `typology`: the part of speech of the prefix (noun, adjective, or verb)
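As an illustration only, a record with these fields could look like the following; the concrete values are assumptions based on the example pun above, not copied from the dataset.

```python
# Illustrative record matching the schema above. The exact values, including
# how `definition` is phrased and the 0/1 convention assumed for `phonetic`
# and `realistic`, are guesses for illustration.
example = {
    "pun": "What do you call a dog that is incontrovertibly true? Dogma",
    "prefix": "dog",
    "definition": "is incontrovertibly true",
    "answer": ["dogma"],      # sequence: accepted punchline(s)
    "phonetic": 1,            # assumed 1 = prefix and answer share pronunciation
    "realistic": 1,           # assumed 1 = the pun is realistic
    "typology": ["noun"],     # part of speech of the prefix
}
```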

## Data Splits

This dataset has 3 splits: Main, Contaminated, and Few-shot.

| Dataset Split | Number of Instances | Content |
|---------------|---------------------|---------|
| Main          | 350 | Set of puns used in our experiments to evaluate LLMs |
| Contaminated  | 20  | Phunny-like puns already present on the web (excluded from our evaluation) |
| Few-shot      | 10  | Puns used as in-context examples for the Resolution and Generation tasks |
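The splits can be loaded with the Hugging Face `datasets` library. The sketch below uses a placeholder repository id, which should be replaced with this dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub id of this dataset.
REPO_ID = "<namespace>/Phunny"

main = load_dataset(REPO_ID, split="main")                   # 350 puns used for evaluation
contaminated = load_dataset(REPO_ID, split="contaminated")   # 20 puns already found on the web
few_shot = load_dataset(REPO_ID, split="few_shot")           # 10 in-context examples

print(main[0]["pun"])
```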

## Citation

```bibtex
@inproceedings{cocchieri-etal-2025-call,
    title = "``What do you call a dog that is incontrovertibly true? Dogma'': Testing {LLM} Generalization through Humor",
    author = "Cocchieri, Alessio  and
      Ragazzi, Luca  and
      Italiani, Paolo  and
      Tagliavini, Giuseppe  and
      Moro, Gianluca",
    editor = "Che, Wanxiang  and
      Nabende, Joyce  and
      Shutova, Ekaterina  and
      Pilehvar, Mohammad Taher",
    booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = jul,
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.acl-long.1117/",
    doi = "10.18653/v1/2025.acl-long.1117",
    pages = "22922--22937",
    ISBN = "979-8-89176-251-0",
    abstract = "Humor, requiring creativity and contextual understanding, is a hallmark of human intelligence, showcasing adaptability across linguistic scenarios. While recent advances in large language models (LLMs) demonstrate strong reasoning on various benchmarks, it remains unclear whether they truly adapt to new tasks like humans (i.e., generalize) or merely replicate memorized content. To explore this, we introduce Phunny, a new humor-based question-answering benchmark designed to assess LLMs' reasoning through carefully crafted puns. Our dataset is manually curated to ensure novelty and minimize data contamination, providing a robust evaluation of LLMs' linguistic comprehension. Experiments on pun comprehension, resolution, and generation reveal that most LLMs struggle with generalization, even on simple tasks, consistently underperforming the human baseline. Additionally, our detailed error analysis provides valuable insights to guide future research."
}
```