alecocc committed · verified
Commit a09be6f · Parent: 4c913fc

Update README.md

Files changed (1)
  1. README.md +89 -55
README.md CHANGED
@@ -1,56 +1,90 @@
- ---
- dataset_info:
-   features:
-   - name: pun
-     dtype: string
-   - name: prefix
-     dtype: string
-   - name: definition
-     dtype: string
-   - name: answer
-     sequence: string
-   - name: phonetic
-     dtype: int64
-   - name: realistic
-     dtype: int64
-   - name: typology
-     sequence: string
-   - name: __index_level_0__
-     dtype: int64
-   splits:
-   - name: main
-     num_bytes: 49417
-     num_examples: 350
-   - name: contaminated
-     num_bytes: 2642
-     num_examples: 20
-   - name: few_shot
-     num_bytes: 1382
-     num_examples: 10
-   download_size: 37114
-   dataset_size: 53441
- configs:
- - config_name: default
-   data_files:
-   - split: main
-     path: data/main-*
-   - split: contaminated
-     path: data/contaminated-*
-   - split: few_shot
-     path: data/few_shot-*
- license: mit
- task_categories:
- - question-answering
- language:
- - en
- ---
-
- # Phunny dataset for humor-based question answering
-
- Phunny comprises 350 instances, each representing a novel, non-contaminated English pun.
-
-
- ### Data Fields

  - `pun`: the complete pun (question/answer)
  - `prefix`: the subject of the question/pun
@@ -60,14 +94,14 @@ Phunny comprises 350 instances, each representing a novel, non-contaminated Engl
  - `realistic`: whether the pun itself is real
  - `typology`: whether the prefix itself is a noun, adjective, or verb

- ### Data Splits

  This dataset has 3 splits: _main_, _contaminated_, and _few_shot_.

  | Dataset Split | Number of Instances | Content |
  | ------------- | ------------------- | ------------------------------------------------------------------------------ |
  | Main | 350 | set of puns used in our experiments to evaluate LLMs |
- | Contaminated | 20 | list of Phunny-like puns already present on the web |
  | Few-shot | 10 | puns used as in-context examples for the Resolution and Generation tasks |

  # Cite article
 
+ ---
+ dataset_info:
+   features:
+   - name: pun
+     dtype: string
+   - name: prefix
+     dtype: string
+   - name: definition
+     dtype: string
+   - name: answer
+     sequence: string
+   - name: phonetic
+     dtype: int64
+   - name: realistic
+     dtype: int64
+   - name: typology
+     sequence: string
+   - name: __index_level_0__
+     dtype: int64
+   splits:
+   - name: main
+     num_bytes: 49417
+     num_examples: 350
+   - name: contaminated
+     num_bytes: 2642
+     num_examples: 20
+   - name: few_shot
+     num_bytes: 1382
+     num_examples: 10
+   download_size: 37114
+   dataset_size: 53441
+ configs:
+ - config_name: default
+   data_files:
+   - split: main
+     path: data/main-*
+   - split: contaminated
+     path: data/contaminated-*
+   - split: few_shot
+     path: data/few_shot-*
+ license: mit
+ task_categories:
+ - question-answering
+ language:
+ - en
+ ---
+
+ # Phunny: A Humor-Based QA Benchmark for Evaluating LLM Generalization
+
+ Welcome to **Phunny**, a humor-based question answering (QA) benchmark designed to evaluate the reasoning and generalization abilities of large language models (LLMs) through structured puns.
+
+ This repository accompanies our **ACL 2025 main track paper**:
+ ["What do you call a dog that is incontrovertibly true? Dogma: Testing LLM Generalization through Humor"](https://aclanthology.org/2025.acl-long.1117.pdf)
+
+ ## Overview
+
+ **Phunny** consists of 350 novel, manually curated structured puns, created through a two-stage process: creative human design followed by automated contamination checks to ensure novelty.
+
+ All puns follow the same structure:
+ ```
+ What do you call a X that Y? XZ
+ ```
+
+ - **X** is a prefix (subword of XZ)
+ - **Y** is a natural language definition of the answer XZ
+ - **XZ** is the pun answer (that starts with the prefix X), meant to be humorous
+
+ For example:
+
+ > What do you call a dog that is incontrovertibly true? **Dogma**
+ > → “Dog” (X) + “dogma” (XZ), where “dogma” means a set of incontrovertible truths.
+
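+ The structural constraint is easy to check programmatically. The following is a minimal illustrative sketch (not code from the paper or this repository; the helper names are ours) that tests whether a string follows the template and whether the answer really starts with its prefix:
+
+ ```python
+ import re
+
+ # Hypothetical helper, for illustration only: matches the
+ # "What do you call a X that Y? XZ" template.
+ PATTERN = re.compile(r"What do you call an? (?P<x>\w+) that (?P<y>.+)\? (?P<xz>\w+)")
+
+ def follows_template(pun: str) -> bool:
+     m = PATTERN.match(pun)
+     # XZ must start with the prefix X (case-insensitive)
+     return m is not None and m.group("xz").lower().startswith(m.group("x").lower())
+
+ print(follows_template("What do you call a dog that is incontrovertibly true? Dogma"))  # True
+ ```
+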
+ We define three tasks to evaluate different aspects of LLM capabilities:
+
+ - **Pun Comprehension**
+   Can an LLM distinguish between coherent and nonsensical puns?
+
+ - **Pun Resolution**
+   Can an LLM infer the correct punchline based on the setup?
+
+ - **Pun Generation**
+   Can an LLM produce novel Phunny-style puns? We test this in two modes:
+   - *Free*: unconstrained generation
+   - *Constrained*: generation based on a provided prefix X
+
+
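+ For intuition, a Resolution query can be built directly from a pun's prefix (X) and definition (Y). The sketch below is illustrative only and not necessarily the exact prompt format used in the paper:
+
+ ```python
+ # Illustrative only: builds a Resolution-style question from the
+ # `prefix` (X) and `definition` (Y) fields of an instance.
+ def resolution_question(prefix: str, definition: str) -> str:
+     return f"What do you call a {prefix} that {definition}?"
+
+ print(resolution_question("dog", "is incontrovertibly true"))
+ # -> What do you call a dog that is incontrovertibly true?
+ ```
+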
+ ## Data Fields

  - `pun`: the complete pun (question/answer)
  - `prefix`: the subject of the question/pun

  - `realistic`: whether the pun itself is real
  - `typology`: whether the prefix itself is a noun, adjective, or verb

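+ An example instance, using the pun above (the field values here are illustrative; the stored values may differ):
+
+ ```python
+ # Hypothetical record, for illustration only.
+ example = {
+     "pun": "What do you call a dog that is incontrovertibly true? Dogma",
+     "prefix": "dog",
+     "definition": "is incontrovertibly true",
+     "answer": ["dogma"],   # sequence of accepted answers
+     "phonetic": 1,         # int64 flag in the schema; value assumed here
+     "realistic": 1,        # int64 flag in the schema; value assumed here
+     "typology": ["noun"],  # part of speech of the prefix
+ }
+ ```
+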
+ ## Data Splits

  This dataset has 3 splits: _main_, _contaminated_, and _few_shot_.

  | Dataset Split | Number of Instances | Content |
  | ------------- | ------------------- | ------------------------------------------------------------------------------ |
  | Main | 350 | set of puns used in our experiments to evaluate LLMs |
+ | Contaminated | 20 | list of Phunny-like puns already present on the web (excluded from our evaluation) |
  | Few-shot | 10 | puns used as in-context examples for the Resolution and Generation tasks |

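+ All three splits can be loaded with the `datasets` library; the repository id below is a placeholder for this dataset's Hub id:
+
+ ```python
+ from datasets import load_dataset
+
+ # "<user>/phunny" is a placeholder; substitute the actual Hub repo id.
+ main = load_dataset("<user>/phunny", split="main")
+ few_shot = load_dataset("<user>/phunny", split="few_shot")
+
+ print(main[0]["pun"])  # inspect the first pun
+ ```
+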
  # Cite article