0xhimzel committed ee57f8e · 1 Parent(s): f2e0115

Update README.md

Files changed (1):
  1. README.md +33 -0

README.md CHANGED
@@ -10,4 +10,37 @@ pinned: false
  license: mit
  ---
 
+ # Model Card for `Hello-SimpleAI/chatgpt-detector-roberta`
+
+ This model is trained on **a mix of full answers and split sentences** from the `answer` fields of [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3).
+
+ For more details, see [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597) and the GitHub project [Hello-SimpleAI/chatgpt-comparison-detection](https://github.com/Hello-SimpleAI/chatgpt-comparison-detection).
+
+ The base checkpoint is [roberta-base](https://huggingface.co/roberta-base).
+ We train it on all [Hello-SimpleAI/HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3) data (without a held-out split) for 1 epoch.
+
+ (Training for 1 epoch is consistent with the experiments in [our paper](https://arxiv.org/abs/2301.07597).)
+
+ ## Citation
+
+ If you use this model, please cite [arXiv: 2301.07597](https://arxiv.org/abs/2301.07597):
+
+ ```
+ @article{guo-etal-2023-hc3,
+     title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
+     author = "Guo, Biyang and
+       Zhang, Xin and
+       Wang, Ziyuan and
+       Jiang, Minqi and
+       Nie, Jinran and
+       Ding, Yuxuan and
+       Yue, Jianwei and
+       Wu, Yupeng",
+     journal = "arXiv preprint arXiv:2301.07597",
+     year = "2023",
+ }
+ ```
+
  Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
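A minimal usage sketch for the detector described above, not taken from the card itself: it assumes the standard Hugging Face `transformers` text-classification pipeline can load the checkpoint by its hub id, and the exact label names the model emits are an assumption, not confirmed here.

```python
def detect(texts, model_id="Hello-SimpleAI/chatgpt-detector-roberta"):
    """Classify text(s) as human- or ChatGPT-written.

    Requires `pip install transformers torch` and network access to
    download the checkpoint on first use. The returned label names
    (e.g. "Human" / "ChatGPT") are an assumption about the model's
    config, not something stated in this card.
    """
    # Imported lazily so the sketch only needs transformers when called.
    from transformers import pipeline

    clf = pipeline("text-classification", model=model_id)
    # Returns a list of {"label": ..., "score": ...} dicts, one per input.
    return clf(texts)


if __name__ == "__main__":
    print(detect(["Hello! How can I help you today?"]))
```

Because the detector was trained on HC3 `answer` texts, it is likely to be most reliable on answer-length passages rather than very short fragments.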