
Quantization made by Richard Erkhov.

  • Github
  • Discord
  • Request more models

SEA-S - GGUF

| Name | Quant method | Size |
| ---- | ------------ | ---- |
| SEA-S.Q2_K.gguf | Q2_K | 2.53GB |
| SEA-S.IQ3_XS.gguf | IQ3_XS | 2.81GB |
| SEA-S.IQ3_S.gguf | IQ3_S | 2.96GB |
| SEA-S.Q3_K_S.gguf | Q3_K_S | 2.95GB |
| SEA-S.IQ3_M.gguf | IQ3_M | 3.06GB |
| SEA-S.Q3_K.gguf | Q3_K | 3.28GB |
| SEA-S.Q3_K_M.gguf | Q3_K_M | 3.28GB |
| SEA-S.Q3_K_L.gguf | Q3_K_L | 3.56GB |
| SEA-S.IQ4_XS.gguf | IQ4_XS | 3.67GB |
| SEA-S.Q4_0.gguf | Q4_0 | 3.83GB |
| SEA-S.IQ4_NL.gguf | IQ4_NL | 3.87GB |
| SEA-S.Q4_K_S.gguf | Q4_K_S | 3.86GB |
| SEA-S.Q4_K.gguf | Q4_K | 4.07GB |
| SEA-S.Q4_K_M.gguf | Q4_K_M | 4.07GB |
| SEA-S.Q4_1.gguf | Q4_1 | 4.24GB |
| SEA-S.Q5_0.gguf | Q5_0 | 4.65GB |
| SEA-S.Q5_K_S.gguf | Q5_K_S | 4.65GB |
| SEA-S.Q5_K.gguf | Q5_K | 4.78GB |
| SEA-S.Q5_K_M.gguf | Q5_K_M | 4.78GB |
| SEA-S.Q5_1.gguf | Q5_1 | 5.07GB |
| SEA-S.Q6_K.gguf | Q6_K | 5.53GB |
| SEA-S.Q8_0.gguf | Q8_0 | 7.17GB |
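As a rough sanity check on the table above, file size tracks the effective bits per weight of each quantization: size in bytes times 8, divided by the 7.24B parameter count. A minimal sketch (the helper name and the selection of rows are illustrative):

```python
# Estimate effective bits per weight for the GGUF quants listed above.
# Sizes come from the table; the 7.24B parameter count is from this card.

PARAMS_B = 7.24  # parameter count, in billions


def bits_per_weight(size_gb: float, params_b: float = PARAMS_B) -> float:
    """File size (GB) * 8 bits per byte, spread over the parameter count (B)."""
    return size_gb * 8 / params_b


# A few rows from the table:
for name, size_gb in {"Q2_K": 2.53, "Q4_K_M": 4.07, "Q8_0": 7.17}.items():
    print(f"{name}: ~{bits_per_weight(size_gb):.2f} bits/weight")
```

For example, Q4_K_M works out to roughly 4.5 bits per weight, consistent with a 4-bit scheme that stores extra scaling metadata; this kind of estimate helps pick the largest quant that fits a given RAM or VRAM budget.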

Original model description:

license: apache-2.0
tags:
- Automated Peer Reviewing
- SFT
datasets:
- ECNU-SEA/SEA_data

Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis

Paper Link: https://arxiv.org/abs/2407.12857

Project Page: https://ecnu-sea.github.io/

πŸ”₯ News

  • πŸ”₯πŸ”₯πŸ”₯ SEA has been accepted by EMNLP 2024!
  • πŸ”₯πŸ”₯πŸ”₯ We have released the SEA series models (7B)!

Model Description

⚠️ This is the SEA-S model for content standardization, and the review model SEA-E can be found here.

The SEA-S model integrates all reviews of each paper into a single review, eliminating redundancy and errors while focusing on the paper's major advantages and disadvantages. Specifically, we first use GPT-4 to merge the multiple reviews of a paper (from ECNU-SEA/SEA_data) into one review with a unified format, consistent criteria, and constructive content, forming an instruction dataset for SFT. We then fine-tune Mistral-7B-Instruct-v0.2 on this dataset to distill the knowledge of GPT-4. SEA-S thus provides a novel paradigm for integrating peer review data in a unified format across various conferences.
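Since SEA-S is fine-tuned from Mistral-7B-Instruct-v0.2, its input presumably follows the Mistral `[INST] ... [/INST]` instruct template. A hedged sketch of assembling several raw reviews into one standardization prompt; the instruction wording and helper name here are placeholders, not the actual prompt from the SEA training data:

```python
def build_standardization_prompt(reviews: list[str]) -> str:
    """Wrap raw peer reviews in the Mistral-7B-Instruct-v0.2 [INST] template.

    The instruction text below is illustrative; the exact wording used to
    train SEA-S is defined by the ECNU-SEA/SEA_data instruction dataset.
    """
    joined = "\n\n".join(
        f"Review {i}:\n{text}" for i, text in enumerate(reviews, start=1)
    )
    instruction = (
        "Integrate the following peer reviews into a single review in a "
        "unified format, keeping the major advantages and disadvantages "
        "and removing redundancy:\n\n" + joined
    )
    return f"<s>[INST] {instruction} [/INST]"


prompt = build_standardization_prompt(
    ["Strong empirical results, but limited novelty.",
     "Well written; needs more baselines."]
)
print(prompt)
```

Note that many runtimes (e.g. llama.cpp) prepend the `<s>` BOS token automatically, in which case it should be omitted from the string.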

@inproceedings{yu2024automated,
  title={Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis},
  author={Yu, Jianxiang and Ding, Zichen and Tan, Jiaqi and Luo, Kangyang and Weng, Zhenmin and Gong, Chenghua and Zeng, Long and Cui, RenJing and Han, Chengcheng and Sun, Qiushi and others},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  pages={10164--10184},
  year={2024}
}
Downloads last month: 17
Format: GGUF
Model size: 7.24B params
Architecture: llama