nielsr (HF Staff) committed
Commit 35e1dab · verified · 1 Parent(s): 289a41e

Improve model card: Add pipeline tag, library name, and GitHub link


This PR enhances the model card by adding the `pipeline_tag: text-generation` and `library_name: transformers` metadata. This improves model discoverability on the Hugging Face Hub and enables the "Load with Transformers" widget. Additionally, it includes a direct link to the associated GitHub repository for the project, providing users with access to the original codebase.
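For context, a minimal sketch (not part of the PR itself) of what the added `pipeline_tag: text-generation` metadata enables: once set, the checkpoint is associated with the text-generation task and can be loaded through the generic `transformers` pipeline API. The model id comes from the diff below; the prompt and generation settings here are illustrative only.

```python
from transformers import pipeline

# Illustrative sketch: with `pipeline_tag: text-generation` in the card metadata,
# the Hub routes this checkpoint to the text-generation task.
generator = pipeline(
    "text-generation",
    model="azzzacs/LogicCoder-7B",
    device_map="auto",
    trust_remote_code=True,  # mirrors the README usage shown in the diff below
)

# The README recommends inserting <think> in the prompt to activate reasoning mode;
# this raw-string prompt is a simplification of the chat-template usage in the diff.
result = generator("<think>\nPlease write a Python quick sort algorithm.\n", max_new_tokens=256)
print(result[0]["generated_text"])
```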

Files changed (1)
  1. README.md +12 -6
README.md CHANGED
@@ -1,11 +1,13 @@
   ---
- license: mit
- datasets:
- - open-r1/codeforces-cots
   base_model:
   - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
+ datasets:
+ - open-r1/codeforces-cots
+ license: mit
   tags:
   - code
+ pipeline_tag: text-generation
+ library_name: transformers
   ---

   # Paper Page
@@ -18,6 +20,8 @@ tags:

   This model was fine-tuned on pruned CoTs examples derived via our **ASAP** method(**A**nchor-guided, **S**urpris**a**l-polished **P**runing), focusing on highly compressed yet semantically informative reasoning traces.

+ GitHub Repository: [https://github.com/azzzacs/ASAP](https://github.com/azzzacs/ASAP)
+
   # 🧠 Reasoning Mode

   We recommend **explicitly activating reasoning mode by inserting ```<think>``` in the prompt**.
@@ -30,8 +34,10 @@ from transformers import AutoTokenizer, AutoModelForCausalLM
   tokenizer = AutoTokenizer.from_pretrained("azzzacs/LogicCoder-7B", trust_remote_code=True)
   model = AutoModelForCausalLM.from_pretrained("azzzacs/LogicCoder-7B", device_map="auto", trust_remote_code=True).eval()

- message = [{"role": "user", "content": "Please write a Python quick sort algorithm.\n"}]
- prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>\n"
+ message = [{"role": "user", "content": "Please write a Python quick sort algorithm.
+ "}]
+ prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False) + "<|Assistant|><think>
+ "

   model_inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

@@ -43,4 +49,4 @@ outputs = model.generate(
   )

   print(tokenizer.decode(outputs[0][len(model_inputs.input_ids[0]):], skip_special_tokens=False))
- ```
+ ```
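
Once merged, a quick way to confirm the new metadata is to read the card back with `huggingface_hub` (a minimal sketch; assumes the PR has landed on the main revision):

```python
from huggingface_hub import ModelCard

# Load the model card and inspect the front-matter fields this PR adds.
card = ModelCard.load("azzzacs/LogicCoder-7B")
print(card.data.pipeline_tag)  # expected: "text-generation"
print(card.data.library_name)  # expected: "transformers"
```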