Dr. Jorge Abreu Vicente committed
Commit 5ad0932 · 1 Parent(s): 2ee412b

update model card README.md

Files changed (1)
  1. README.md +14 -30
README.md CHANGED
@@ -21,15 +21,13 @@ model-index:
     metrics:
     - name: Precision
       type: precision
-      value: 0.9214459062460084
+      value: 0.9120703437250199
     - name: Recall
       type: recall
-      value: 0.9505863750164713
+      value: 0.9449275362318841
     - name: F1
       type: f1
-      value: 0.9357893371384097
+      value: 0.9282082570673175
-widget:
-- text: "Figure 2A. HEK293T cells were transfected with MYC-FOXP3 and FLAG-USP44 encoding expression constructs using Polyethylenimine. 48hrs post-transfection, cells were harvested, lysed, and anti-FLAG or anti-MYC antibody coated beads were used to immunoprecipitate the given labeled protein along with its binding partner. Co-IP'ed proteins were subjected to SDS-PAGE followed by immunoblot analysis. Antibodies recognizing FLAG or MYC tags were used to probe for USP44 and FOXP3, respectively. B. Endogenous co-IP of USP44 and FOXP3 in murine iTregs. iTregs were generated as in Fig. 1 from naïve CD4+ T cells FACS-isolated from pooled suspensions of the lymph node and spleen cells of wild-type C57BL/6 mice (n = 2-3 / experiment). iTregs were lysed and key proteins were immunoprecipitated using either anti-USP44 (right panel) or anti-FOXP3 (left panel) antibody. Proteins pulled down in this experiment were then resolved and analyzed by immunoblot using anti-FOXP3 or anti-USP44 antibodies. C. Endogenous co-IP of USP44 and FOXP3 in murine nTregs. nTregs (CD4+CD25high) isolated by FACS were activated by anti-CD3 and anti-CD28 (1 and 4 μg/ml, respectively) overnight in the presence of IL-2 (100 U/ml). The cells were lysed and proteins were immunoprecipitated using either anti-Foxp3 (left panel) or anti-Usp44 (right panel). Proteins pulled down in this experiment were then resolved and identified with the indicated antibodies. D. Naïve murine CD4+ T cells were isolated by FACS from lymph node and spleen cell suspensions of USP44fl/fl CD4Cre+ mice and of their wild-type littermates (USP44fl/fl CD4Cre- mice; n = 2-3 / group / experiment). iTreg cells were generated from these mice as described for Fig. 1 before incubation on a microscope slide pre-coated with poly-L-lysine for 1h. Adhered cells were then fixed with PFA for 0.5h, followed by blocking with 1% BSA for 1h, then incubation with the specified antibodies. Representative confocal microscopy images (40X) were visualized for endogenous USP44 (red) and FOXP3 (Baxter et al.). DAPI was used to visualize cell nuclei (blue); scale bar 50μm."
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -39,11 +37,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-large](https://huggingface.co/michiyasunaga/BioLinkBERT-large) on the source_data_nlp dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0047
-- Accuracy Score: 0.9983
-- Precision: 0.9214
-- Recall: 0.9506
-- F1: 0.9358
+- Loss: 0.0051
+- Accuracy Score: 0.9981
+- Precision: 0.9121
+- Recall: 0.9449
+- F1: 0.9282
 
 ## Model description
 
@@ -68,32 +66,18 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adafactor
 - lr_scheduler_type: linear
-- num_epochs: 2.0
+- num_epochs: 1.0
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy Score | Precision | Recall | F1 |
 |:-------------:|:-----:|:----:|:---------------:|:--------------:|:---------:|:------:|:------:|
-| 0.0053 | 1.0 | 474 | 0.0051 | 0.9981 | 0.9193 | 0.9391 | 0.9291 |
-| 0.0035 | 2.0 | 948 | 0.0047 | 0.9983 | 0.9214 | 0.9506 | 0.9358 |
-
-### Results in test set
-
-```
-              precision    recall  f1-score   support
-
- PANEL_START       0.92      0.95      0.94      7589
-
-   micro avg       0.92      0.95      0.94      7589
-   macro avg       0.92      0.95      0.94      7589
-weighted avg       0.92      0.95      0.94      7589
-
-{'eval_loss': 0.004700918216258287, 'eval_accuracy_score': 0.9982938520454369, 'eval_precision': 0.9214459062460084, 'eval_recall': 0.9505863750164713, 'eval_f1': 0.9357893371384097, 'eval_runtime': 40.9026, 'eval_samples_per_second': 44.692, 'eval_steps_per_second': 0.196, 'epoch': 2.0}
-```
+| 0.0048 | 1.0 | 431 | 0.0051 | 0.9981 | 0.9121 | 0.9449 | 0.9282 |
+
+
 ### Framework versions
 
-- Transformers 4.15.0
+- Transformers 4.20.0
 - Pytorch 1.11.0a0+bfe5ad2
 - Datasets 1.17.0
-- Tokenizers 0.10.3
+- Tokenizers 0.12.1
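
For orientation, here is a minimal sketch of how the hyperparameters listed in this card map onto a `TrainingArguments` object in Transformers 4.20 (the version named above). Only the seed, optimizer, scheduler, and epoch count come from the card; `output_dir` is a placeholder and every other argument is left at its library default.

```python
# Minimal sketch, not the actual training script: only the four values
# listed under "training hyperparameters" are taken from the card.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="checkpoints",    # placeholder, not from the card
    seed=42,                     # - seed: 42
    optim="adafactor",           # - optimizer: Adafactor
    lr_scheduler_type="linear",  # - lr_scheduler_type: linear
    num_train_epochs=1.0,        # - num_epochs: 1.0 (after this commit)
)

print(args.optim, args.lr_scheduler_type, args.num_train_epochs, args.seed)
```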
 
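The PANEL_START label in the removed test-set report indicates this checkpoint performs token classification over figure captions (tagging panel boundaries). A hedged usage sketch follows; the Hub repo id is a placeholder, since the commit page does not name the repository.

```python
# Usage sketch only: "YOUR_ORG/YOUR_MODEL" is a placeholder repo id, not
# the actual location of this checkpoint.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="YOUR_ORG/YOUR_MODEL",    # placeholder, not from the commit
    aggregation_strategy="simple",  # merge word pieces into word-level spans
)

caption = (
    "Figure 2A. HEK293T cells were transfected with MYC-FOXP3 and FLAG-USP44. "
    "B. Endogenous co-IP of USP44 and FOXP3 in murine iTregs."
)
for span in tagger(caption):
    # Each span carries the predicted tag (e.g. PANEL_START), a confidence
    # score, and the words it covers.
    print(span["entity_group"], round(span["score"], 3), span["word"])
```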