chengan98 committed
Commit b8fcab7 · verified · 1 Parent(s): 91008df

Update README.md

Files changed (1):
  1. README.md +5 -10
README.md CHANGED
@@ -1,23 +1,18 @@
 ---
 license: apache-2.0
 ---
-<p align="center" href="https://visurg.ai/">
-  <a href="https://visurg.ai/">
-    <img src="https://cdn-uploads.huggingface.co/production/uploads/67d9504a41d31cc626fcecc8/hr0txL0zblj3i2cV77OYQ.png" alt="VISURG">
-  </a>
-</p>
 
-[📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/surg-3m)
+[📚 Paper](https://arxiv.org/abs/2503.19740) - [🤖 GitHub](https://github.com/visurg-ai/LEMON)
 
-We provide the models used in our data curation pipeline in [📚 Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740) to assist with constructing the Surg-3M dataset (for more details about the Surg-3M dataset and our
-SurgFM foundation model, please visit our GitHub repository at [🤖 GitHub](https://github.com/visurg-ai/surg-3m)).
+We provide the models used in our data curation pipeline in [📚 LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings](https://arxiv.org/abs/2503.19740) to assist with constructing the LEMON dataset (for more details about the LEMON dataset and our
+LemonFM foundation model, please visit our GitHub repository at [🤖 GitHub](https://github.com/visurg-ai/LEMON)).
 
 
 If you use our dataset, model, or code in your research, please cite our paper:
 
 ```
 @misc{che2025surg3mdatasetfoundationmodel,
-      title={Surg-3M: A Dataset and Foundation Model for Perception in Surgical Settings},
+      title={LEMON: A Large Endoscopic MONocular Dataset and Foundation Model for Perception in Surgical Settings},
       author={Chengan Che and Chao Wang and Tom Vercauteren and Sophia Tsoka and Luis C. Garcia-Peraza-Herrera},
       year={2025},
       eprint={2503.19740},
@@ -56,7 +51,7 @@ This Hugging Face repository includes video storyboard classification models, fr
 </div>
 
 
-The data curation pipeline leading to the clean videos in the Surg-3M dataset is as follows:
+The data curation pipeline leading to the clean videos in the LEMON dataset is as follows:
 <div align="center">
   <img src="https://cdn-uploads.huggingface.co/production/uploads/67d9504a41d31cc626fcecc8/yj2S0GMJm2C2AYwbr1p6G.png"> </img>
 </div>