---
license: other
pipeline_tag: text-generation
tags:
- cortex.cpp
---

## Overview
Marco-o1 focuses not only on disciplines with standard answers, such as mathematics, physics, and coding, which are well suited to reinforcement learning (RL), but also places greater emphasis on open-ended problem solving. We aim to address the question: "Can the o1 model effectively generalize to broader domains where clear standards are absent and rewards are challenging to quantify?"

Currently, the Marco-o1 Large Language Model (LLM) is powered by Chain-of-Thought (CoT) fine-tuning, Monte Carlo Tree Search (MCTS), reflection mechanisms, and innovative reasoning strategies, and is optimized for complex real-world problem-solving tasks.

## Variants

| No | Variant | Cortex CLI command |
| --- | --- | --- |
| 1 | [Marco-o1-8b](https://huggingface.co/cortexso/marco-o1/tree/8b) | `cortex run marco-o1:8b` |
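
If you prefer to download the weights before starting them, a minimal sketch is shown below. It assumes the standard `cortex pull` subcommand and the `:8b` variant tag from the table above; adjust if your Cortex install differs.

```bash
# Download the variant listed above, then start it.
# `cortex pull` and the `:8b` tag are assumed from the variant table.
cortex pull marco-o1:8b
cortex run marco-o1:8b
```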

## Use it with Jan (UI)

1. Install **Jan** using [Quickstart](https://jan.ai/docs/quickstart)
2. Use it in the Jan model Hub:
    ```bash
    cortexhub/marco-o1
    ```

## Use it with Cortex (CLI)

1. Install **Cortex** using [Quickstart](https://cortex.jan.ai/docs/quickstart)
2. Run the model with the command:
    ```bash
    cortex run marco-o1
    ```
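3. (Optional) Once the model is running, you can query it through Cortex's OpenAI-compatible HTTP API. This is a hedged sketch: `127.0.0.1:39281` is Cortex's documented default host/port but may differ in your setup, and the model name must match the one you started:
    ```bash
    # Hedged sketch: query the locally running model through Cortex's
    # OpenAI-compatible chat completions endpoint (default port assumed).
    curl http://127.0.0.1:39281/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "marco-o1",
        "messages": [{"role": "user", "content": "Briefly introduce yourself."}]
      }'
    ```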
    
## Credits

- **Author:** AIDC-AI
- **Converter:** [Homebrew](https://homebrew.ltd/)
- **Original License:** [License](https://huggingface.co/AIDC-AI/Marco-o1/blob/main/LICENSE)
- **Paper:** [Paper](https://arxiv.org/abs/2411.14405)