Update README.md
README.md CHANGED

@@ -164,12 +164,14 @@ print(make_table(results))
 
 # Exporting to ExecuTorch
 
-
+We can run the quantized model on a mobile phone using [ExecuTorch](https://github.com/pytorch/executorch).
+Once ExecuTorch is [set up](https://pytorch.org/executorch/main/getting-started.html), exporting and running the model on device is a breeze.
 
 
 ## Convert quantized checkpoint to ExecuTorch's format
 
-
+We first convert the quantized checkpoint to the format ExecuTorch's LLM export script expects by renaming some of the checkpoint keys.
+The following command does this for you.
 ```
 python -m executorch.examples.models.phi_4_mini.convert_weights phi4-mini-8dq4w.bin phi4-mini-8dq4w-converted.bin
 ```
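
For readers curious what such a conversion involves, below is a minimal sketch of a key-renaming pass over a checkpoint. The key names in `_KEY_MAP` are hypothetical placeholders, not the real mapping; the actual table lives inside `executorch.examples.models.phi_4_mini.convert_weights`.

```python
import torch

# Hypothetical key mapping, for illustration only -- the real mapping is
# defined in executorch.examples.models.phi_4_mini.convert_weights.
_KEY_MAP = {
    "model.embed_tokens.weight": "tok_embeddings.weight",
    "model.norm.weight": "norm.weight",
    "lm_head.weight": "output.weight",
}

def convert_checkpoint(src_path: str, dst_path: str) -> None:
    """Load a checkpoint, rename known keys, and save the result."""
    state_dict = torch.load(src_path, map_location="cpu")
    # Keys not present in the map are carried over unchanged.
    converted = {_KEY_MAP.get(key, key): value for key, value in state_dict.items()}
    torch.save(converted, dst_path)

if __name__ == "__main__":
    convert_checkpoint("phi4-mini-8dq4w.bin", "phi4-mini-8dq4w-converted.bin")
```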
