Update README.md
README.md
CHANGED
@@ -1,89 +1,169 @@
<a href="https://newsroom.arm.com/blog/small-language-models-on-arm" target="_blank">Small Language Models: Efficient Arm Computing Enables a Custom AI Future</a>
</li>
<li>
<a href="https://community.arm.com/arm-community-blogs/b/infrastructure-solutions-blog/posts/best-in-class-llm-performance-on-arm-neoverse-v1-based-aws-graviton3-servers" target="_blank">Best-in-class LLM Performance on Arm Neoverse V1 based AWS Graviton3 CPUs</a>
</li></p>
</ul>
<!-- Add relevant links or content here -->
<br>
<strong>Arm Learning Paths</strong>
<br><p>Tutorials designed to help you develop quality Arm software faster.</p>
<ul><p>
<li>
<a href="https://learn.arm.com/learning-paths/smartphones-and-mobile/kleidiai-on-android-with-mediapipe-and-xnnpack/" class="underline" target="_blank">LLMs on Android with KleidiAI, MediaPipe and XNNPACK</a>
</li>
<li>
<a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/llama-chatbot/" class="underline" target="_blank">Run a Large Language model (LLM) chatbot on Arm servers</a>
</li>
<li>
<a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/nlp-hugging-face/pytorch-nlp-hf/" target="_blank">Deploy an NLP model using PyTorch on Arm-based device</a>
</li>
<li>
<a href="https://learn.arm.com/learning-paths/embedded-systems/llama-python-cpu/" target="_blank">Run a local LLM chatbot on a Raspberry Pi 5</a>
</li>
<li>
<a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/benchmark-nlp/" target="_blank">Accelerate Natural Language Processing (NLP) models from Hugging Face on Arm servers</a>
</li>
</p></ul>
<p>Contribute to our Learning Paths: <a href="https://github.com/ArmDeveloperEcosystem/arm-learning-paths/discussions/categories/ideas" target="_blank">suggest a new Learning Path</a> or <a href="https://learn.arm.com/learning-paths/cross-platform/_example-learning-path/" target="_blank">create one yourself</a> with support from the Arm community.</p>
<br>
<p><i><small>Note: The data collated here is sourced from Arm and third parties. While Arm uses reasonable efforts to keep this information accurate, Arm does not warrant (express or implied) or provide any guarantee of data correctness due to the ever-evolving AI and software landscape. Any links to third party sites and resources are provided for ease and convenience. Your use of such third-party sites and resources is subject to the third party’s terms of use, and use is at your own risk.</small></i></p>
</body>
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Arm Hugging Face Learning Paths</title>
  <style>
    body {
      font-family: Arial, sans-serif;
      color: #fff;
      background-color: #121212;
    }
    table {
      width: 100%;
      border-collapse: collapse;
      margin-top: 20px;
    }
    th, td {
      border: 1px solid #444;
      padding: 10px;
      color: #ddd;
    }
    th {
      background-color: #333;
      color: #fff;
      text-align: left;
    }
    tr:nth-child(even) {
      background-color: #222;
    }
    tr:hover {
      background-color: #444;
    }
    a {
      color: #8ac4ff;
      text-decoration: none;
    }
    a:hover {
      text-decoration: underline;
    }
    small {
      color: #aaa;
    }
  </style>
</head>
<body>
<h2>Explore Arm-Optimized Learning Paths on Hugging Face</h2>
<p>Arm’s AI development resources ensure you can deploy at pace, achieving best performance on Arm by default. Our aim is to make your AI development easier, ensuring integration with all major operating systems and AI frameworks, enabling portability for deploying AI on Arm at scale.</p>
<p>
  Discover curated <strong>Learning Paths</strong> that showcase AI models optimized for Arm platforms across key market applications.
  Each Learning Path highlights specific models featured within our dedicated Hugging Face <strong>Model Collections</strong>,
  simplifying your journey from learning to deployment on Arm technologies.
</p>
<table>
  <thead>
    <tr>
      <th>Learning Path</th>
      <th>Market Application</th>
      <th>Model(s) Featured</th>
    </tr>
  </thead>
  <tbody>
    <tr><th colspan="3" style="background-color:#555; color:#fff;">CV: Image Classification & Object Detection</th></tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/profiling-ml-on-arm/" target="_blank">Profile the Performance of AI and ML Mobile Applications on Arm</a></td>
      <td>Smartphone</td>
      <td><a href="https://huggingface.co/google/mobilenet_v2_1.0_224" target="_blank">Mobilenet V2 1.0 224</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/yolo-on-himax/" target="_blank">Run a Computer Vision Model on a Himax Microcontroller</a></td>
      <td>IoT</td>
      <td><a href="https://huggingface.co/Ultralytics" target="_blank">Ultralytics</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/jetson_object_detection/" target="_blank">Get started with object detection using a Jetson Orin Nano</a></td>
      <td>IoT</td>
      <td><a href="https://huggingface.co/google/mobilenet_v2_1.0_224" target="_blank">Mobilenet V2 1.0 224</a></td>
    </tr>
    <tr><th colspan="3" style="background-color:#555; color:#fff;">GenAI: RAG</th></tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/milvus-rag/" target="_blank">Build a RAG application using Zilliz Cloud on Arm servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2" target="_blank">All Minilm L6 V2</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/milvus-rag/" target="_blank">Build a RAG application using Zilliz Cloud on Arm servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf" target="_blank">Dolphin 2.9.4 Llama3.1 8B Gguf</a></td>
    </tr>
    <tr><th colspan="3" style="background-color:#555; color:#fff;">GenAI: Text Generation</th></tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/llama-cpu/" target="_blank">Deploy a Large Language Model (LLM) chatbot with llama.cpp using KleidiAI on Arm servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/cognitivecomputations/dolphin-2.9.4-llama3.1-8b-gguf" target="_blank">Dolphin 2.9.4 Llama3.1 8B Gguf</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/pytorch-llama/" target="_blank">Run a Large Language Model (LLM) chatbot with PyTorch using KleidiAI on Arm servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct" target="_blank">Llama 3.1 8B Instruct</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/rag/" target="_blank">Deploy a RAG-based Chatbot with llama-cpp-python using KleidiAI on Google Axion processors</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/chatpdflocal/llama3.1-8b-gguf" target="_blank">Llama3.1 8B Gguf</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-llama3-chat-android-app-using-executorch-and-xnnpack/" target="_blank">Build an Android chat app with Llama, KleidiAI, ExecuTorch, and XNNPACK</a></td>
      <td>Smartphone</td>
      <td><a href="https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct" target="_blank">Llama 3.2 1B Instruct</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/llama-python-cpu/" target="_blank">Run a local LLM chatbot on a Raspberry Pi 5</a></td>
      <td>Raspberry Pi</td>
      <td><a href="https://huggingface.co/Aryanne/Orca-Mini-3B-gguf" target="_blank">Orca Mini 3B Gguf</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/build-android-chat-app-using-onnxruntime/" target="_blank">Build an Android chat application with ONNX Runtime API</a></td>
      <td>Smartphone</td>
      <td><a href="https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda" target="_blank">Phi 3 Vision 128K Instruct Onnx Cuda</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/rtp-llm/" target="_blank">Run an LLM chatbot with rtp-llm on Arm-based servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/Qwen/Qwen2-0.5B-Instruct" target="_blank">Qwen2 0.5B Instruct</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/vllm/" target="_blank">Build and Run a Virtual Large Language Model (vLLM) on Arm Servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct" target="_blank">Qwen2.5 0.5B Instruct</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/mobile-graphics-and-gaming/kleidiai-on-android-with-mediapipe-and-xnnpack/" target="_blank">LLM inference on Android with KleidiAI, MediaPipe, and XNNPACK</a></td>
      <td>Smartphone</td>
      <td><a href="https://huggingface.co/google/gemma-2b" target="_blank">Gemma 2B</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/embedded-and-microcontrollers/rpi-llama3/" target="_blank">Run Llama 3 on a Raspberry Pi 5 using ExecuTorch</a></td>
      <td>Raspberry Pi</td>
      <td><a href="https://huggingface.co/meta-llama/Llama-3.1-8B" target="_blank">Llama 3.1 8B</a></td>
    </tr>
    <tr><th colspan="3" style="background-color:#555; color:#fff;">Sentiment Analysis</th></tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/benchmark-nlp/" target="_blank">Accelerate NLP models from Hugging Face on Arm servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english" target="_blank">Distilbert Base Uncased Finetuned Sst 2 English</a></td>
    </tr>
    <tr>
      <td><a href="https://learn.arm.com/learning-paths/servers-and-cloud-computing/nlp-hugging-face/" target="_blank">Run a Natural Language Processing (NLP) model from Hugging Face on Arm servers</a></td>
      <td>Cloud & Datacenter</td>
      <td><a href="https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest" target="_blank">Twitter Roberta Base Sentiment Latest</a></td>
    </tr>
  </tbody>
</table>
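
<p>To make the Model(s) Featured column concrete, the sketch below loads the DistilBERT SST-2 model from the Sentiment Analysis rows with the Hugging Face <code>transformers</code> pipeline. It is an illustrative snippet rather than part of any specific Learning Path, and it assumes <code>transformers</code> and <code>torch</code> are already installed on your Arm machine; the Learning Paths above cover setup and performance tuning in detail.</p>
<pre><code>
# Minimal sketch: try one of the featured models on an Arm CPU.
# Assumes an aarch64 Linux machine (for example an Arm-based cloud
# instance) with `pip install transformers torch` already done.
from transformers import pipeline

# DistilBERT SST-2 is featured in the "Sentiment Analysis" rows above.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert/distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("Deploying AI on Arm was easier than expected."))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
</code></pre>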
<br>
<strong>Arm Kleidi: Unleashing Mass-Market AI Performance on Arm</strong>
<p>Arm Kleidi is a targeted software suite that expedites optimizations for any framework, accelerating billions of AI workloads across Arm-based devices everywhere. Application developers achieve top performance by default, with no additional work and no investment in new skills or tools required.</p>
<p><b>Useful Resources on Arm Kleidi:</b></p>
<ul>
  <li>Arm KleidiAI for optimizing any AI framework: <a href="https://gitlab.arm.com/kleidi/kleidiai" target="_blank">GitLab repo</a> and <a href="https://community.arm.com/arm-community-blogs/b/ai-and-ml-blog/posts/kleidiai" target="_blank">blog</a></li>
  <li>Arm KleidiCV for optimizing any computer vision framework: <a href="https://gitlab.arm.com/kleidi/kleidicv" target="_blank">GitLab repo</a> and <a href="https://community.arm.com/arm-community-blogs/b/ai-and-ml-blog/posts/kleidicv" target="_blank">blog</a></li>
  <li><a href="https://github.com/ARM-software/ComputeLibrary" target="_blank">Arm Compute Library for all AI software</a></li>
</ul>
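
<p>Because Kleidi is consumed through the frameworks themselves, application code does not call it directly. The illustrative sketch below is ordinary PyTorch inference: on an Arm CPU the work is dispatched to whatever optimized kernels your framework build ships (KleidiAI-backed where the build includes them), with no Kleidi-specific code required. It assumes only that PyTorch is installed on an aarch64 machine.</p>
<pre><code>
# Illustrative only: no Kleidi-specific APIs are used here. Whether
# KleidiAI kernels are picked up depends on the framework build,
# not on this application code.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).eval()
x = torch.randn(1, 512)

with torch.inference_mode():
    out = model(x)  # dispatched to the CPU backend's optimized kernels

print(out.shape)  # torch.Size([1, 10])
</code></pre>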
<p><i><small>Note: The data collated here is sourced from Arm and third parties. While Arm uses reasonable efforts to keep this information accurate, Arm does not warrant (express or implied) or provide any guarantee of data correctness due to the ever-evolving AI and software landscape. Any links to third-party sites and resources are provided for ease and convenience. Your use of such third-party sites and resources is subject to the third party’s terms of use, and use is at your own risk.</small></i></p>
</body>
</html>