---
license: apache-2.0
---

# Introduction

Novora Code Classifier v1 Tiny is a tiny `Text Classification` model that classifies a given code text input into 1 of `31` classes (programming languages).

This model is designed to run on CPU, but runs optimally on GPUs.

# Info
- Outputs 1 of 31 classes
- 512-token input dimension
- 128 hidden dimensions
- 2 linear layers
- The `snowflake-arctic-embed-xs` model is used as the embeddings model (see the embedding sketch below).
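
To illustrate how the 512-token input and the `snowflake-arctic-embed-xs` embeddings fit together, below is a minimal preprocessing sketch using the Hugging Face `transformers` library. The checkpoint name, the padding strategy, and the use of `last_hidden_state` as the classifier input are assumptions made for illustration, not this repository's actual preprocessing code.

```python
# Hypothetical preprocessing sketch (not this repository's actual pipeline):
# embed a code snippet with snowflake-arctic-embed-xs and truncate/pad to
# the 512-token input dimension listed above.
import torch
from transformers import AutoModel, AutoTokenizer

EMBED_MODEL = "Snowflake/snowflake-arctic-embed-xs"  # assumed checkpoint name

tokenizer = AutoTokenizer.from_pretrained(EMBED_MODEL)
embedder = AutoModel.from_pretrained(EMBED_MODEL)

code_snippet = "def add(a, b):\n    return a + b"

inputs = tokenizer(
    code_snippet,
    max_length=512,        # 512-token input dimension
    truncation=True,
    padding="max_length",
    return_tensors="pt",
)

with torch.no_grad():
    # Token-level embeddings, shape (batch, 512, embed_dim); assumed to be
    # the input to the classifier described in the Architecture section.
    token_embeddings = embedder(**inputs).last_hidden_state
```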

# Architecture

The `CodeClassifier-v-Tiny` model employs a neural network architecture optimized for text classification tasks, specifically for classifying programming languages from code snippets. This model includes:

- **Bidirectional LSTM Feature Extractor**: A bidirectional LSTM layer processes the input embeddings, capturing contextual relationships in both the forward and reverse directions of a code snippet.

- **Adaptive Pooling**: Following the LSTM, adaptive average pooling collapses the variable-length sequence of LSTM outputs into a fixed-size feature vector, accommodating inputs of different lengths.

- **Fully Connected Layers**: The network includes two linear layers. The first projects the pooled features into a hidden feature space, and the second maps them to the output classes, which correspond to the different programming languages. A dropout layer with a rate of 0.5 between the two linear layers helps mitigate overfitting.

The model's bidirectional nature and architectural components make it adept at understanding the syntax and structure crucial for code classification.
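
For concreteness, the following is a minimal PyTorch sketch of the architecture described above: a bidirectional LSTM over token embeddings, adaptive average pooling, and two linear layers separated by dropout (p=0.5). The class name, the 384-dimensional embedding input, and the exact placement of pooling and dropout are illustrative assumptions; the released implementation may differ.

```python
# Minimal PyTorch sketch of the described architecture; names and default
# dimensions (e.g. embed_dim=384) are assumptions, not the released code.
import torch
import torch.nn as nn


class CodeClassifierTiny(nn.Module):
    def __init__(self, embed_dim: int = 384, hidden_dim: int = 128, num_classes: int = 31):
        super().__init__()
        # Bidirectional LSTM feature extractor over the token embeddings.
        self.lstm = nn.LSTM(
            input_size=embed_dim,
            hidden_size=hidden_dim,
            batch_first=True,
            bidirectional=True,
        )
        # Adaptive average pooling collapses the variable-length sequence
        # axis into a single fixed-size feature vector.
        self.pool = nn.AdaptiveAvgPool1d(1)
        # Two linear layers with a dropout layer (p=0.5) between them.
        self.fc1 = nn.Linear(hidden_dim * 2, hidden_dim)
        self.dropout = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        lstm_out, _ = self.lstm(token_embeddings)                 # (batch, seq_len, 2 * hidden_dim)
        pooled = self.pool(lstm_out.transpose(1, 2)).squeeze(-1)  # (batch, 2 * hidden_dim)
        hidden = self.dropout(self.fc1(pooled))
        return self.fc2(hidden)                                   # (batch, num_classes) logits


# Example: classify two randomly "embedded" snippets of 512 tokens each.
model = CodeClassifierTiny()
logits = model(torch.randn(2, 512, 384))
predicted_language = logits.argmax(dim=-1)
```

With `hidden_dim=128` and 31 output classes, this mirrors the figures in the Info section; pooling over the sequence axis is what lets the classifier handle variable-length snippets.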