Commit 1097202
Parent: 9585116
Update README.md

README.md CHANGED
@@ -19,10 +19,10 @@ code-autocomplete, a code completion plugin for Python.
 Open source repo:[code-autocomplete](https://github.com/shibing624/code-autocomplete),support GPT2 model, usage:
 
 ```python
-from autocomplete.
-
-
-print(
+from autocomplete.gpt2_coder import GPT2Coder
+
+m = GPT2Coder("shibing624/code-autocomplete-gpt2-base")
+print(m.generate('import torch.nn as')[0])
 ```
 
 Also, use huggingface/transformers:
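The usage this commit completes calls `GPT2Coder.generate`, which extends the prompt one token at a time. As a rough illustration of what greedy decoding does under the hood (a toy vocabulary and hand-built scores, not the real GPT2Coder internals), one might sketch:

```python
# Toy "model": next-token scores depend only on the current token.
# The chain is hand-built so greedy decoding completes "import"
# into "import torch . nn as" -- purely illustrative.
VOCAB = ["import", "torch", ".", "nn", "as"]
SCORES = [[0.0] * len(VOCAB) for _ in VOCAB]
for cur, nxt in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    SCORES[cur][nxt] = 10.0  # strongly prefer the next token in the chain

def greedy_generate(start_id, max_new_tokens=4):
    """Repeatedly append the argmax next token, as greedy decoding does."""
    ids = [start_id]
    for _ in range(max_new_tokens):
        row = SCORES[ids[-1]]
        ids.append(max(range(len(row)), key=row.__getitem__))
    return " ".join(VOCAB[i] for i in ids)

print(greedy_generate(0))  # -> import torch . nn as
```

The real model replaces the lookup table with GPT2 logits, and `generate` typically supports sampling strategies beyond pure argmax.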
@@ -83,31 +83,9 @@ from torch import nn
 self.embedding_size = embedding_size
 ====================
 import numpy as np
-
-
-
-
-
-class PredicterDNN(nn.Module):
-@classmethod
-@parameterized.expand([0.5, 2.5] + (10, 10))
-@classmethod
-@static
-def add(self, sample_rate, max_iters=self.max_iters, mask_fre
-====================
-import java.util.ArrayList[Tuple[Int]],
-
-====================
-def factorial(n): number of elements per dimension,
-assert len(n) > 1
-n.append(self.n_iters)
-n = n_iter(self.n_norm)
-
-def _score(
-====================
-
-Process finished with exit code 0
-
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
 ```
 
 Model files:
@@ -130,7 +108,7 @@ cd autocomplete
 python create_dataset.py
 ```
 
-If you want train code-autocomplete GPT2 model,refer [https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/
+If you want train code-autocomplete GPT2 model,refer [https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py](https://github.com/shibing624/code-autocomplete/blob/main/autocomplete/gpt2_coder.py)
 
 
 ### About GPT2
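The hunk above references `python create_dataset.py` before pointing at the GPT2 training code. As a hedged sketch of the kind of train/validation split such a dataset script typically performs (the function name, ratio, and seed are illustrative assumptions, not the repo's actual code):

```python
import random

def split_dataset(samples, valid_ratio=0.1, seed=42):
    """Shuffle code samples and split them into train/validation sets."""
    rng = random.Random(seed)  # fixed seed for a reproducible split
    shuffled = list(samples)
    rng.shuffle(shuffled)
    n_valid = max(1, int(len(shuffled) * valid_ratio))
    return shuffled[n_valid:], shuffled[:n_valid]

train, valid = split_dataset([f"def f{i}(): pass" for i in range(100)])
print(len(train), len(valid))  # -> 90 10
```

A fixed seed keeps the split stable across runs, so validation metrics stay comparable between training experiments.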
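The README's "About GPT2" section concerns the model behind the plugin. GPT2 is trained with a causal language-modeling objective: each position predicts the next token, scored by cross-entropy. A minimal sketch of that loss (plain Python and toy numbers, not the actual training code):

```python
import math

def causal_lm_loss(logits, token_ids):
    """Average next-token cross-entropy: position t predicts token t+1."""
    total = 0.0
    for t in range(len(token_ids) - 1):
        row = logits[t]  # scores over the vocabulary at position t
        m = max(row)  # subtract the max for numerical stability
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        total += log_z - row[token_ids[t + 1]]  # -log p(next token)
    return total / (len(token_ids) - 1)

# Uniform logits carry no information, so the loss is log(vocab_size);
# a model concentrating weight on the right next token drives it to 0.
uniform = [[0.0, 0.0, 0.0, 0.0]] * 3
print(round(causal_lm_loss(uniform, [0, 1, 2, 3]), 4))  # -> 1.3863
```

Training a code-completion GPT2 is just this objective applied to tokenized source files.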