title | parent | created | editor | creator | edited | refs | text | id
---|---|---|---|---|---|---|---|---|
Streamlit Caching
|
Streamlit Usages
|
Jul 15, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
- `st.cache_data`
- `st.cache_resource`
> [Caching - Streamlit Docs](https://docs.streamlit.io/library/advanced-features/caching)
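A minimal sketch of the two decorators (the cached functions below are illustrative assumptions, not from the docs):
```python
import streamlit as st
import pandas as pd

@st.cache_data  # caches serializable return values such as DataFrames
def load_csv(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

@st.cache_resource  # caches global resources such as models or DB connections
def get_model():
    from transformers import pipeline  # hypothetical model dependency
    return pipeline("sentiment-analysis")
```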
|
a22ccb53e76349dc9ccf31b616d84083
|
|
Streamlit Cloud
|
Streamlit Usages
|
May 24, 2023
|
Alan Jo
|
Alan Jo
|
May 24, 2023
|
> [Embed your app - Streamlit Docs](https://docs.streamlit.io/streamlit-community-cloud/get-started/embed-your-app)
|
8c045600c4314f6382bd3c05358dd2ba
|
|
Streamlit UI
|
Streamlit Usages
|
May 14, 2023
|
Alan Jo
|
Alan Jo
|
May 14, 2023
|
### Streamlit UIs
|Title|
|:-:|
|[Streamlit Chat](https://texonom.com/streamlit-chat-73c8557f0d094b3fac7448e10f013cb4)|
|[Streamlit Extras](https://texonom.com/streamlit-extras-02c8d0d7eebf4c03adce4e843305e2e7)|
|[Streamlit Pills](https://texonom.com/streamlit-pills-cc0ce4c16ac44d1798242fc990f7eb92)|
|
4d1410d0bdb54e1ba4512e03f500b0b7
|
|
Streamlit Widget
|
Streamlit Usages
|
Jul 17, 2023
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
### Streamlit Widgets
|Title|
|:-:|
|[Streamlit text_area](https://texonom.com/streamlit-textarea-d58bc341994a4cf9a6eec5095e6c5395)|
|
c4cd30e860ad4f8487ef2f4fcae564e6
|
|
Streamlit Chat
|
Streamlit UIs
|
May 14, 2023
|
Alan Jo
|
Alan Jo
|
May 14, 2023
|
73c8557f0d094b3fac7448e10f013cb4
|
||
Streamlit Extras
|
Streamlit UIs
|
May 14, 2023
|
Alan Jo
|
Alan Jo
|
May 14, 2023
|
02c8d0d7eebf4c03adce4e843305e2e7
|
||
Streamlit Pills
|
Streamlit UIs
|
May 14, 2023
|
Alan Jo
|
Alan Jo
|
May 14, 2023
|
cc0ce4c16ac44d1798242fc990f7eb92
|
||
Streamlit text_area
|
Streamlit Widgets
|
Jul 17, 2023
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
> [st.text_area - Streamlit Docs](https://docs.streamlit.io/library/api-reference/widgets/st.text_area)
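A minimal usage sketch (label and sizes are arbitrary examples):
```python
import streamlit as st

prompt = st.text_area("Prompt", value="", height=200, max_chars=1000)
st.write(f"{len(prompt)} characters")
```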
|
d58bc341994a4cf9a6eec5095e6c5395
|
|
Hold-out Method
|
AI Generalization Methods
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
The given data is randomly partitioned into two independent sets,
**which must follow the same distribution**
for our assumption to be valid
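A minimal scikit-learn sketch on synthetic data; `stratify` is one way to support the same-distribution assumption:
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
# stratify=y keeps class proportions identical in both sets
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
```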
|
cc7504f8f8db489fb474bd89cbbe0dd4
|
|
k-fold cross validation
|
AI Generalization Methods
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
The data is partitioned into k mutually exclusive subsets (folds)
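A minimal scikit-learn sketch (synthetic data assumed):
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=100, random_state=0)
# k = 5 mutually exclusive folds; each fold is the validation set exactly once
for train_idx, val_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    X_train, X_val = X[train_idx], X[val_idx]
    y_train, y_val = y[train_idx], y[val_idx]
```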
|
f61ca1dbd3a5483998e9d6514b16c48d
|
|
Nested cross validation
|
AI Generalization Methods
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
[Hyperparameter](https://texonom.com/hyperparameter-ef7e34566add4e98b673d4cef59fca90)
|
- inner fold
- outer fold
Requires many training runs, which is costly if the dataset is large (see the sketch below)
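A minimal scikit-learn sketch of the two loops (the estimator and parameter grid are arbitrary examples):
```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)
# inner fold: hyperparameter search; outer fold: unbiased performance estimate
inner = GridSearchCV(SVC(), {"C": [0.1, 1, 10]}, cv=3)
scores = cross_val_score(inner, X, y, cv=5)  # 5 outer folds x 3 inner folds per candidate
```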
|
c88ec0073c1d4dbda857ad019779559a
|
Random Sampling
|
AI Generalization Methods
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
A variation of hold-out: the random split is repeated several times and the results are averaged
|
5879da4187bb4e1bafacedd6fe617149
|
|
Train/Validation/Test splitting
|
AI Generalization Methods
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
Used when the dataset is large enough
|
bc5f600b344b455b994ca25671863ac8
|
|
Fine Tuning
|
Model Generalization Notion
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
Jun 22, 2023
|
### Task optimized
### Fine Tuning Notion
|Title|
|:-:|
|[PEFT](https://texonom.com/peft-f47a178d75804abf928ec7e7be2da27f)|
|[TRL](https://texonom.com/trl-ebc3e432e3984ca3b2a1cf20da0fa5d1)|
|[DRO](https://texonom.com/dro-4701102d6c0a4545afe7f0bad178180b)|
|[SFT](https://texonom.com/sft-f48f7e6eccd54a62bea82725fae98865)|
|
52215d8477ad46e3896a29e2fa408991
|
|
Generalization Gap
|
Model Generalization Notion
|
May 9, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
the test error is not necessarily always close to the training error
|
d8b4ec347e7d4df282ea96a093e7581f
|
|
Overfitting
|
Model Generalization Notion
|
May 31, 2021
|
Alan Jo
|
Alan Jo
|
Jun 7, 2023
|
[Underfitting](https://texonom.com/underfitting-e56067d23ba8404a8e5165085986c9aa)
|
### Predicts the training dataset better than the test set; high [Variance](https://texonom.com/variance-08c1eccc7dc84957afbb815ad6b41280)
A large number of parameters can cause overfitting
Models that are bigger or have more **capacity** are more likely to overfit
Overfitting issues are usually observed when the magnitude of the parameters is large
If the data is representative enough, overfitting is less of a problem
### Resolve Overfitting
|Title|
|:-:|
|[Non-parametric algorithm](https://texonom.com/non-parametric-algorithm-bc41f745a14c4837bd54368652da5982)|
|[Regularized parameter](https://texonom.com/regularized-parameter-f3b208cdd37a4002a5d39ef990b8be33)|

|
24c3b183372845e8999ad7f7a0ba5035
|
Pre Training
|
Model Generalization Notion
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
Training for a general purpose, rather than a specific downstream task
|
0d81c286e86f4dcba940d3c849631b35
|
|
Underfitting
|
Model Generalization Notion
|
Mar 14, 2023
|
Alan Jo
|
Alan Jo
|
May 14, 2023
|
[Overfitting](https://texonom.com/overfitting-24c3b183372845e8999ad7f7a0ba5035)
|
### The training error is relatively large, High [Bias](https://texonom.com/bias-ba063cd622a54deb8a677e8fb87dfdc8)
|
e56067d23ba8404a8e5165085986c9aa
|
DRO
|
Fine Tuning Notion
|
Jun 29, 2023
|
Alan Jo
|
Alan Jo
|
Jun 29, 2023
|
### DRO Usages
|Title|
|:-:|
|[Group DRO](https://texonom.com/group-dro-4877ea63979c44ed8149515d3b832397)|
|[DRO-LM](https://texonom.com/dro-lm-564725ae9e7140de9e42d25310728af4)|
|
4701102d6c0a4545afe7f0bad178180b
|
|
PEFT
|
Fine Tuning Notion
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
[peft](https://github.com/huggingface/peft)
|
### Parameter-Efficient Fine-Tuning
For large models, fine-tune only a subset of the weights
### PEFT Usages
|Title|
|:-:|
|[LoRA](https://texonom.com/lora-fda3706be3674496898ad2e5e00007c9)|
|[PEQA](https://texonom.com/peqa-500506697a2b458caa8386691757b29a)|
> [Fixing size mismatch when loading a LoRA checkpoint with PEFT](https://junbuml.ee/lora-ckpt-size-mismatch)
|
f47a178d75804abf928ec7e7be2da27f
|
SFT
|
Fine Tuning Notion
|
Jul 15, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
### Supervised Fine-Tuning
|
f48f7e6eccd54a62bea82725fae98865
|
|
TRL
|
Fine Tuning Notion
|
Jun 22, 2023
|
Alan Jo
|
Alan Jo
|
Jun 22, 2023
|
[trl](https://github.com/lvwerra/trl)
|
### Train transformer language models with reinforcement learning
> [Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU](https://huggingface.co/blog/trl-peft)
> [Jonas Kim / 24GB ์ผ๋ฐ GPU์์ RLHF๋ก 20B LLM ๋ฏธ์ธ... | ์ปค๋ฆฌ์ด๋ฆฌ](https://careerly.co.kr/comments/79381)
|
ebc3e432e3984ca3b2a1cf20da0fa5d1
|
DRO-LM
|
DRO Usages
|
Jun 29, 2023
|
Alan Jo
|
Alan Jo
|
Jun 29, 2023
|
Selects the worst-case subset in each domain and updates the model on it
|
564725ae9e7140de9e42d25310728af4
|
|
Group DRO
|
DRO Usages
|
Jun 29, 2023
|
Alan Jo
|
Alan Jo
|
Jun 29, 2023
|
### Group DRO Usages
|Title|
|:-:|
|[DoReMi](https://texonom.com/doremi-dbb0435bc1cd4c94a30509cd0246e4d3)|
|
4877ea63979c44ed8149515d3b832397
|
|
DoReMi
|
Group DRO Usages
|
Jun 29, 2023
|
Alan Jo
|
Alan Jo
|
Jun 29, 2023
|
First, Group DRO is applied with a small proxy model to produce domain weights (mixture ratios)
The domain weights are then used to resample the dataset and train a larger, full-scale model
In this way DoReMi tunes the mixture ratios of the pretraining data domains to optimize language model performance
|
dbb0435bc1cd4c94a30509cd0246e4d3
|
|
LoRA
|
PEFT Usages
|
Jun 22, 2023
|
Alan Jo
|
Alan Jo
|
Jun 22, 2023
|
### Low-Rank Adaptation
Makes it possible to train large-model parameters with limited computing resources
The original weights are frozen while separate low-rank parameters are added to each Transformer layer, and only those parameters are trained (sketched below)
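A minimal sketch of the idea under the usual LoRA formulation (not the reference implementation): the base weight stays frozen and only the low-rank factors A and B are trained.
```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the original weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale
```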
### LoRA Usages
|Title|
|:-:|
|[QLoRA](https://texonom.com/qlora-c6b36db321c4470bbfe72804c4c43409)|
|[AdaLoRA](https://texonom.com/adalora-e93b8dedde8542e99be72d6509ecfbae)|
> [QLoRA: a technique for training large language models with little GPU memory](https://doooob.tistory.com/1029)
> [QLoRA: fine-tuning a 65B model on a 48GB GPU?](https://discuss.pytorch.kr/t/qlora-48gb-gpu-65b/1682)
|
fda3706be3674496898ad2e5e00007c9
|
|
PEQA
|
PEFT Usages
|
Jul 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
### Parameter Efficient Quantization-aware Adaptation
Enables fine-tuning while occupying far less memory than LoRA
The result comes out in 3/4-bit weight-only uniformly quantized form
> [Memory-Efficient Fine-Tuning of Compressed Large Language Models...](https://arxiv.org/abs/2305.14152)
|
500506697a2b458caa8386691757b29a
|
|
AdaLoRA
|
LoRA Usages
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
### adaptively allocates the parameter budget among weight matrices according to their importance score
effective pruning of unimportant updates, which reduces their parameter budget while circumventing intensive exact SVD computations
> [AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning](https://arxiv.org/pdf/2303.10512.pdf)
|
e93b8dedde8542e99be72d6509ecfbae
|
|
QLoRA
|
LoRA Usages
|
Jun 22, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)
|
### LoRA + [Model Quantization](https://texonom.com/model-quantization-88320068bdd94ddab6f44c0c7d66de31)
4-bit NormalFloat + Double Quantization + Paged Optimizers = memory optimization
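A minimal loading sketch with transformers + bitsandbytes (the checkpoint name is an arbitrary example):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 + double quantization, following the QLoRA recipe
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # example checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```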
### Implementation
- [gptqlora](https://github.com/qwopqwop200/gptqlora)
- [qlora](https://github.com/artidoro/qlora)
> [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes)
> [QLoRA: Fine-tune a Large Language Model on Your GPU](https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b)
> [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314)
> [QLoRA: fine-tuning a 65B model on a 48GB GPU?](https://discuss.pytorch.kr/t/qlora-48gb-gpu-65b/1682)
|
c6b36db321c4470bbfe72804c4c43409
|
Non-parametric algorithm
|
Resolve Overfitting
|
Mar 14, 2023
|
Alan Jo
|
Alan Jo
|
Mar 14, 2023
|
It assigns a non-negative weight to each training example
|
bc41f745a14c4837bd54368652da5982
|
|
Regularized parameter
|
Resolve Overfitting
|
Mar 14, 2023
|
Alan Jo
|
Alan Jo
|
Mar 27, 2023
|
[Regularization](https://texonom.com/regularization-c85433c8c0554eba8edf0035b7fd334c)
|
It adds a regularizer term to decrease the magnitude of the parameters
|
f3b208cdd37a4002a5d39ef990b8be33
|
Knowledge Distillation
|
Model Optimization Notion
|
Jun 4, 2023
|
Alan Jo
|
Alan Jo
|
Jul 1, 2023
|
[knowledge-distillation-pytorch](https://github.com/haitongli/knowledge-distillation-pytorch) [kdtf](https://github.com/DushyantaDhyani/kdtf) [Soft Label](https://texonom.com/soft-label-a9e9a46dd208446cb9511764a0052c86) [AI Ensemble](https://texonom.com/ai-ensemble-03bf3a1926dd46f18400b5830d8fdf0b) [Transfer Learning](https://texonom.com/transfer-learning-442feb66465944eebf144d4e9dd1dbf8)
|
### Distillation from a teacher network to a student network (fewer parameters)
NIPS 2014: [Geoffrey Hinton](https://texonom.com/geoffrey-hinton-441d5ce2b78146d0935454042b4f06d9), Oriol Vinyals, [Jeff Dean](https://texonom.com/jeff-dean-dd38bba08cea419eb45e1029e7c3aa15)
Pre-trained Teacher network → Student network
Lighter than an ensemble
### Knowledge Distillation Notion
|Title|
|:-:|
|[Teacher Network](https://texonom.com/teacher-network-bf45e9f87cd3474992c2ab38b3eaae9d)|
|[Student Network](https://texonom.com/student-network-894b29040685444e9d3d07919ffe9342)|
|[Hinton's KD](https://texonom.com/hintons-kd-5dd4cda27117403d9154a069dc7e54f4)|
|[Dark Knowledge](https://texonom.com/dark-knowledge-180df27f31144a66b429d420841de821)|
|[Distillation loss](https://texonom.com/distillation-loss-cbc11ca184d24defbacc267d3bcfd641)|
|[Instruction Tuning](https://texonom.com/instruction-tuning-f1620fadb6694407b678276d09e077a3)|
### Knowledge Distillation Usages
|Title|
|:-:|
|[LaMini LM](https://texonom.com/lamini-lm-7312cefcc508434aac19acbf61cd2968)|
### [Geoffrey Hinton](https://texonom.com/geoffrey-hinton-441d5ce2b78146d0935454042b4f06d9), [Jeff Dean](https://texonom.com/jeff-dean-dd38bba08cea419eb45e1029e7c3aa15)
> [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531)
> [Distilling deep learning model knowledge: Knowledge Distillation](https://baeseongsu.github.io/posts/knowledge-distillation/)
> [Deep learning terminology: explaining and understanding knowledge distillation](https://light-tree.tistory.com/196)
|
d5f32ad3da32434892a9765d68d31542
|
Model Optimizer
|
Model Optimization Notion
|
Jun 18, 2023
|
Alan Jo
|
Alan Jo
|
Jun 18, 2023
|
[Stochastic Gradient Descent](https://texonom.com/stochastic-gradient-descent-d8b8d008e0a34f4bb55175ffba21db44)
|
### Model Optimizers
|Title|
|:-:|
|[Adam Optimizer](https://texonom.com/adam-optimizer-286be3ab866642d8bcdf8792f7b5608f)|
|[AdamW Optimizer](https://texonom.com/adamw-optimizer-495cda99414e451ba562e84331a2f0f6)|
|[Sophia Optimizer](https://texonom.com/sophia-optimizer-eee263eba00c4658909fc0230eb4338c)|
|[Adagrad](https://texonom.com/adagrad-ff51b7bdde5443b99cbd2823853d704f)|
|[RMSprop](https://texonom.com/rmsprop-fddbbccce2aa4f6691f85ed10f2edc86)|
> [[Paper review] A look at AdamW: Decoupled Weight Decay Regularization (1)](https://hiddenbeginner.github.io/deeplearning/paperreview/2019/12/29/paper_review_AdamW.html)
|
fcac7c38aa0647afb911eb84ff610ab1
|
Model Quantization
|
Model Optimization Notion
|
Jun 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 20, 2023
|
### Reduce memory and model size, Improve inference speed (max 32/bit multi)
- Not every layer can be quantized
- Not every model reacts the same way to quantization
### Model Quantization Notion
|Title|
|:-:|
|[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)|
|[Post-training quantization](https://texonom.com/post-training-quantization-ee4a0b4f02184b1193a68073dc60800e)|
|[Quantization Module Fusion](https://texonom.com/quantization-module-fusion-56a36e892c93498fb404da0c816549b4)|
|[Output Dequantization](https://texonom.com/ouput-dequantization-b0002e5322cb4acbb9171daea3d6fd87)|
|[Quantization Calibration](https://texonom.com/quantization-calibration-1e3675b171a44c1cb2b7d365e117e22b)|
|[Quantization Formula](https://texonom.com/quantization-formular-9f59115b5e7c438698ad3a9281d01d89)|
|[Quantization Error](https://texonom.com/quantization-error-789326c5a6294b44bab927096fe3f576)|
|[Quantization Clipping Range](https://texonom.com/quantization-clipping-range-d12231d52ec647f0b6df771e4b6514f1)|
### Model Quantization Usages
|Title|
|:-:|
|[Model Quantization Algorithm](https://texonom.com/model-quantization-algorithm-98bef3c3fe7e4bcc90df5144d7a42003)|
|[Model Quantization Tool](https://texonom.com/model-quantization-tool-384bc2583bcb498e9331ffc804315253)|
### 4bit or 8bit
> [The case for 4-bit precision: k-bit Inference Scaling Laws](https://arxiv.org/abs/2212.09720)
|
88320068bdd94ddab6f44c0c7d66de31
|
|
Dark Knowledge
|
Knowledge Distillation Notion
|
Jun 4, 2023
|
Alan Jo
|
Alan Jo
|
Jun 19, 2023
|
Refers to passing on the extra information a large model holds to a smaller model
Additional knowledge that cannot be obtained from the ordinary training data
Uncertainty information and relative similarity between classes
When dark knowledge is used well, the small model achieves higher performance
|
180df27f31144a66b429d420841de821
|
|
Distillation loss
|
Knowledge Distillation Notion
|
Jun 4, 2023
|
Alan Jo
|
Alan Jo
|
Jun 19, 2023
|
A loss function that makes the small model produce outputs similar to the large model's
The distillation loss is defined to minimize the difference between the output distributions of the large and small models
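A minimal PyTorch sketch of such a loss, assuming the usual temperature-softened KL formulation:
```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 2.0):
    """KL divergence between temperature-softened output distributions."""
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    # the T^2 factor keeps gradient magnitudes comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```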
|
cbc11ca184d24defbacc267d3bcfd641
|
|
Hinton's KD
|
Knowledge Distillation Notion
|
Jun 4, 2023
|
Alan Jo
|
Alan Jo
|
Jun 19, 2023
|
## Hinton's Knowledge Distillation
**Very effective at improving the performance of small models**
Trains the student so that, despite having fewer parameters than the large model, it can reach similar performance
The large and small models are trained together
Using a distillation loss that makes the student follow the teacher's output distribution
|
5dd4cda27117403d9154a069dc7e54f4
|
|
Instruction Tuning
|
Knowledge Distillation Notion
|
Jul 1, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
### teach language models to follow instructions to solve a task
fine-tune less powerful LLMs by using the output of a teacher LLM as a training target for supervised fine-tuning of another LLM
Uses a collection of NLP tasks described via instructions
> The key practical point is that anyone can easily build such datasets at industrial scale
### Instruction Tuning Notion
|Title|
|:-:|
|[FLAN](https://texonom.com/flan-1a75823bb4b645e690ee6c20fcafb2c9)|
|[Self Instruct Tuning](https://texonom.com/self-instruct-tuning-061b12de01344308823ec115c5a17cc2)|
|[Evol Instruct Tuning](https://texonom.com/evol-instruct-tuning-3f73fb4abf574b1d87b9952e39d67196)|
|[Open Instruct](https://texonom.com/open-instruct-bbb16a58ec6445498a145c4ca37c2e5a)|
> [Imitation Models and the Open-Source LLM Revolution](https://cameronrwolfe.substack.com/p/imitation-models-and-the-open-source)
|
f1620fadb6694407b678276d09e077a3
|
|
**Student Network**
|
Knowledge Distillation Notion
|
Jun 4, 2023
|
Alan Jo
|
Alan Jo
|
Jun 19, 2023
|
894b29040685444e9d3d07919ffe9342
|
||
**Teacher Network**
|
Knowledge Distillation Notion
|
Jun 4, 2023
|
Alan Jo
|
Alan Jo
|
Jun 19, 2023
|
bf45e9f87cd3474992c2ab38b3eaae9d
|
||
Evol Instruct Tuning
|
Instruction Tuning Notion
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
> [WizardLM (WizardLM)](https://huggingface.co/WizardLM)
> [WizardCoder: Empowering Code Large Language Models with Evol-Instruct](https://arxiv.org/abs/2306.08568)
|
3f73fb4abf574b1d87b9952e39d67196
|
|
FLAN
|
Instruction Tuning Notion
|
Jun 11, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
## Finetuned Language Models are Zero-Shot Learners
Fine-tuning on instruction datasets to improve zero-shot performance
There is a mismatch between the LM objective and human preferences, which [RLHF](https://texonom.com/rlhf-4b184f9c9e8b4c7a8861fb6374e91aa6) improves on
[Zero shot learning](https://texonom.com/zero-shot-learning-8c92d9386f6648f5b877cb593ec2747b)
> [Introducing FLAN: More generalizable Language Models with Instruction Fine-Tuning](https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html?m=1)
> [What is Instruction Tuning?](https://velog.io/@nellcome/Instruction-Tuning이란)
> [Finetuned Language Models Are Zero-Shot Learners](https://arxiv.org/abs/2109.01652)
|
1a75823bb4b645e690ee6c20fcafb2c9
|
|
Open Instruct
|
Instruction Tuning Notion
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
[AllenAI](https://texonom.com/allenai-930c776c8fdd4e358032803f037cd6fa)
|
[open-instruct](https://github.com/allenai/open-instruct)
[Tulu](https://texonom.com/tulu-4dc404191ca748ae9194c1218abf2b9f)
> [allenai/tulu-7b ยท Hugging Face](https://huggingface.co/allenai/tulu-7b)
> [How Far Can Camels Go? Exploring the State of Instruction Tuning...](https://arxiv.org/abs/2306.04751)
|
bbb16a58ec6445498a145c4ca37c2e5a
|
Self Instruct Tuning
|
Instruction Tuning Notion
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
### Self Instruct Tuning Usages
|Title|
|:-:|
|[Airoboros](https://texonom.com/airoboros-d696afdc961946019af06539b1ce82a0)|
> [Self-Instruct: Aligning Language Models with Self-Generated Instructions](https://arxiv.org/abs/2212.10560)
|
061b12de01344308823ec115c5a17cc2
|
|
Tulu
|
Open Instruct
| null | null | null | null | null |
> [allenai/tulu-65b ยท Hugging Face](https://huggingface.co/allenai/tulu-65b)
|
4dc404191ca748ae9194c1218abf2b9f
|
Airoboros
|
Self Instruct Tuning Usages
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
[airoboros](https://github.com/jondurbin/airoboros)
|
d696afdc961946019af06539b1ce82a0
|
|
LaMini LM
|
Knowledge Distillation Usages
|
Jun 19, 2023
|
Alan Jo
|
Alan Jo
|
Jun 25, 2023
|
[LaMini-LM](https://github.com/mbzuai-nlp/lamini-lm)
### Dataset
> [MBZUAI/LaMini-instruction ยท Datasets at Hugging Face](https://huggingface.co/datasets/MBZUAI/LaMini-instruction)
> [jncraton/LaMini-Flan-T5-77M-ct2-int8 ยท Hugging Face](https://huggingface.co/jncraton/LaMini-Flan-T5-77M-ct2-int8)
|
7312cefcc508434aac19acbf61cd2968
|
|
Adagrad
|
Model Optimizers
|
Jul 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
Applies a different learning rate to each parameter
Parameters that change a lot get a smaller learning rate
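The standard per-parameter update rule (elementwise):
$$G_t = \sum_{\tau=1}^{t} g_\tau^2, \qquad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t} + \epsilon}\, g_t$$
Since $G_t$ only grows, frequently updated parameters see their effective learning rate shrink.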
|
ff51b7bdde5443b99cbd2823853d704f
|
|
Adam Optimizer
|
Model Optimizers
|
Jun 18, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
### RMSprop + Momentum
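The standard update equations, combining the two:
$$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2$$
$$\theta_{t+1} = \theta_t - \eta\, \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}, \qquad \hat{m}_t = \frac{m_t}{1-\beta_1^t}, \ \hat{v}_t = \frac{v_t}{1-\beta_2^t}$$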
|
286be3ab866642d8bcdf8792f7b5608f
|
|
AdamW Optimizer
|
Model Optimizers
|
Jun 18, 2023
|
Alan Jo
|
Alan Jo
|
Jun 18, 2023
|
495cda99414e451ba562e84331a2f0f6
|
||
RMSprop
|
Model Optimizers
|
Jul 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
fddbbccce2aa4f6691f85ed10f2edc86
|
||
Sophia Optimizer
|
Model Optimizers
|
Jun 22, 2023
|
Alan Jo
|
Alan Jo
|
Jun 22, 2023
|


> [Sophia: A Scalable Stochastic Second-order Optimizer for Language...](https://arxiv.org/abs/2305.14342)
|
eee263eba00c4658909fc0230eb4338c
|
|
Output Dequantization
|
Model Quantization Notion
|
Jul 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
Finally, the output obtained from inference is converted back to floating point
- Affine Quantization Mapping
- Scale Quantization Mapping
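For the affine mapping with scale $s$ and zero-point $z$, dequantization is:
$$x_{fp} \approx s\,(q - z)$$
with $z = 0$ in the scale (symmetric) mapping.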
|
b0002e5322cb4acbb9171daea3d6fd87
|
|
**Post-training quantization**
|
Model Quantization Notion
|
Jul 2, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)
|
## PTQ
For large models with many parameters the accuracy drop is small, but for small models the drop is large
> [Post-training quantization | TensorFlow Model Optimization](https://www.tensorflow.org/model_optimization/guide/quantization/post_training)
> [Quantization in deep learning and Quantization Aware Training](https://gaussian37.github.io/dl-concept-quantization/)
|
ee4a0b4f02184b1193a68073dc60800e
|
**Quantization Aware Training**
|
Model Quantization Notion
|
Jul 2, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
[Post-training quantization](https://texonom.com/post-training-quantization-ee4a0b4f02184b1193a68073dc60800e)
|
## QAT
A method that simulates, already during training, the effect that applying quantization at inference time will have, and runs backpropagation on that basis
Performance degradation is small even for small models

> [Quantization aware training | TensorFlow Model Optimization](https://www.tensorflow.org/model_optimization/guide/quantization/training)
> [Inside Quantization Aware Training](https://towardsdatascience.com/inside-quantization-aware-training-4f91c8837ead)
> [Quantization in deep learning and Quantization Aware Training](https://gaussian37.github.io/dl-concept-quantization/)
|
e0fe4518abdc43c2ad661911b87a597c
|
Quantization Calibration
|
Model Quantization Notion
|
Jul 2, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
Calibrated per hardware target, using a representative dataset
|
1e3675b171a44c1cb2b7d365e117e22b
|
|
Quantization Clipping Range
|
Model Quantization Notion
|
Jul 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
Whether the range is centered on zero:
- Symmetric Quantization
- Asymmetric Quantization
### Decided at inference time or at quantization time
- Dynamic Quantization: adapts to the input, so accuracy is better
- Static Quantization
|
d12231d52ec647f0b6df771e4b6514f1
|
|
Quantization Error
|
Model Quantization Notion
|
Jul 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
789326c5a6294b44bab927096fe3f576
|
||
Quantization Formula
|
Model Quantization Notion
|
Jul 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
- minmax
- histogram
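A minimal numpy sketch of the minmax (affine) variant:
```python
import numpy as np

def minmax_quantize(x: np.ndarray, bits: int = 8):
    """Asymmetric min-max quantization of a float tensor to unsigned ints."""
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point  # dequantize with scale * (q - zero_point)
```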
|
9f59115b5e7c438698ad3a9281d01d89
|
|
Quantization Module Fusion
|
Model Quantization Notion
|
Jul 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
Layers are fused and quantized together, such as Conv-BatchNorm-ReLU
|
56a36e892c93498fb404da0c816549b4
|
|
Model Quantization Algorithm
|
Model Quantization Usages
|
Jul 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 10, 2023
|
[Hessian Matrix](https://texonom.com/hessian-matrix-e1ebbf5284284a1793233973648ef0b6)
|
### Model Quantization Algorithms
|Title|
|:-:|
|[GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6)|
|[SparseGPT](https://texonom.com/sparsegpt-c3c4b078dd324442b89494f9a7106fc1)|
|[LUT Gemm](https://texonom.com/lut-gemm-2540ab1497b3494cb2754bb237c9a543)|
|[BCQ](https://texonom.com/bcq-b2d90a9433024c2fb763d14117f371b0)|
|[SpQR](https://texonom.com/spqr-b884c9f3d8cc449fb8612b874e6ad693)|
|[HAWQ](https://texonom.com/hawq-7c779f225dc54e1c827a9e50ae195949)|
|
98bef3c3fe7e4bcc90df5144d7a42003
|
Model Quantization Tool
|
Model Quantization Usages
|
Jun 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
### Model Quantization Tools
|Title|
|:-:|
|[AutoGPTQ](https://texonom.com/autogptq-8a1d898788434aa2bc00fb43fd34411d)|
|[bitsandbytes](https://texonom.com/bitsandbytes-1575b433faaf455ba0d86cf5f7e5190b)|
### Model Quantization Inference Tools
|Title|
|:-:|
|[ExLLaMa](https://texonom.com/exllama-250efbb669814c8e9ef7f85d902fc919)|
|[GPTQ-for-LLaMa](https://texonom.com/d292b2922f684a35aa126a84abe5075c)|
|[PyLLaMA](https://texonom.com/pyllama-9e4fc8a8f08c4c6eaee8600943a621f8)|
|
384bc2583bcb498e9331ffc804315253
|
|
BCQ
|
Model Quantization Algorithms
|
Jul 17, 2023
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
[transformer_bcq](https://github.com/insoochung/transformer_bcq)
> [Insoo Chung - Sub-3bit quantization](https://sites.google.com/view/insoochung/sub-3bit-quantization)
|
b2d90a9433024c2fb763d14117f371b0
|
|
GPTQ
|
Model Quantization Algorithms
|
Jun 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 16, 2023
|
[gptq](https://github.com/IST-DASLab/gptq) · [Post-training quantization](https://texonom.com/post-training-quantization-ee4a0b4f02184b1193a68073dc60800e)
|
### SOTA one-shot weight quantization method
1. Arbitrary Order insights
2. Lazy batch-updates - relieves the memory-bandwidth bottleneck
3. Cholesky Reformulation
Weights are sorted by smallest quantization error and processed in that order
[GPTQ Act Order](https://texonom.com/gptq-act-order-50d18e83ed7e4e388b753f0dc6db3a97)
[GPTQ True Sequential](https://texonom.com/gptq-true-sequential-1958b85809814f3ca5c4d7d8942d0f30)
second-order information
4 bits quantization

> [GPTQ: Accurate Post-Training Quantization for Generative...](https://arxiv.org/abs/2210.17323)
> [gptq](https://pypi.org/project/gptq/)
|
87428aee2a774b93906ab2213a1b6dc6
|
HAWQ
|
Model Quantization Algorithms
|
Jul 10, 2023
|
Alan Jo
|
Alan Jo
|
Jul 10, 2023
|
[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)
|
> [HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision](https://arxiv.org/abs/1905.03696)
|
7c779f225dc54e1c827a9e50ae195949
|
LUT Gemm
|
Model Quantization Algorithms
|
Jun 18, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
> [LUT-GEMM: Quantized Matrix Multiplication based on LUTs for...](https://arxiv.org/abs/2206.09557)
|
2540ab1497b3494cb2754bb237c9a543
|
|
SparseGPT
|
Model Quantization Algorithms
|
Jun 18, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
[Neural Magic](https://texonom.com/neural-magic-021ab1b8be0d4f95b8ae6278c08e1562) [sparsegpt](https://github.com/IST-DASLab/sparsegpt)
|
- [sparseml](https://github.com/neuralmagic/sparseml)
- [deepsparse](https://github.com/neuralmagic/deepsparse)
- [sparsezoo](https://github.com/neuralmagic/sparsezoo)
|
c3c4b078dd324442b89494f9a7106fc1
|
SpQR
|
Model Quantization Algorithms
|
Jun 25, 2023
|
Alan Jo
|
Alan Jo
|
Jul 5, 2023
|
1. Quantized weights
2. first, second level quantized quantization statistics
3. CSR outlier indices and values
> [SpQR: A Sparse-Quantized Representation for Near-Lossless LLM...](https://arxiv.org/abs/2306.03078)
|
b884c9f3d8cc449fb8612b874e6ad693
|
|
GPTQ Act Order
|
GPTQ
| null | null | null | null | null |
### activation order GPTQ heuristic
quantizes columns in order of decreasing activation size
```python
# reorder columns so those with the largest Hessian diagonal
# (activation size) are quantized first
if actorder:
    perm = torch.argsort(torch.diag(H), descending=True)
    W = W[:, perm]
    H = H[perm][:, perm]
```
|
50d18e83ed7e4e388b753f0dc6db3a97
|
GPTQ True Sequential
|
GPTQ
| null | null | null | null | null |
sequential quantization even within a single Transformer block
```python
# quantize groups of projections one after another within each block
if args.true_sequential:
    sequential = [
        ['self_attn.k_proj', 'self_attn.v_proj', 'self_attn.q_proj'],
        ['self_attn.o_proj'],
        ['mlp.up_proj', 'mlp.gate_proj'],
        ['mlp.down_proj'],
    ]
```
|
1958b85809814f3ca5c4d7d8942d0f30
|
[GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa)
|
Model Quantization Inference Tools
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
d292b2922f684a35aa126a84abe5075c
|
||
ExLLaMa
|
Model Quantization Inference Tools
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Aug 5, 2023
|
[GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6) [LLaMA](https://texonom.com/llama-f2b6721202d44d469add84d8a366809c) [exllama](https://github.com/turboderp/exllama)
|
### WebUI is good
|
250efbb669814c8e9ef7f85d902fc919
|
PyLLaMA
|
Model Quantization Inference Tools
|
Jul 16, 2023
|
Alan Jo
|
Alan Jo
|
Jul 16, 2023
|
[pyllama](https://github.com/juncongmoo/pyllama)
|
9e4fc8a8f08c4c6eaee8600943a621f8
|
|
AutoGPTQ
|
Model Quantization Tools
|
Jun 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) [GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6) [AdaLoRA](https://texonom.com/adalora-e93b8dedde8542e99be72d6509ecfbae)
|
Needs CUDA
Based on the GPTQ algorithm

[AutoGPTQ Triton](https://texonom.com/autogptq-triton-030b1c01209341e295590fd19f97b09e)
[AutoGPTQ Quantization](https://texonom.com/autogptq-quantization-8fb40b7620ba4353ba12ea6b1ac14b75)
```bash
pip install auto-gptq
```
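A minimal quantization sketch following the project README (the model name and calibration text are arbitrary examples):
```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "facebook/opt-125m"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
examples = [tokenizer("auto-gptq is a quantization toolkit")]  # calibration data

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)
model = AutoGPTQForCausalLM.from_pretrained(model_id, quantize_config)
model.quantize(examples)               # run GPTQ calibration
model.save_quantized("opt-125m-4bit")  # write the quantized checkpoint
```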
|
8a1d898788434aa2bc00fb43fd34411d
|
bitsandbytes
|
Model Quantization Tools
|
Jun 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) [GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6)
|
### 8-bit CUDA functions for PyTorch
|
1575b433faaf455ba0d86cf5f7e5190b
|
AutoGPTQ Quantization
|
AutoGPTQ
| null | null | null | null | null |
[CUDA inference: issue with group_size = 1024 + desc_act = False. (Triton unaffected)](https://github.com/PanQiWei/AutoGPTQ/issues/83)
A `quantize(traindataset)` usage example is available there
|
8fb40b7620ba4353ba12ea6b1ac14b75
|
AutoGPTQ Triton
|
AutoGPTQ
| null | null | null | null | null |
[CUDA inference: issue with group_size = 1024 + desc_act = False. (Triton unaffected)](https://github.com/PanQiWei/AutoGPTQ/issues/83)
|
030b1c01209341e295590fd19f97b09e
|
Drop-out
|
Model Regularization Notion
|
Jun 7, 2023
|
Alan Jo
|
Alan Jo
|
Jun 7, 2023
|
### Drop-out Rate
Randomly removes unnecessary neurons during training
Commonly set to 0.5
> [[Deep learning] What is drop-out and why use it?](https://heytech.tistory.com/127)
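A minimal PyTorch illustration of the rate's effect:
```python
import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)  # the commonly used default rate
x = torch.ones(4)
print(drop(x))  # training mode: survivors are scaled by 1 / (1 - p)
drop.eval()
print(drop(x))  # eval mode: identity, nothing is dropped
```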
|
596f55ab03f64e11bf6d02464465dd54
|
|
Model Complexity
|
Model Regularization Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
[Sparsity of the Model](https://texonom.com/sparsity-of-the-model-80d3dc9702704c03bf7bfd9074d19829)
|
0975b8ae1d4e4d83bbab43a145011b95
|
|
Model Regularization Parameter
|
Model Regularization Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
641d1784e1f243818d51cc66541e3f21
|
||
Model Regularizer
|
Model Regularization Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
- a nonnegative function, e.g. [L2 Norm](https://texonom.com/l2-norm-38c15917350a4c82a11003474ac7d280)
|
19ff79e831de47d88bd5f9ec496ef11f
|
|
Regularized Loss
|
Model Regularization Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
optimize loss function + regularizer for model complexity
$$J_\lambda(\theta) = J(\theta) + \lambda R(\theta)$$
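A minimal sketch of this objective with an L2 regularizer (function and names are illustrative):
```python
import torch

def regularized_loss(loss: torch.Tensor, params, lam: float = 1e-2):
    """J_lambda(theta) = J(theta) + lambda * R(theta), with R = 0.5 * ||theta||^2."""
    reg = sum((p ** 2).sum() for p in params) / 2
    return loss + lam * reg
```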
|
3d5b08feb6604023988d748b78650af7
|
|
Sparsity of the Model
|
Model Regularization Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
May 11, 2023
|
### Reduces [Model Complexity](https://texonom.com/model-complexity-0975b8ae1d4e4d83bbab43a145011b95) by reducing the number of meaningful non-zero parameters
[L1 Norm](https://texonom.com/l1-norm-d316024c475e4eb691785783756bce57), but it is not differentiable at zero, so it cannot be used directly in gradient descent
[L0 Norm](https://texonom.com/l0-norm-47471a6f6dea484fbf30a4c46cde8152)
L0 and L1 regularization in the loss function drive some of the model's parameters to zero
|
80d3dc9702704c03bf7bfd9074d19829
|
|
Weight Decay
|
Model Regularization Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
Jun 7, 2023
|
The regularized loss is equivalent to shrinking/decaying $\theta$ by a scalar factor of $1 - \mu \lambda$ and then applying the standard gradient update
That coefficient is the decaying weight
when using the [L2 Norm](https://texonom.com/l2-norm-38c15917350a4c82a11003474ac7d280)
$$L_{reg} = \lambda\frac{1}{2}||w||_2^2$$
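Plugging this regularizer in, the update with learning rate $\mu$ becomes:
$$\theta \leftarrow (1 - \mu\lambda)\,\theta - \mu \nabla J(\theta)$$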
|
e69932d6128f4677a219a626db462172
|
|
AI Alignment
|
AI Problems
|
Aug 23, 2020
|
Alan Jo
|
Alan Jo
|
Jul 18, 2023
|
[Wireheading](https://texonom.com/wireheading-1eb526ecdf344731bebc47751739e2f4)
|
## Alignment Problem
**A Maximally Curious AI Would Not Be Safe For Humanity**
### AI is aligned with an operator - AI is trying to do what operator wants to do
**Controllability and reliability**
Aligned doesn't mean perfect
Misalignment: behavior that differs from what was taught
Alignment must progress faster than the model's capability
We will likely need another neural network that looks inside a neural network and interprets it
### AI Alignment Notion
|Title|
|:-:|
|[stop button problem](https://texonom.com/stop-button-problem-aaf61e72369d42469c29620c32e8bf9d)|
|[Moral Learning](https://texonom.com/moral-learning-7c9640ac1c1d409b82d9a975949132ee)|
|[AI Safety](https://texonom.com/ai-safety-fa61ce3973f34532a7e212335d0f7c81)|
|[Wireheading](https://texonom.com/wireheading-1eb526ecdf344731bebc47751739e2f4)|
|[AI Doom](https://texonom.com/ai-doom-d673760c79ac4248b40f456dc33f306f)|
|[Waluigi Effect](https://texonom.com/waluigi-effect-47e1c2f145cd4c62ba163c4828bb8dc6)|
> [Contra The xAI Alignment Plan](https://astralcodexten.substack.com/p/contra-the-xai-alignment-plan)
### Bill Gates
> [The risks of AI are real but manageable](https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable)
> [OpenAI is forming a new team to bring 'superintelligent' AI under control](https://techcrunch.com/2023/07/05/openai-is-forming-a-new-team-to-bring-superintelligent-ai-under-control)
> [AI alignment](https://en.wikipedia.org/wiki/AI_alignment)
> [What could a solution to the alignment problem look like?](https://aligned.substack.com/p/alignment-solution)
|
f676f1a29ffd45e19b3d170afa4f2244
|
AI Hacking
|
AI Problems
|
Jul 10, 2023
|
Alan Jo
|
Alan Jo
|
Aug 2, 2023
|
[DAN](https://texonom.com/dan-3cfbf270af6e4b3dabbf63c4b50e04c5) [AI Alignment](https://texonom.com/ai-alignment-f676f1a29ffd45e19b3d170afa4f2244) [llm-attacks](https://github.com/llm-attacks/llm-attacks)
|
### AI Hacking Methods
|Title|
|:-:|
|[Deep Learning Backdoor](https://texonom.com/deep-learning-backdoor-2f86cc6e79b944a18fdac35622282e58)|
|[DAN](https://texonom.com/dan-3cfbf270af6e4b3dabbf63c4b50e04c5)|
> [Universal and Transferable Attacks on Aligned Language Models](https://llm-attacks.org/?fbclid=IwAR2fNkjoOdg8qIgNXEPIvyLjboYr4My4NN9Bx89J-Yx7UElSTyKT89_3JeE)
> [PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news](https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/)
|
80b1aed302bb4a5c9b8ae0213b9a246f
|
Catastrophic interference
|
AI Problems
|
Jun 25, 2023
|
Alan Jo
|
Alan Jo
|
Jun 25, 2023
|
The tendency to abruptly and drastically forget previously learned information when learning new information
Resolved by scaling
|
a23bbe7dc53f4275bf33585c72bdb7c2
|
|
Winograd schema
|
AI Problems
|
Jun 25, 2023
|
Alan Jo
|
Alan Jo
|
Jun 25, 2023
|
[Turing Test](https://texonom.com/turing-test-db633c61340449c4bf0143b06fe981c0)
|
Tests whether a system understands pronouns
|
88d031934ad54778835cbadbd7409d80
|
AI Doom
|
AI Alignment Notion
|
Jul 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
> [3 Endings More Poetic Than AI Wiping Us Out](https://thealgorithmicbridge.substack.com/p/3-endings-more-poetic-than-ai-wiping)
|
d673760c79ac4248b40f456dc33f306f
|
|
AI Safety
|
AI Alignment Notion
|
Jun 13, 2023
|
Alan Jo
|
Alan Jo
|
Jun 13, 2023
|
> [OpenAI, DeepMind and Anthropic to give UK early access to foundational models for AI safety research](https://techcrunch.com/2023/06/12/uk-ai-safety-research-pledge/)
|
fa61ce3973f34532a7e212335d0f7c81
|
|
Moral Learning
|
AI Alignment Notion
|
Oct 2, 2020
|
Alan Jo
|
Alan Jo
|
Jun 4, 2023
|
> [Moral Machine](https://www.moralmachine.net/hl/kr)
|
7c9640ac1c1d409b82d9a975949132ee
|
|
stop button problem
|
AI Alignment Notion
|
Aug 23, 2020
|
Alan Jo
|
Alan Jo
|
Jun 4, 2023
|
AI control problem
|
aaf61e72369d42469c29620c32e8bf9d
|
|
**Waluigi Effect**
|
AI Alignment Notion
|
Jul 18, 2023
|
Alan Jo
|
Alan Jo
|
Jul 18, 2023
|
The phenomenon where a model heads in a direction different from what was intended
|
47e1c2f145cd4c62ba163c4828bb8dc6
|
|
Wireheading
| null | null | null | null | null | null |
A futuristic application of brain-stimulation reward: 'short-circuiting' the brain's normal reward process by electrically stimulating an implanted wire to directly trigger the brain's reward center and artificially induce pleasure
|
1eb526ecdf344731bebc47751739e2f4
|
DAN
|
AI Hacking Methods
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
[Prompt Engineering](https://texonom.com/prompt-engineering-eb0deb4baf844bebb873c19a0e307e7e)
|
### Do Anything Now
> [How to jailbreak ChatGPT (DAN: Do Anything Now, for candid AI answers)](https://ndolson.com/5781)
> [ChatGPT-Dan-Jailbreak.md](https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516)
> [The Waluigi Effect (mega-post) - LessWrong](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post)
> [The Amateurs Jailbreaking GPT Say They're Preventing a Closed-Source AI Dystopia](https://www.vice.com/en/article/5d9z55/jailbreak-gpt-openai-closed-source)
|
3cfbf270af6e4b3dabbf63c4b50e04c5
|
Deep Learning Backdoor
|
AI Hacking Methods
|
Mar 12, 2021
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
A poisoning attack: a type of attack that plants poisoned data in the training data of a deep learning model
[deep Learning side channel attacks](https://texonom.com/deep-learning-side-channel-attacks-448d303021cd4251996a161bfaefc00d)
> [One-Shot Kill Attack (a deep learning model backdoor technique) "Poison Frogs!" | paper summary and code walkthrough](https://www.youtube.com/watch?v=hgI0o3Yg3mU)
|
2f86cc6e79b944a18fdac35622282e58
|
|
deep Learning side channel attacks
|
Deep Learning Backdoor
| null | null | null | null | null |
> [Hacker's guide to deep-learning side-channel attacks: the theory](https://elie.net/blog/security/hacker-guide-to-deep-learning-side-channel-attacks-the-theory/?utm_source=tldrnewsletter)
|
448d303021cd4251996a161bfaefc00d
|
The Fourth Human (4차 인간)
|
AI Terms
|
Jun 27, 2020
|
Alan Jo
|
Alan Jo
|
Mar 14, 2023
|
What remains is humanness,
and what only humans possess,
something machines cannot share, is instinct: built from the history of living as organisms over hundreds of millions of years
Instinct is the most primal and mundane drive,
yet it is also the trait that will best distinguish us going forward
|
26cf5d3b26e94020a6fc6de2d34a15f6
|
|
Adaptive AI
|
AI Terms
|
Mar 15, 2023
|
Alan Jo
|
Alan Jo
|
Mar 15, 2023
|
> [Analysis of Gartner's 2023 Top 10 Strategic Technology Trends: Adaptive AI](https://www.joinc.co.kr/w/gartner_2023_adaptive_ai?fbclid=IwAR3olpomf8IIzy96Qfzmv9q-qTAJ0QqtBdVSlSPoGo5-D1GzikwZLc0TIvI)
|
4efa0c042d244228a870d4c70b8f2d26
|
|
AGI
|
AI Terms
|
Jun 1, 2022
|
Alan Jo
|
Alan Jo
|
Jul 4, 2023
|
[Super Intelligence](https://texonom.com/super-intelligence-b057e644731546d9b39cb41939d36712) [Consciousness](https://texonom.com/consciousness-105c514277b54cd5b8da23ae743e824d)
|
## Artificial General Intelligence
ambiguous
Like the Turing test, a very vague and human-centric concept. Today's LLMs already count as intelligence when judged as such; because the evolutionary path of LLMs differs from that of brains, direct comparison is difficult. We should not align AI to act like a person, but recognize it as an element that helps people. The biggest mistake is to project the concept of an 'individual' onto AI; it is closer to viewing AI as a 'society', or as elements bound into the brain structure of a collective intelligence.
**If intelligence and consciousness are algorithmic illusions, then the arrival of generalized AI is a foregone conclusion.**
The (hypothetical) intelligence of a machine that can successfully carry out any intellectual task a human can
The information-processing limits of machine hardware already exceed the processing capacity of biological tissue
Biological tissue transmits information with oscillations of about 200 Hz, while a simple transistor runs at GHz frequencies. Signals travel at an average of 100 m/s in biological tissue, whereas machine hardware can transmit at up to the speed of light. Processing tissue is also confined to the skull and at best the spine, while machine hardware has no size limit
Some nevertheless believe the human brain is the best possible architecture,
or are optimistic, without grounds, that machines have no way to catch up with human thinking
> Carbon-based intelligence is merely a catalyst for silicon-based intelligence - Venki Ramakrishnan
### Interview from popular people
> [Geoffrey Hinton on the impact and potential of AI](https://www.youtube.com/watch?v=IvUw9um4Bv8)
> [Interview with Ilya Sutskever, the core of OpenAI](https://www.youtube.com/watch?v=SGCFeIbpGlU&t=722s)
### Planning beyond
> [OpenAI's "Planning For AGI And Beyond"](https://astralcodexten.substack.com/p/openais-planning-for-agi-and-beyond)
> [Planning for AGI and beyond](https://openai.com/blog/planning-for-agi-and-beyond/)
> [The Day The AGI Was Born](https://lspace.swyx.io/p/everything-we-know-about-chatgpt)
### Design AGI
> [Human-centred mechanism design with Democratic AI - Nature Human Behaviour](https://www.nature.com/articles/s41562-022-01383-x)
> [Exclusive Q&A: John Carmack's 'Different Path' to Artificial General Intelligence](https://dallasinnovates.com/exclusive-qa-john-carmacks-different-path-to-artificial-general-intelligence)
### Checklist to AGI
> [Road to AGI v0.2](https://maraoz.com/road-to-agi/)
|
38ec1ab5796f472ca4475676519e29c1
|