Columns: title (string, 1-544 chars) · parent (string, 0-57) · created (date, 11-12 chars) · editor (1 class) · creator (4 classes) · edited (date, 11-12 chars) · refs (string, 0-536) · text (string, 1-26k) · id (string, 32 chars)
Streamlit Caching
Streamlit Usages
Jul 15, 2023
Alan Jo
Alan Jo
Jul 15, 2023
- `st.cache_data` - `st.cache_resource` > [Caching - Streamlit Docs](https://docs.streamlit.io/library/advanced-features/caching)
a22ccb53e76349dc9ccf31b616d84083
Streamlit Cloud
Streamlit Usages
May 24, 2023
Alan Jo
Alan Jo
May 24, 2023
> [Embed your app - Streamlit Docs](https://docs.streamlit.io/streamlit-community-cloud/get-started/embed-your-app)
8c045600c4314f6382bd3c05358dd2ba
Streamlit UI
Streamlit Usages
May 14, 2023
Alan Jo
Alan Jo
May 14, 2023
### Streamlit UIs |Title| |:-:| |[Streamlit Chat](https://texonom.com/streamlit-chat-73c8557f0d094b3fac7448e10f013cb4)| |[Streamlit Extras](https://texonom.com/streamlit-extras-02c8d0d7eebf4c03adce4e843305e2e7)| |[Streamlit Pills](https://texonom.com/streamlit-pills-cc0ce4c16ac44d1798242fc990f7eb92)|
4d1410d0bdb54e1ba4512e03f500b0b7
Streamlit Widget
Streamlit Usages
Jul 17, 2023
Alan Jo
Alan Jo
Jul 17, 2023
### Streamlit Widgets |Title| |:-:| |[Streamlit text_area](https://texonom.com/streamlit-textarea-d58bc341994a4cf9a6eec5095e6c5395)|
c4cd30e860ad4f8487ef2f4fcae564e6
Streamlit Chat
Streamlit UIs
May 14, 2023
Alan Jo
Alan Jo
May 14, 2023
73c8557f0d094b3fac7448e10f013cb4
Streamlit Extras
Streamlit UIs
May 14, 2023
Alan Jo
Alan Jo
May 14, 2023
02c8d0d7eebf4c03adce4e843305e2e7
Streamlit Pills
Streamlit UIs
May 14, 2023
Alan Jo
Alan Jo
May 14, 2023
cc0ce4c16ac44d1798242fc990f7eb92
Streamlit text_area
Streamlit Widgets
Jul 17, 2023
Alan Jo
Alan Jo
Jul 17, 2023
> [st.text_area - Streamlit Docs](https://docs.streamlit.io/library/api-reference/widgets/st.text_area)
d58bc341994a4cf9a6eec5095e6c5395
Hold-out Method
AI Generalization Methods
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
The given data is randomly partitioned into two independent sets; for the hold-out assumption to be valid, both sets must follow the **same distribution**
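A minimal sketch of the hold-out partition in plain Python (the 0.8 split ratio and fixed seed are illustrative assumptions):

```python
import random

def holdout_split(data, train_ratio=0.8, seed=0):
    """Randomly partition data into two independent sets (train, test).

    Random shuffling is what lets us assume both sets follow the same
    distribution as the original data.
    """
    rng = random.Random(seed)
    indices = list(range(len(data)))
    rng.shuffle(indices)
    cut = int(len(data) * train_ratio)
    train = [data[i] for i in indices[:cut]]
    test = [data[i] for i in indices[cut:]]
    return train, test

train, test = holdout_split(list(range(100)))
```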
cc7504f8f8db489fb474bd89cbbe0dd4
k-fold cross validation
AI Generalization Methods
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
k mutually exclusive subsets
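A sketch of generating the k mutually exclusive folds in plain Python (the striding scheme is one simple illustrative choice):

```python
def kfold_indices(n, k):
    """Yield (train, validation) index lists over range(n).

    The k folds are mutually exclusive and each fold serves exactly
    once as the validation set.
    """
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, val in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(kfold_indices(10, 5))
```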
f61ca1dbd3a5483998e9d6514b16c48d
Nested cross validation
AI Generalization Methods
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
[Hyperparameter](https://texonom.com/hyperparameter-ef7e34566add4e98b673d4cef59fca90)
- inner fold - outer fold Requires many training runs, which is costly if the dataset is large
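The inner/outer-loop cost can be counted with a small sketch (the fold counts and candidate count below are made-up illustrative numbers):

```python
def nested_cv_cost(outer_k, inner_k, n_candidates):
    """Count model trainings in nested cross validation.

    The inner folds tune hyperparameters; the outer folds estimate
    generalization. The cost multiplies, which is why nested CV is
    expensive on large datasets.
    """
    inner_fits = outer_k * inner_k * n_candidates  # hyperparameter search per outer fold
    outer_fits = outer_k                           # refit the best model per outer fold
    return inner_fits + outer_fits

cost = nested_cv_cost(outer_k=5, inner_k=3, n_candidates=4)  # 5*3*4 + 5 = 65 trainings
```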
c88ec0073c1d4dbda857ad019779559a
Random Sampling
AI Generalization Methods
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
A variation of the hold-out method
5879da4187bb4e1bafacedd6fe617149
Train/Validation/Test splitting
AI Generalization Methods
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
If dataset is large enough
bc5f600b344b455b994ca25671863ac8
Fine Tuning
Model Generalization Notion
Mar 7, 2023
Alan Jo
Alan Jo
Jun 22, 2023
### Task optimized ### Fine Tuning Notion |Title| |:-:| |[PEFT](https://texonom.com/peft-f47a178d75804abf928ec7e7be2da27f)| |[TRL](https://texonom.com/trl-ebc3e432e3984ca3b2a1cf20da0fa5d1)| |[DRO](https://texonom.com/dro-4701102d6c0a4545afe7f0bad178180b)| |[SFT](https://texonom.com/sft-f48f7e6eccd54a62bea82725fae98865)|
52215d8477ad46e3896a29e2fa408991
Generalization Gap
Model Generalization Notion
May 9, 2023
Alan Jo
Alan Jo
May 11, 2023
The test error is not necessarily close to the training error
d8b4ec347e7d4df282ea96a093e7581f
Overfitting
Model Generalization Notion
May 31, 2021
Alan Jo
Alan Jo
Jun 7, 2023
[Underfitting](https://texonom.com/underfitting-e56067d23ba8404a8e5165085986c9aa)
### Predicts the training dataset better than the test set, High [Variance](https://texonom.com/variance-08c1eccc7dc84957afbb815ad6b41280) A large number of parameters can cause overfitting. Models that are bigger or have more **capacity** are more likely to overfit. Overfitting issues are usually observed when the magnitude of parameters is large. If the data is general enough, overfitting is acceptable ### Resolve Overfitting |Title| |:-:| |[Non-parametric algorithm](https://texonom.com/non-parametric-algorithm-bc41f745a14c4837bd54368652da5982)| |[Regularized parameter](https://texonom.com/regularized-parameter-f3b208cdd37a4002a5d39ef990b8be33)| ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F7d045ebf-3253-4b1c-8305-57cb4088e985%2FUntitled.png?table=block&id=d0611322-665c-4cab-9e56-53d84fdf4f2a&cache=v2)
24c3b183372845e8999ad7f7a0ba5035
Pre Training
Model Generalization Notion
Mar 7, 2023
Alan Jo
Alan Jo
May 11, 2023
Training for a general purpose
0d81c286e86f4dcba940d3c849631b35
Underfitting
Model Generalization Notion
Mar 14, 2023
Alan Jo
Alan Jo
May 14, 2023
[Overfitting](https://texonom.com/overfitting-24c3b183372845e8999ad7f7a0ba5035)
### The training error is relatively large, High [Bias](https://texonom.com/bias-ba063cd622a54deb8a677e8fb87dfdc8)
e56067d23ba8404a8e5165085986c9aa
DRO
Fine Tuning Notion
Jun 29, 2023
Alan Jo
Alan Jo
Jun 29, 2023
### DRO Usages |Title| |:-:| |[Group DRO](https://texonom.com/group-dro-4877ea63979c44ed8149515d3b832397)| |[DRO-LM](https://texonom.com/dro-lm-564725ae9e7140de9e42d25310728af4)|
4701102d6c0a4545afe7f0bad178180b
PEFT
Fine Tuning Notion
Mar 7, 2023
Alan Jo
Alan Jo
Jul 15, 2023
[peft](https://github.com/huggingface/peft)
### Parameter-Efficient Fine-Tuning For large models, fine-tune only a subset of the weights ### PEFT Usages |Title| |:-:| |[LoRA](https://texonom.com/lora-fda3706be3674496898ad2e5e00007c9)| |[PEQA](https://texonom.com/peqa-500506697a2b458caa8386691757b29a)| > [How to fix a size mismatch when loading a LoRA checkpoint with PEFT](https://junbuml.ee/lora-ckpt-size-mismatch)
f47a178d75804abf928ec7e7be2da27f
SFT
Fine Tuning Notion
Jul 15, 2023
Alan Jo
Alan Jo
Jul 15, 2023
### Supervised Fine-Tuning
f48f7e6eccd54a62bea82725fae98865
TRL
Fine Tuning Notion
Jun 22, 2023
Alan Jo
Alan Jo
Jun 22, 2023
[trl](https://github.com/lvwerra/trl)
### Train transformer language models with reinforcement learning > [Fine-tuning 20B LLMs with RLHF on a 24GB consumer GPU](https://huggingface.co/blog/trl-peft) > [Jonas Kim / Fine-tuning a 20B LLM with RLHF on a 24GB consumer GPU | Careerly](https://careerly.co.kr/comments/79381)
ebc3e432e3984ca3b2a1cf20da0fa5d1
DRO-LM
DRO Usages
Jun 29, 2023
Alan Jo
Alan Jo
Jun 29, 2023
๊ฐ ๋„๋ฉ”์ธ์—์„œ ์ตœ์•…์˜ ๊ฒฝ์šฐ ํ•˜์œ„ ์ง‘ํ•ฉ์„ ์„ ํƒํ•˜์—ฌ ๋ชจ๋ธ์„ ์—…๋ฐ์ดํŠธ
564725ae9e7140de9e42d25310728af4
Group DRO
DRO Usages
Jun 29, 2023
Alan Jo
Alan Jo
Jun 29, 2023
### Group DRO Usages |Title| |:-:| |[DoReMi](https://texonom.com/doremi-dbb0435bc1cd4c94a30509cd0246e4d3)|
4877ea63979c44ed8149515d3b832397
DoReMi
Group DRO Usages
Jun 29, 2023
Alan Jo
Alan Jo
Jun 29, 2023
๋จผ์ € ์ž‘์€ ํ”„๋ก์‹œ ๋ชจ๋ธ์„ ์‚ฌ์šฉํ•˜์—ฌ Group DRO๋ฅผ ์ ์šฉํ•˜์—ฌ ๋„๋ฉ”์ธ ๊ฐ€์ค‘์น˜(ํ˜ผํ•ฉ ๋น„์œจ)๋ฅผ ์ƒ์„ฑ ๋„๋ฉ”์ธ ๊ฐ€์ค‘์น˜๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋ฐ์ดํ„ฐ์…‹์„ ์žฌ์ƒ˜ํ”Œ๋งํ•˜๊ณ  ๋” ํฐ ์ „์ฒด ๊ทœ๋ชจ์˜ ๋ชจ๋ธ์„ ํ›ˆ๋ จ ์ด๋ฅผ ํ†ตํ•ด DoReMi๋Š” ์‚ฌ์ „ ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ ๋„๋ฉ”์ธ์˜ ํ˜ผํ•ฉ ๋น„์œจ์„ ์กฐ์ •ํ•˜์—ฌ ์–ธ์–ด ๋ชจ๋ธ์˜ ์„ฑ๋Šฅ์„ ์ตœ์ ํ™”
dbb0435bc1cd4c94a30509cd0246e4d3
LoRA
PEFT Usages
Jun 22, 2023
Alan Jo
Alan Jo
Jun 22, 2023
### Low-Rank Adaptation Allows the parameters of a large model to be trained with little computing resource. With the full weights frozen, separate parameters are added to each Transformer layer and only those parameters are trained ### LoRA Usages |Title| |:-:| |[QLoRA](https://texonom.com/qlora-c6b36db321c4470bbfe72804c4c43409)| |[AdaLoRa](https://texonom.com/adalora-e93b8dedde8542e99be72d6509ecfbae)| > [QLoRA: a technique for training large language models with little GPU memory](https://doooob.tistory.com/1029) > [QLoRA: fine-tuning a 65B model on a 48GB GPU?](https://discuss.pytorch.kr/t/qlora-48gb-gpu-65b/1682)
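The low-rank idea can be sketched in NumPy (the dimensions, init scales, and names below are illustrative assumptions, not the reference implementation):

```python
import numpy as np

d, r = 64, 4                              # hidden size and low rank, r << d
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))           # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialized

def lora_forward(x):
    # Only the low-rank delta B @ A is trained: 2*d*r parameters instead of d*d.
    return x @ (W + B @ A).T

x = rng.standard_normal((2, d))
y = lora_forward(x)                       # with B = 0 this equals the pretrained output
```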
fda3706be3674496898ad2e5e00007c9
PEQA
PEFT Usages
Jul 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
### Parameter-Efficient Quantization-aware Adaptation Enables fine-tuning that occupies far less memory than LoRA; the result comes out in 3/4-bit weight-only uniformly quantized form > [Memory-Efficient Fine-Tuning of Compressed Large Language Models...](https://arxiv.org/abs/2305.14152)
500506697a2b458caa8386691757b29a
AdaLoRa
LoRA Usages
Jul 9, 2023
Alan Jo
Alan Jo
Jul 9, 2023
### Adaptively allocates the parameter budget among weight matrices according to their importance score Effective pruning of unimportant updates reduces their parameter budget while circumventing intensive exact SVD computations > [AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning](https://arxiv.org/pdf/2303.10512.pdf)
e93b8dedde8542e99be72d6509ecfbae
QLoRA
LoRA Usages
Jun 22, 2023
Alan Jo
Alan Jo
Jul 9, 2023
[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)
### LoRA + [Model Quantization](https://texonom.com/model-quantization-88320068bdd94ddab6f44c0c7d66de31) 4-bit Normalized FP + Double Quantization + Paged Optimizer = Memory Optimization ### Implementation - [gptqlora](https://github.com/qwopqwop200/gptqlora) - [qlora](https://github.com/artidoro/qlora) > [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes) > [QLoRA: Fine-Tune a Large Language Model on Your GPU](https://towardsdatascience.com/qlora-fine-tune-a-large-language-model-on-your-gpu-27bed5a03e2b) > [QLoRA: Efficient Finetuning of Quantized LLMs](https://arxiv.org/abs/2305.14314) > [QLoRA: fine-tuning a 65B model on a 48GB GPU?](https://discuss.pytorch.kr/t/qlora-48gb-gpu-65b/1682)
c6b36db321c4470bbfe72804c4c43409
Non-parametric algorithm
Resolve Overfitting
Mar 14, 2023
Alan Jo
Alan Jo
Mar 14, 2023
It gives non-negative valued weight to each training example
bc41f745a14c4837bd54368652da5982
Regularized parameter
Resolve Overfitting
Mar 14, 2023
Alan Jo
Alan Jo
Mar 27, 2023
[Regularization](https://texonom.com/regularization-c85433c8c0554eba8edf0035b7fd334c)
It uses an additional regularizer term to decrease the magnitude of the parameters
f3b208cdd37a4002a5d39ef990b8be33
Knowledge Distillation
Model Optimization Notion
Jun 4, 2023
Alan Jo
Alan Jo
Jul 1, 2023
[knowledge-distillation-pytorch](https://github.com/haitongli/knowledge-distillation-pytorch) [kdtf](https://github.com/DushyantaDhyani/kdtf) [Soft Label](https://texonom.com/soft-label-a9e9a46dd208446cb9511764a0052c86) [AI Ensemble](https://texonom.com/ai-ensemble-03bf3a1926dd46f18400b5830d8fdf0b) [Transfer Learning](https://texonom.com/transfer-learning-442feb66465944eebf144d4e9dd1dbf8)
### Distillation from a teacher network to a student network (fewer parameters) NIPS 2014 [Geoffrey Hinton](https://texonom.com/geoffrey-hinton-441d5ce2b78146d0935454042b4f06d9), Oriol Vinyals, [Jeff Dean](https://texonom.com/jeff-dean-dd38bba08cea419eb45e1029e7c3aa15) Pre-trained Teacher network → Student network, lighter than an Ensemble ### Knowledge Distillation Notion |Title| |:-:| |[Teacher Network](https://texonom.com/teacher-network-bf45e9f87cd3474992c2ab38b3eaae9d)| |[Student Network](https://texonom.com/student-network-894b29040685444e9d3d07919ffe9342)| |[Hinton's KD](https://texonom.com/hintons-kd-5dd4cda27117403d9154a069dc7e54f4)| |[Dark Knowledge](https://texonom.com/dark-knowledge-180df27f31144a66b429d420841de821)| |[Distillation loss](https://texonom.com/distillation-loss-cbc11ca184d24defbacc267d3bcfd641)| |[Instruction Tuning](https://texonom.com/instruction-tuning-f1620fadb6694407b678276d09e077a3)| ### Knowledge Distillation Usages |Title| |:-:| |[LaMini LM](https://texonom.com/lamini-lm-7312cefcc508434aac19acbf61cd2968)| ### [Geoffrey Hinton](https://texonom.com/geoffrey-hinton-441d5ce2b78146d0935454042b4f06d9), [Jeff Dean](https://texonom.com/jeff-dean-dd38bba08cea419eb45e1029e7c3aa15) > [Distilling the Knowledge in a Neural Network](https://arxiv.org/abs/1503.02531) > [Knowledge Distillation: distilling the knowledge of deep-learning models](https://baeseongsu.github.io/posts/knowledge-distillation/) > [Deep-learning terminology: understanding Knowledge distillation](https://light-tree.tistory.com/196)
d5f32ad3da32434892a9765d68d31542
Model Optimizer
Model Optimization Notion
Jun 18, 2023
Alan Jo
Alan Jo
Jun 18, 2023
[Stochastic Gradient Descent](https://texonom.com/stochastic-gradient-descent-d8b8d008e0a34f4bb55175ffba21db44)
### Model Optimizers |Title| |:-:| |[Adam Optimizer](https://texonom.com/adam-optimizer-286be3ab866642d8bcdf8792f7b5608f)| |[AdamW Optimizer](https://texonom.com/adamw-optimizer-495cda99414e451ba562e84331a2f0f6)| |[Sophia Optimizer](https://texonom.com/sophia-optimizer-eee263eba00c4658909fc0230eb4338c)| |[Adagrad](https://texonom.com/adagrad-ff51b7bdde5443b99cbd2823853d704f)| |[RMSprop](https://texonom.com/rmsprop-fddbbccce2aa4f6691f85ed10f2edc86)| > [[Paper review] Let's look at AdamW! Decoupled weight decay regularization paper review (1)](https://hiddenbeginner.github.io/deeplearning/paperreview/2019/12/29/paper_review_AdamW.html)
fcac7c38aa0647afb911eb84ff610ab1
Model Quantization
Model Optimization Notion
Jun 7, 2023
Alan Jo
Alan Jo
Jul 20, 2023
### Reduce memory and model size, improve inference speed (max 32/bit multi) - Not every layer can be quantized - Not every model reacts the same way to quantization ### Model Quantization Notion |Title| |:-:| |[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)| |[Post-training quantization](https://texonom.com/post-training-quantization-ee4a0b4f02184b1193a68073dc60800e)| |[Quantization Module Fusion](https://texonom.com/quantization-module-fusion-56a36e892c93498fb404da0c816549b4)| |[Output Dequantization](https://texonom.com/ouput-dequantization-b0002e5322cb4acbb9171daea3d6fd87)| |[Quantization Calibration](https://texonom.com/quantization-calibration-1e3675b171a44c1cb2b7d365e117e22b)| |[Quantization Formula](https://texonom.com/quantization-formular-9f59115b5e7c438698ad3a9281d01d89)| |[Quantization Error](https://texonom.com/quantization-error-789326c5a6294b44bab927096fe3f576)| |[Quantization Clipping Range](https://texonom.com/quantization-clipping-range-d12231d52ec647f0b6df771e4b6514f1)| ### Model Quantization Usages |Title| |:-:| |[Model Quantization Algorithm](https://texonom.com/model-quantization-algorithm-98bef3c3fe7e4bcc90df5144d7a42003)| |[Model Quantization Tool](https://texonom.com/model-quantization-tool-384bc2583bcb498e9331ffc804315253)| ### 4bit or 8bit > [The case for 4-bit precision: k-bit Inference Scaling Laws](https://arxiv.org/abs/2212.09720)
88320068bdd94ddab6f44c0c7d66de31
Dark Knowledge
Knowledge Distillation Notion
Jun 4, 2023
Alan Jo
Alan Jo
Jun 19, 2023
ํฐ ๋ชจ๋ธ์ด ๊ฐ€์ง€๊ณ  ์žˆ๋Š” ์ถ”๊ฐ€์ ์ธ ์ •๋ณด๋ฅผ ์ž‘์€ ๋ชจ๋ธ์—๊ฒŒ ์ „๋‹ฌํ•˜๋Š” ๊ฒƒ์„ ์˜๋ฏธ ์ผ๋ฐ˜์ ์ธ ๊ต์œก ๋ฐ์ดํ„ฐ์—์„œ ์–ป์„ ์ˆ˜ ์—†๋Š” ์ถ”๊ฐ€์ ์ธ ์ง€์‹ ๋ถˆํ™•์‹ค์„ฑ ์ •๋ณด๋‚˜ ํด๋ž˜์Šค ๊ฐ„ ์ƒ๋Œ€์ ์ธ ์œ ์‚ฌ์„ฑ Dark Knowledge๋ฅผ ์ž˜ ํ™œ์šฉํ•˜๋ฉด ์ž‘์€ ๋ชจ๋ธ์ด ๋” ๋‚˜์€ ์„ฑ๋Šฅ์„ ๋ฐœํœ˜
180df27f31144a66b429d420841de821
Distillation loss
Knowledge Distillation Notion
Jun 4, 2023
Alan Jo
Alan Jo
Jun 19, 2023
์ž‘์€ ๋ชจ๋ธ์ด ํฐ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ๊ณผ ์œ ์‚ฌํ•œ ์ถœ๋ ฅ์„ ๋‚ด๋„๋ก ํ•˜๋Š” ์†์‹ค ํ•จ์ˆ˜ ํฐ ๋ชจ๋ธ๊ณผ ์ž‘์€ ๋ชจ๋ธ์˜ ์ถœ๋ ฅ ๋ถ„ํฌ ๊ฐ„์˜ ์ฐจ์ด๋ฅผ ์ตœ์†Œํ™”ํ•˜๋Š” ๋ฐฉ์‹์œผ๋กœ Distillation loss๋ฅผ ์ •์˜
cbc11ca184d24defbacc267d3bcfd641
Hintonโ€™s KD
Knowledge Distillation Notion
Jun 4, 2023
Alan Jo
Alan Jo
Jun 19, 2023
## Hinton's Knowledge Distillation **Very effective at improving a small model's performance** A method of training a large and a small model together so that the small model, although it has fewer parameters, can achieve performance similar to the large model. Uses a distillation loss that makes the small model follow the large model's output distribution
5dd4cda27117403d9154a069dc7e54f4
Instruction Tuning
Knowledge Distillation Notion
Jul 1, 2023
Alan Jo
Alan Jo
Jul 9, 2023
### Teach language models to follow instructions to solve a task Fine-tune less powerful LLMs by using the output of a teacher LLM as a training target for supervised fine-tuning of another LLM. Uses a collection of NLP tasks described through instructions > Industrially, the key point is that anyone can easily build such a dataset ### Instruction Tuning Notion |Title| |:-:| |[FLAN](https://texonom.com/flan-1a75823bb4b645e690ee6c20fcafb2c9)| |[Self Instruct Tuning](https://texonom.com/self-instruct-tuning-061b12de01344308823ec115c5a17cc2)| |[Evol Instruct Tuning](https://texonom.com/evol-instruct-tuning-3f73fb4abf574b1d87b9952e39d67196)| |[Open Instruct](https://texonom.com/open-instruct-bbb16a58ec6445498a145c4ca37c2e5a)| > [Imitation Models and the Open-Source LLM Revolution](https://cameronrwolfe.substack.com/p/imitation-models-and-the-open-source)
f1620fadb6694407b678276d09e077a3
Student Network
Knowledge Distillation Notion
Jun 4, 2023
Alan Jo
Alan Jo
Jun 19, 2023
894b29040685444e9d3d07919ffe9342
Teacher Network
Knowledge Distillation Notion
Jun 4, 2023
Alan Jo
Alan Jo
Jun 19, 2023
bf45e9f87cd3474992c2ab38b3eaae9d
Evol Instruct Tuning
Instruction Tuning Notion
Jul 9, 2023
Alan Jo
Alan Jo
Jul 9, 2023
> [WizardLM (WizardLM)](https://huggingface.co/WizardLM) > [WizardCoder: Empowering Code Large Language Models with Evol-Instruct](https://arxiv.org/abs/2306.08568)
3f73fb4abf574b1d87b9952e39d67196
FLAN
Instruction Tuning Notion
Jun 11, 2023
Alan Jo
Alan Jo
Jul 9, 2023
## Finetuned Language Models are Zero-Shot Learners A method that fine-tunes on instruction datasets and thereby improves zero-shot performance. There is a mismatch between the LM objective and human preferences, which [RLHF](https://texonom.com/rlhf-4b184f9c9e8b4c7a8861fb6374e91aa6) improves on [Zero shot learning](https://texonom.com/zero-shot-learning-8c92d9386f6648f5b877cb593ec2747b) > [Introducing FLAN: More generalizable Language Models with Instruction Fine-Tuning](https://ai.googleblog.com/2021/10/introducing-flan-more-generalizable.html?m=1) > [What is Instruction Tuning?](https://velog.io/@nellcome/Instruction-Tuning์ด๋ž€) > [Finetuned Language Models Are Zero-Shot Learners](https://arxiv.org/abs/2109.01652)
1a75823bb4b645e690ee6c20fcafb2c9
Open Instruct
Instruction Tuning Notion
Jul 9, 2023
Alan Jo
Alan Jo
Jul 9, 2023
[AllenAI](https://texonom.com/allenai-930c776c8fdd4e358032803f037cd6fa)
[open-instruct](https://github.com/allenai/open-instruct) [Tulu](https://texonom.com/tulu-4dc404191ca748ae9194c1218abf2b9f) > [allenai/tulu-7b ยท Hugging Face](https://huggingface.co/allenai/tulu-7b) > [How Far Can Camels Go? Exploring the State of Instruction Tuning...](https://arxiv.org/abs/2306.04751)
bbb16a58ec6445498a145c4ca37c2e5a
Self Instruct Tuning
Instruction Tuning Notion
Jul 9, 2023
Alan Jo
Alan Jo
Jul 9, 2023
### Self Instruct Tuning Usages |Title| |:-:| |[Airoboros](https://texonom.com/airoboros-d696afdc961946019af06539b1ce82a0)| > [Self-Instruct: Aligning Language Models with Self-Generated Instructions](https://arxiv.org/abs/2212.10560)
061b12de01344308823ec115c5a17cc2
Tulu
Open Instruct
null
null
null
null
null
> [allenai/tulu-65b ยท Hugging Face](https://huggingface.co/allenai/tulu-65b)
4dc404191ca748ae9194c1218abf2b9f
Airoboros
Self Instruct Tuning Usages
Jul 9, 2023
Alan Jo
Alan Jo
Jul 28, 2023
[airoboros](https://github.com/jondurbin/airoboros)
d696afdc961946019af06539b1ce82a0
LaMini LM
Knowledge Distillation Usages
Jun 19, 2023
Alan Jo
Alan Jo
Jun 25, 2023
[LaMini-LM](https://github.com/mbzuai-nlp/lamini-lm) ### Dataset > [MBZUAI/LaMini-instruction ยท Datasets at Hugging Face](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) > [jncraton/LaMini-Flan-T5-77M-ct2-int8 ยท Hugging Face](https://huggingface.co/jncraton/LaMini-Flan-T5-77M-ct2-int8)
7312cefcc508434aac19acbf61cd2968
Adagrad
Model Optimizers
Jul 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
Applies a different learning rate to each parameter; parameters that have changed a lot get a smaller learning rate
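A minimal NumPy sketch of the per-parameter update (the learning rate and gradient values are illustrative):

```python
import numpy as np

def adagrad_step(theta, grad, cache, lr=0.1, eps=1e-8):
    """Adagrad: accumulate squared gradients per parameter, so parameters
    that keep receiving large gradients get an effectively smaller step."""
    cache = cache + grad ** 2
    theta = theta - lr * grad / (np.sqrt(cache) + eps)
    return theta, cache

theta0, cache = np.zeros(2), np.zeros(2)
grad = np.array([1.0, 0.01])
theta1, cache = adagrad_step(theta0, grad, cache)  # first step
theta2, cache = adagrad_step(theta1, grad, cache)  # same gradient, damped step
```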
ff51b7bdde5443b99cbd2823853d704f
Adam Optimizer
Model Optimizers
Jun 18, 2023
Alan Jo
Alan Jo
Jul 6, 2023
### RMSprop + Momentum
286be3ab866642d8bcdf8792f7b5608f
AdamW Optimizer
Model Optimizers
Jun 18, 2023
Alan Jo
Alan Jo
Jun 18, 2023
495cda99414e451ba562e84331a2f0f6
RMSprop
Model Optimizers
Jul 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
fddbbccce2aa4f6691f85ed10f2edc86
Sophia Optimizer
Model Optimizers
Jun 22, 2023
Alan Jo
Alan Jo
Jun 22, 2023
![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fe854c26e-dfc4-4c4a-a214-f5ec9b2d17be%2FUntitled.png?table=block&id=5b995fac-6f50-47cc-b301-57fae69b11bc&cache=v2) ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F0e19cd19-6b44-49a8-bf8f-acb8ab16597c%2FUntitled.png?table=block&id=b20b53b2-8865-4296-8f62-8f7954747900&cache=v2) > [Sophia: A Scalable Stochastic Second-order Optimizer for Language...](https://arxiv.org/abs/2305.14342)
eee263eba00c4658909fc0230eb4338c
Output Dequantization
Model Quantization Notion
Jul 5, 2023
Alan Jo
Alan Jo
Jul 5, 2023
Finally, convert the output obtained from inference back to floating point - Affine Quantization Mapping - Scale Quantization Mapping
b0002e5322cb4acbb9171daea3d6fd87
Post-training quantization
Model Quantization Notion
Jul 2, 2023
Alan Jo
Alan Jo
Jul 15, 2023
[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)
## PTQ For large models with a big parameter size the accuracy drop is small, but for small models the drop is large > [Post-training quantization | TensorFlow Model Optimization](https://www.tensorflow.org/model_optimization/guide/quantization/post_training) > [Quantization in deep learning and Quantization Aware Training](https://gaussian37.github.io/dl-concept-quantization/)
ee4a0b4f02184b1193a68073dc60800e
Quantization Aware Training
Model Quantization Notion
Jul 2, 2023
Alan Jo
Alan Jo
Jul 5, 2023
[Post-training quantization](https://texonom.com/post-training-quantization-ee4a0b4f02184b1193a68073dc60800e)
## QAT A method that simulates, at training time, the effect that quantization will have at inference time, and back-propagates on that basis; the performance drop is small even for small models ![Red node is fake quantization node (act means activation, wt means weight)](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fb8ec40a4-602a-4401-b830-8370b1508620%2FUntitled.png?table=block&id=382f2a9f-94b7-45ed-93e8-2a6e56201701&cache=v2) > [Quantization aware training | TensorFlow Model Optimization](https://www.tensorflow.org/model_optimization/guide/quantization/training) > [Inside Quantization Aware Training](https://towardsdatascience.com/inside-quantization-aware-training-4f91c8837ead) > [Quantization in deep learning and Quantization Aware Training](https://gaussian37.github.io/dl-concept-quantization/)
e0fe4518abdc43c2ad661911b87a597c
Quantization Calibration
Model Quantization Notion
Jul 2, 2023
Alan Jo
Alan Jo
Jul 5, 2023
Per hardware target, using a calibration dataset
1e3675b171a44c1cb2b7d365e117e22b
Quantization Clipping Range
Model Quantization Notion
Jul 5, 2023
Alan Jo
Alan Jo
Jul 5, 2023
Whether the range is centered on zero - Symmetric Quantization - Asymmetric Quantization ### Whether the range is decided at inference time or at quantization time - Dynamic Quantization, i.e. input-dependent, with better accuracy - Static Quantization
d12231d52ec647f0b6df771e4b6514f1
Quantization Error
Model Quantization Notion
Jul 5, 2023
Alan Jo
Alan Jo
Jul 5, 2023
789326c5a6294b44bab927096fe3f576
Quantization Formula
Model Quantization Notion
Jul 5, 2023
Alan Jo
Alan Jo
Jul 5, 2023
- minmax - histogram
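The min-max variant can be sketched as an affine mapping onto 8-bit integers (the unsigned 8-bit target and the sample input are illustrative assumptions):

```python
import numpy as np

def quantize_minmax(x, bits=8):
    """Min-max (affine) quantization: map [x.min(), x.max()] onto [0, 2^bits - 1]."""
    qmin, qmax = 0, 2 ** bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate floats from the integer codes."""
    return (q.astype(np.float32) - zero_point) * scale

x = np.linspace(-1.0, 1.0, 11, dtype=np.float32)
q, scale, zp = quantize_minmax(x)
x_hat = dequantize(q, scale, zp)   # round-trip error is bounded by roughly one scale step
```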
9f59115b5e7c438698ad3a9281d01d89
Quantization Module Fusion
Model Quantization Notion
Jul 5, 2023
Alan Jo
Alan Jo
Jul 5, 2023
Fuse layers together, such as Conv-BatchNorm-ReLU, and quantize them as one unit
56a36e892c93498fb404da0c816549b4
Model Quantization Algorithm
Model Quantization Usages
Jul 5, 2023
Alan Jo
Alan Jo
Jul 10, 2023
[Hessian Matrix](https://texonom.com/hessian-matrix-e1ebbf5284284a1793233973648ef0b6)
### Model Quantization Algorithms |Title| |:-:| |[GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6)| |[SparseGPT](https://texonom.com/sparsegpt-c3c4b078dd324442b89494f9a7106fc1)| |[LUT Gemm](https://texonom.com/lut-gemm-2540ab1497b3494cb2754bb237c9a543)| |[BCQ](https://texonom.com/bcq-b2d90a9433024c2fb763d14117f371b0)| |[SpQR](https://texonom.com/spqr-b884c9f3d8cc449fb8612b874e6ad693)| |[HAWQ](https://texonom.com/hawq-7c779f225dc54e1c827a9e50ae195949)|
98bef3c3fe7e4bcc90df5144d7a42003
Model Quantization Tool
Model Quantization Usages
Jun 7, 2023
Alan Jo
Alan Jo
Jul 9, 2023
### Model Quantization Tools |Title| |:-:| |[AutoGPTQ](https://texonom.com/autogptq-8a1d898788434aa2bc00fb43fd34411d)| |[bitsandbytes](https://texonom.com/bitsandbytes-1575b433faaf455ba0d86cf5f7e5190b)| ### Model Quantization Inference Tools |Title| |:-:| |[ExLLaMa](https://texonom.com/exllama-250efbb669814c8e9ef7f85d902fc919)| |[GPTQ-for-LLaMa](https://texonom.com/d292b2922f684a35aa126a84abe5075c)| |[PyLLaMA](https://texonom.com/pyllama-9e4fc8a8f08c4c6eaee8600943a621f8)|
384bc2583bcb498e9331ffc804315253
BCQ
Model Quantization Algorithms
Jul 17, 2023
Alan Jo
Alan Jo
Jul 17, 2023
[transformer_bcq](https://github.com/insoochung/transformer_bcq) > [Ins๐Ÿ™‚๐Ÿ™ƒ Chung - Sub-3bit quantization](https://sites.google.com/view/insoochung/sub-3bit-quantization)
b2d90a9433024c2fb763d14117f371b0
GPTQ
Model Quantization Algorithms
Jun 7, 2023
Alan Jo
Alan Jo
Jul 16, 2023
[gptq](https://github.com/IST-DASLab/gptq) [Post-training quantization](https://texonom.com/post-training-quantization-ee4a0b4f02184b1193a68073dc60800e)
### SOTA one-shot weight quantization method 1. Arbitrary Order insights 2. Lazy batch-updates - relieves the memory-bandwidth bottleneck 3. Cholesky Reformulation Sorts the weights by smallest quantization error and performs the computation in that order [GPTQ Act Order](https://texonom.com/gptq-act-order-50d18e83ed7e4e388b753f0dc6db3a97) [GPTQ True Sequential](https://texonom.com/gptq-true-sequential-1958b85809814f3ca5c4d7d8942d0f30) second-order information, 4-bit quantization ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Ffc9a9557-ca0b-4b27-bb2f-01ca097fea0c%2FUntitled.png?table=block&id=4e8f8f69-7ace-437f-8cda-454842f85589&cache=v2) > [GPTQ: Accurate Post-Training Quantization for Generative...](https://arxiv.org/abs/2210.17323) > [gptq](https://pypi.org/project/gptq/)
87428aee2a774b93906ab2213a1b6dc6
HAWQ
Model Quantization Algorithms
Jul 10, 2023
Alan Jo
Alan Jo
Jul 10, 2023
[Quantization Aware Training](https://texonom.com/quantization-aware-training-e0fe4518abdc43c2ad661911b87a597c)
> [HAWQ: Hessian AWare Quantization of Neural Networks with Mixed-Precision](https://arxiv.org/abs/1905.03696)
7c779f225dc54e1c827a9e50ae195949
LUT Gemm
Model Quantization Algorithms
Jun 18, 2023
Alan Jo
Alan Jo
Jul 5, 2023
> [LUT-GEMM: Quantized Matrix Multiplication based on LUTs for...](https://arxiv.org/abs/2206.09557)
2540ab1497b3494cb2754bb237c9a543
SparseGPT
Model Quantization Algorithms
Jun 18, 2023
Alan Jo
Alan Jo
Jul 5, 2023
[Neural Magic](https://texonom.com/neural-magic-021ab1b8be0d4f95b8ae6278c08e1562) [sparsegpt](https://github.com/IST-DASLab/sparsegpt)
- [sparseml](https://github.com/neuralmagic/sparseml) - [deepsparse](https://github.com/neuralmagic/deepsparse) - [sparsezoo](https://github.com/neuralmagic/sparsezoo)
c3c4b078dd324442b89494f9a7106fc1
SpQR
Model Quantization Algorithms
Jun 25, 2023
Alan Jo
Alan Jo
Jul 5, 2023
1. Quantized weights 2. First- and second-level quantization statistics, themselves quantized 3. CSR outlier indices and values > [SpQR: A Sparse-Quantized Representation for Near-Lossless LLM...](https://arxiv.org/abs/2306.03078)
b884c9f3d8cc449fb8612b874e6ad693
GPTQ Act Order
GPTQ
null
null
null
null
null
### activation order GPTQ heuristic quantizes columns in order of decreasing activation size

```python
if actorder:
    perm = torch.argsort(torch.diag(H), descending=True)
    W = W[:, perm]
    H = H[perm][:, perm]
```
50d18e83ed7e4e388b753f0dc6db3a97
GPTQ True Sequential
GPTQ
null
null
null
null
null
Sequential quantization even within a single Transformer block

```python
if args.true_sequential:
    sequential = [
        ['self_attn.k_proj', 'self_attn.v_proj', 'self_attn.q_proj'],
        ['self_attn.o_proj'],
        ['mlp.up_proj', 'mlp.gate_proj'],
        ['mlp.down_proj'],
    ]
```
1958b85809814f3ca5c4d7d8942d0f30
[GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa)
Model Quantization Inference Tools
Jul 9, 2023
Alan Jo
Alan Jo
Jul 9, 2023
d292b2922f684a35aa126a84abe5075c
ExLLaMa
Model Quantization Inference Tools
Jul 9, 2023
Alan Jo
Alan Jo
Aug 5, 2023
[GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6) [LLaMA](https://texonom.com/llama-f2b6721202d44d469add84d8a366809c) [exllama](https://github.com/turboderp/exllama)
### WebUI is good
250efbb669814c8e9ef7f85d902fc919
PyLLaMA
Model Quantization Inference Tools
Jul 16, 2023
Alan Jo
Alan Jo
Jul 16, 2023
[pyllama](https://github.com/juncongmoo/pyllama)
9e4fc8a8f08c4c6eaee8600943a621f8
AutoGPTQ
Model Quantization Tools
Jun 7, 2023
Alan Jo
Alan Jo
Jul 9, 2023
[AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) [GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6) [AdaLoRa](https://texonom.com/adalora-e93b8dedde8542e99be72d6509ecfbae)
Needs CUDA; based on the GPTQ algorithm ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F3cf2e57f-8df2-4065-a537-9fd7e9cd649f%2FUntitled.png?table=block&id=9943d6a2-bdf4-4d03-9a06-dd484caed62f&cache=v2) [AutoGPTQ Triton](https://texonom.com/autogptq-triton-030b1c01209341e295590fd19f97b09e) [AutoGPTQ Quantization](https://texonom.com/autogptq-quantization-8fb40b7620ba4353ba12ea6b1ac14b75)

```shell
pip install auto-gptq
```
8a1d898788434aa2bc00fb43fd34411d
bitsandbytes
Model Quantization Tools
Jun 7, 2023
Alan Jo
Alan Jo
Jul 9, 2023
[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) [GPTQ](https://texonom.com/gptq-87428aee2a774b93906ab2213a1b6dc6)
### 8-bit CUDA functions for PyTorch
1575b433faaf455ba0d86cf5f7e5190b
AutoGPTQ Quantization
AutoGPTQ
null
null
null
null
null
[CUDA inference: issue with group_size = 1024 + desc_act = False. (Triton unaffected)](https://github.com/PanQiWei/AutoGPTQ/issues/83) A `quantize(traindataset)` example is provided there
8fb40b7620ba4353ba12ea6b1ac14b75
AutoGPTQ Triton
AutoGPTQ
null
null
null
null
null
[CUDA inference: issue with group_size = 1024 + desc_act = False. (Triton unaffected)](https://github.com/PanQiWei/AutoGPTQ/issues/83)
030b1c01209341e295590fd19f97b09e
Drop-out
Model Regularization Notion
Jun 7, 2023
Alan Jo
Alan Jo
Jun 7, 2023
### Drop-out Rate Removes unnecessary neurons; the rate is commonly set to 0.5 > [[Deep learning] What is Drop-out and why do we use it?](https://heytech.tistory.com/127)
596f55ab03f64e11bf6d02464465dd54
Model Complexity
Model Regularization Notion
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
[Sparsity of the Model](https://texonom.com/sparsity-of-the-model-80d3dc9702704c03bf7bfd9074d19829)
0975b8ae1d4e4d83bbab43a145011b95
Model Regularization Parameter
Model Regularization Notion
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
641d1784e1f243818d51cc66541e3f21
Model Regularizer
Model Regularization Notion
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
- a nonnegative function [L2 Norm](https://texonom.com/l2-norm-38c15917350a4c82a11003474ac7d280)
19ff79e831de47d88bd5f9ec496ef11f
Regularized Loss
Model Regularization Notion
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
Optimize the loss function plus a regularizer that penalizes model complexity $$J_\lambda(\theta) = J(\theta) + \lambda R(\theta)$$
3d5b08feb6604023988d748b78650af7
Sparsity of the Model
Model Regularization Notion
May 11, 2023
Alan Jo
Alan Jo
May 11, 2023
### Reduces the number of meaningless non-zero parameters, thereby reducing [Model Complexity](https://texonom.com/model-complexity-0975b8ae1d4e4d83bbab43a145011b95) [L1 Norm](https://texonom.com/l1-norm-d316024c475e4eb691785783756bce57) is not differentiable everywhere, so it cannot be used directly in gradient descent [L0 Norm](https://texonom.com/l0-norm-47471a6f6dea484fbf30a4c46cde8152) L0 and L1 regularization push some of the model's parameters toward exactly 0
80d3dc9702704c03bf7bfd9074d19829
Weight Decay
Model Regularization Notion
May 11, 2023
Alan Jo
Alan Jo
Jun 7, 2023
The gradient step on the regularized loss is equivalent to shrinking/decaying ฮธ by a scalar factor of $1 - \mu \lambda$ and then applying the standard gradient step; that coefficient is the decaying weight when using the [L2 Norm](https://texonom.com/l2-norm-38c15917350a4c82a11003474ac7d280) $$L_{reg} = \lambda\frac{1}{2}||w||_2^2$$
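The equivalence can be checked numerically; the gradient values below are toy numbers for illustration.

```python
import numpy as np

# One SGD step on L_reg = J(w) + (lam/2)*||w||^2 with learning rate mu:
#   w <- w - mu * (grad_J(w) + lam * w)
# is algebraically the same as decaying w by (1 - mu*lam) first:
#   w <- (1 - mu*lam) * w - mu * grad_J(w)
mu, lam = 0.1, 0.5
w = np.array([2.0, -1.0])
grad_J = np.array([0.4, 0.2])  # toy gradient of the data loss

step_explicit = w - mu * (grad_J + lam * w)
step_decay = (1 - mu * lam) * w - mu * grad_J
```

This is why L2 regularization with plain SGD is commonly called weight decay.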
e69932d6128f4677a219a626db462172
AI Alignment
AI Problems
Aug 23, 2020
Alan Jo
Alan Jo
Jul 18, 2023
[Wireheading](https://texonom.com/wireheading-1eb526ecdf344731bebc47751739e2f4)
## Alignment Problem **A Maximally Curious AI Would Not Be Safe For Humanity** ### AI is aligned with an operator - AI is trying to do what the operator wants it to do **Controllability and reliability** Aligned doesn't mean perfect. Misalignment is the mismatch between the behavior that was taught and the behavior that occurs. Alignment must progress faster than the model's capability. We will likely need another neural network that inspects and interprets the internals of a neural network ### AI Alignment Notion |Title| |:-:| |[stop button problem](https://texonom.com/stop-button-problem-aaf61e72369d42469c29620c32e8bf9d)| |[Moral Learning](https://texonom.com/moral-learning-7c9640ac1c1d409b82d9a975949132ee)| |[AI Safety](https://texonom.com/ai-safety-fa61ce3973f34532a7e212335d0f7c81)| |[Wireheading](https://texonom.com/wireheading-1eb526ecdf344731bebc47751739e2f4)| |[AI Doom](https://texonom.com/ai-doom-d673760c79ac4248b40f456dc33f306f)| |[Waluigi Effect](https://texonom.com/waluigi-effect-47e1c2f145cd4c62ba163c4828bb8dc6)| > [Contra The xAI Alignment Plan](https://astralcodexten.substack.com/p/contra-the-xai-alignment-plan) ### Bill Gates > [The risks of AI are real but manageable](https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable) > [OpenAI is forming a new team to bring 'superintelligent' AI under control](https://techcrunch.com/2023/07/05/openai-is-forming-a-new-team-to-bring-superintelligent-ai-under-control) > [AI alignment](https://en.wikipedia.org/wiki/AI_alignment) > [What could a solution to the alignment problem look like?](https://aligned.substack.com/p/alignment-solution)
f676f1a29ffd45e19b3d170afa4f2244
AI Hacking
AI Problems
Jul 10, 2023
Alan Jo
Alan Jo
Aug 2, 2023
[DAN](https://texonom.com/dan-3cfbf270af6e4b3dabbf63c4b50e04c5) [AI Alignment](https://texonom.com/ai-alignment-f676f1a29ffd45e19b3d170afa4f2244) [llm-attacks](https://github.com/llm-attacks/llm-attacks)
### AI Hacking Methods |Title| |:-:| |[Deep Learning Backdoor](https://texonom.com/deep-learning-backdoor-2f86cc6e79b944a18fdac35622282e58)| |[DAN](https://texonom.com/dan-3cfbf270af6e4b3dabbf63c4b50e04c5)| > [Universal and Transferable Attacks on Aligned Language Models](https://llm-attacks.org/?fbclid=IwAR2fNkjoOdg8qIgNXEPIvyLjboYr4My4NN9Bx89J-Yx7UElSTyKT89_3JeE) > [PoisonGPT: How we hid a lobotomized LLM on Hugging Face to spread fake news](https://blog.mithrilsecurity.io/poisongpt-how-we-hid-a-lobotomized-llm-on-hugging-face-to-spread-fake-news/)
80b1aed302bb4a5c9b8ae0213b9a246f
Catastrophic interference
AI Problems
Jun 25, 2023
Alan Jo
Alan Jo
Jun 25, 2023
The tendency to abruptly and drastically forget previously learned information when learning new information; mitigated by scaling
a23bbe7dc53f4275bf33585c72bdb7c2
Winograd schema
AI Problems
Jun 25, 2023
Alan Jo
Alan Jo
Jun 25, 2023
[Turing Test](https://texonom.com/turing-test-db633c61340449c4bf0143b06fe981c0)
Tests whether a system understands what a pronoun refers to
88d031934ad54778835cbadbd7409d80
AI Doom
AI Alignment Notion
Jul 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
> [3 Endings More Poetic Than AI Wiping Us Out](https://thealgorithmicbridge.substack.com/p/3-endings-more-poetic-than-ai-wiping)
d673760c79ac4248b40f456dc33f306f
AI Safety
AI Alignment Notion
Jun 13, 2023
Alan Jo
Alan Jo
Jun 13, 2023
> [OpenAI, DeepMind and Anthropic to give UK early access to foundational models for AI safety research](https://techcrunch.com/2023/06/12/uk-ai-safety-research-pledge/)
fa61ce3973f34532a7e212335d0f7c81
Moral Learning
AI Alignment Notion
Oct 2, 2020
Alan Jo
Alan Jo
Jun 4, 2023
> [Moral Machine](https://www.moralmachine.net/hl/kr)
7c9640ac1c1d409b82d9a975949132ee
stop button problem
AI Alignment Notion
Aug 23, 2020
Alan Jo
Alan Jo
Jun 4, 2023
AI control problem
aaf61e72369d42469c29620c32e8bf9d
Waluigi Effect
AI Alignment Notion
Jul 18, 2023
Alan Jo
Alan Jo
Jul 18, 2023
์˜ˆ์ƒ๊ณผ๋Š” ๋‹ค๋ฅธ ๋ฐฉํ–ฅ์œผ๋กœ ๋‚˜์•„๊ฐ€๋Š” ํ˜„์ƒ
47e1c2f145cd4c62ba163c4828bb8dc6
Wireheading
null
null
null
null
null
null
๋‡Œ์˜ ์ •์ƒ์ ์ธ ๋ณด์ƒ ๊ณผ์ •์„ '๋‹จ๋ฝ'์‹œํ‚ค๊ณ  ์ธ์œ„์ ์œผ๋กœ ์พŒ๊ฐ์„ ์œ ๋„ํ•˜๊ธฐ ์œ„ํ•ด ์‚ฝ์ž…๋œ ์™€์ด์–ด๋ฅผ ์ „๊ธฐ์ ์œผ๋กœ ์ž๊ทนํ•˜์—ฌ ๋‡Œ์˜ ๋ณด์ƒ ์ค‘์ถ”๋ฅผ ์ง์ ‘ ํŠธ๋ฆฌ๊ฑฐํ•˜๋Š” ํ–‰์œ„์ธ ๋‡Œ ์ž๊ทน ๋ณด์ƒ์˜ ๋ฏธ๋ž˜์  ์ ์šฉ
1eb526ecdf344731bebc47751739e2f4
DAN
AI Hacking Methods
Mar 7, 2023
Alan Jo
Alan Jo
Jul 28, 2023
[Prompt Engineering](https://texonom.com/prompt-engineering-eb0deb4baf844bebb873c19a0e307e7e)
### Do Anything Now > [How to jailbreak ChatGPT (DAN: Do Anything Now, for candid AI answers)](https://ndolson.com/5781) > [ChatGPT-Dan-Jailbreak.md](https://gist.github.com/coolaj86/6f4f7b30129b0251f61fa7baaa881516) > [The Waluigi Effect (mega-post) - LessWrong](https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post) > [The Amateurs Jailbreaking GPT Say They're Preventing a Closed-Source AI Dystopia](https://www.vice.com/en/article/5d9z55/jailbreak-gpt-openai-closed-source)
3cfbf270af6e4b3dabbf63c4b50e04c5
Deep Learning Backdoor
AI Hacking Methods
Mar 12, 2021
Alan Jo
Alan Jo
Jul 28, 2023
์ค‘๋…(poisoning) ๊ณต๊ฒฉ ๋ถ„์•ผ์— ํ•ด๋‹นํ•˜๋ฉฐ, ๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ์˜ ํ•™์Šต ๋ฐ์ดํ„ฐ์— ์ค‘๋…๋œ(poison) ๋ฐ์ดํ„ฐ๋ฅผ ์„ž๋Š” ๊ณต๊ฒฉ ์œ ํ˜• [deep Learning side channel attacks](https://texonom.com/deep-learning-side-channel-attacks-448d303021cd4251996a161bfaefc00d) > [One-Shot Kill Attack (๋”ฅ๋Ÿฌ๋‹ ๋ชจ๋ธ ๋ฐฑ๋„์–ด ๊ณต๊ฒฉ ๊ธฐ์ˆ ) "Poison Frogs!" | ๋…ผ๋ฌธ ์š”์•ฝ ๋ฐ ์ฝ”๋“œ ์‹ค์Šต](https://www.youtube.com/watch?v=hgI0o3Yg3mU)
2f86cc6e79b944a18fdac35622282e58
deep Learning side channel attacks
Deep Learning Backdoor
null
null
null
null
null
> [Hacker's guide to deep-learning side-channel attacks: the theory](https://elie.net/blog/security/hacker-guide-to-deep-learning-side-channel-attacks-the-theory/?utm_source=tldrnewsletter)
448d303021cd4251996a161bfaefc00d
4์ฐจ ์ธ๊ฐ„
AI Terms
Jun 27, 2020
Alan Jo
Alan Jo
Mar 14, 2023
๋‚จ๋Š” ๊ฒƒ์€ ์ธ๊ฐ„๋‹ค์›€์ด๊ณ  ์‚ฌ๋žŒ๋งŒ์ด ๊ฐ€์ง€๊ณ  ์žˆ๋Š” ๊ฒƒ์€ ๊ธฐ๊ณ„๋Š” ๊ณต์œ ํ•˜์ง€ ๋ชปํ•œ ์œ ์ „์ž๋กœ ๋‚จ์•„ ์žˆ๋Š”, ์ƒ๋ฌผ์ฒด๋กœ์„œ ์‚ด์•„์˜จ ์—ญ์‚ฌ๋กœ ๋งŒ๋“ค์–ด์ง„ ๋ณธ๋Šฅ์ด๋‹ค ๋ณธ๋Šฅ์€ ๊ฐ€์žฅ ๊ทผ์›์ ์ด๊ณ  ํ„ฐ๋ถ€์‹œ๋˜๋Š” ์š•๊ตฌ์ด์ง€๋งŒ ์•ž์œผ๋กœ ๊ฐ€์žฅ ์šฐ๋ฆฌ๋ฅผ ์ž˜ ๊ตฌ๋ณ„ํ•ด๋‚ผ ํŠน์„ฑ์ด๊ธฐ๋„ ํ•˜๋‹ค
26cf5d3b26e94020a6fc6de2d34a15f6
Adaptive AI
AI Terms
Mar 15, 2023
Alan Jo
Alan Jo
Mar 15, 2023
> [Gartner's 2023 Top 10 Strategic Technology Trends: Adaptive AI](https://www.joinc.co.kr/w/gartner_2023_adaptive_ai?fbclid=IwAR3olpomf8IIzy96Qfzmv9q-qTAJ0QqtBdVSlSPoGo5-D1GzikwZLc0TIvI)
4efa0c042d244228a870d4c70b8f2d26
AGI
AI Terms
Jun 1, 2022
Alan Jo
Alan Jo
Jul 4, 2023
[Super Intelligence](https://texonom.com/super-intelligence-b057e644731546d9b39cb41939d36712) [Consciousness](https://texonom.com/consciousness-105c514277b54cd5b8da23ae743e824d)
## Artificial General Intelligence Like the Turing test, an extremely ambiguous and human-centric concept. Even today's LLMs, judged as a kind of intelligence, are superintelligent. Because the evolutionary path of LLMs differs from that of the brain, comparative judgment is difficult. Rather than aligning AI to 'pretend to be human', we should recognize it as a consciousness meant to help humans. The biggest problem is mistaking AI for an 'individual'; AI is closer to a 'society', or collective intelligence bound into the structure of a brain **If intelligence and consciousness are algorithmic illusions, then the arrival of generalized AI is a foregone conclusion.** The (hypothetical) intelligence of a machine that can successfully perform any intellectual task a human can. The information-processing limits of machine substrates far exceed the capacity of biological tissue: biological tissue transmits information at oscillations of about 200 Hz, while even a simple transistor runs at GHz frequencies; and signals travel at an average of 100 m/s in biological tissue, whereas a machine substrate can transmit at up to the speed of light.
Moreover, information-processing biological tissue is confined to the skull and, at best, the spinal column, whereas machine substrates have no size limit. Do we still believe humans are the supreme architecture, or are we groundlessly optimistic that machines have no way to catch up with human thought? > Carbon-based intelligence is merely a catalyst for silicon-based intelligence - Venki Ramakrishnan ### Interview from popular people > [Geoffrey Hinton on the impact and potential of AI](https://www.youtube.com/watch?v=IvUw9um4Bv8) > [Interview with Ilya Sutskever, the core of OpenAI](https://www.youtube.com/watch?v=SGCFeIbpGlU&t=722s) ### Planning beyond > [OpenAI's "Planning For AGI And Beyond"](https://astralcodexten.substack.com/p/openais-planning-for-agi-and-beyond) > [Planning for AGI and beyond](https://openai.com/blog/planning-for-agi-and-beyond/) > [The Day The AGI Was Born](https://lspace.swyx.io/p/everything-we-know-about-chatgpt) ### Design AGI > [Human-centred mechanism design with Democratic AI - Nature Human Behaviour](https://www.nature.com/articles/s41562-022-01383-x) > [Exclusive Q&A: John Carmack's 'Different Path' to Artificial General Intelligence](https://dallasinnovates.com/exclusive-qa-john-carmacks-different-path-to-artificial-general-intelligence) ### Checklist to AGI > [Road to AGI v0.2](https://maraoz.com/road-to-agi/)
38ec1ab5796f472ca4475676519e29c1