PPO
Reinforcement Learnings
Jul 15, 2023
Alan Jo
Alan Jo
Jul 15, 2023
### Proximal Policy Optimization Balances ease of implementation, sample complexity, and tuning effort. A model-free reinforcement learning algorithm developed by OpenAI in 2017. Instead of assigning values to state-action pairs, it searches the policy space directly, distinguishing the old policy that produced the data from the new policy of the network currently being trained [PPO2](https://texonom.com/ppo2-378944bb3b8848a185c20fc529c390d2) > [Proximal Policy Optimization Algorithms](https://arxiv.org/abs/1707.06347) > [Proximal Policy Optimization](https://openai.com/research/openai-baselines-ppo)
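For reference, the clipped surrogate objective from the linked paper, where $r_t(\theta)$ is the new-to-old policy probability ratio and $\hat{A}_t$ an advantage estimate: $$L^{CLIP}(\theta) = \mathbb{E}_t\Big[\min\big(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta),\,1-\epsilon,\,1+\epsilon)\,\hat{A}_t\big)\Big],\qquad r_t(\theta)=\frac{\pi_\theta(a_t\mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t\mid s_t)}$$ Clipping the ratio to $[1-\epsilon,\,1+\epsilon]$ keeps the new policy near the old one, which is what buys the ease-of-implementation and stability trade-off mentioned above.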
87ce8ebe81f84ac1a7617b1f6def9e26
Q-Learning
Reinforcement Learnings
Nov 5, 2019
Alan Jo
Seong-lae Cho
Aug 31, 2023
[SARSA](https://texonom.com/sarsa-e5d847a4fb6e41cdad5b9ffb6e974a10)
### Approximate Q-Learning Off-policy temporal-difference control: learns independently of the policy the agent currently acts with ### Reinforce > [Reinforcement learning basics (Q-learning)](https://bluediary8.tistory.com/18)
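For reference, the standard off-policy update this describes; the $\max$ over next actions is what decouples learning from the behavior policy: $$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\big[r_{t+1} + \gamma \max_{a'} Q(s_{t+1},a') - Q(s_t,a_t)\big]$$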
6fb81e4a53ab4e3097784cde99c8c038
RLHF
Reinforcement Learnings
Apr 30, 2023
Alan Jo
Alan Jo
Jul 15, 2023
[AI Alignment](https://texonom.com/ai-alignment-f676f1a29ffd45e19b3d170afa4f2244) [Active Learning](https://texonom.com/active-learning-85d42ac892e84e5ba4fd2727f1791f65)
## Reinforcement learning from human feedback ### Limitation It still cannot improve the fundamental problems of LMs, model size and hallucination. Scaling issues; the pipeline is very complex > [What is RLHF?](https://velog.io/@nellcome/RLHF๋ž€) > [Reinforcement learning from human feedback](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback)
4b184f9c9e8b4c7a8861fb6374e91aa6
RRHF
Reinforcement Learnings
Jul 15, 2023
Alan Jo
Alan Jo
Jul 15, 2023
[RLHF](https://texonom.com/rlhf-4b184f9c9e8b4c7a8861fb6374e91aa6)
### **R**ank **R**esponse to align **H**uman **F**eedback Efficiently aligns language model output probabilities with human preferences; as robust as fine-tuning, and it needs only 1 to 2 models during tuning
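A sketch of the ranking-loss idea behind this, following the paper's notation: with $p_i$ the model's length-normalized log-probability of response $i$ and $r_i$ its reward score, responses the reward model ranks higher are pushed toward higher model likelihood: $$L_{\mathrm{rank}} = \sum_{r_i < r_j} \max(0,\ p_i - p_j)$$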
073c6bdf48444a1b9acad9e65057f3d0
SARSA
Reinforcement Learnings
Jul 18, 2023
Alan Jo
Alan Jo
Aug 31, 2023
## state-action-reward-state-action Repeats the cycle of collecting samples with a greedy policy based on the current **Q-function** and updating the Q-function for the visited samples. Almost the start of reinforcement learning, coming after [Policy Iteration](https://texonom.com/policy-iteration-55cbe9219f8a48fbb850f64b677e847a) In GPI the policy is evaluated by the Bellman equation; the temporal-difference method brings in the approach of value iteration **Judging from the Q-function of the current state means no model of the environment is needed** In temporal-difference control, actions are selected through a **greedy policy over the Q-function** For an agent early in training, a greedy policy **is likely to lock in wrong learning**, so an **epsilon-greedy policy** is used ### Limitation The agent can **get stuck in particular states** Learning from the actions it actually takes is **on-policy temporal-difference control**, hence [Q-Learning](https://texonom.com/q-learning-6fb81e4a53ab4e3097784cde99c8c038) > [(7) SARSA and Q-Learning](https://jang-inspiration.com/sarsa-qlearning#132c2000aab74666a2273d8b2d71cdac)
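The on-policy update that gives SARSA its name; the target uses the action $a_{t+1}$ actually selected by the current epsilon-greedy policy rather than a max: $$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha\big[r_{t+1} + \gamma\, Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)\big]$$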
e5d847a4fb6e41cdad5b9ffb6e974a10
TRPO
Reinforcement Learnings
Jul 15, 2023
Alan Jo
Alan Jo
Jul 15, 2023
### Trust Region Policy Optimization
23c0f917124d419cbfe478d20b804276
PPO2
PPO
null
null
null
null
null
> [ppo2.py (OpenAI baselines)](https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py)
378944bb3b8848a185c20fc529c390d2
ML Compiler Optimization
Machine Learning Techniques
Jul 9, 2022
Alan Jo
Alan Jo
Apr 25, 2023
[Compiler Optimization](https://texonom.com/compiler-optimization-3eee7067387b45d7875df50f4473ad18) [Parallel Training](https://texonom.com/parallel-training-4a8896bc837b4dddb47c3700b715cdc8)
[relax](https://github.com/mlc-ai/relax) ### ML Compiler Optimization Tools |Title| |:-:| |[XLA](https://texonom.com/xla-4f75dff43dc0451aa5ca92e9218a3028)| |[MLGO](https://texonom.com/mlgo-60d88727f0b944ec9ce62d254c2e8a76)| |[Hidet](https://texonom.com/hidet-7b706024cb3040269f71e47b5b87d2b2)|
011e7bd0ba8f417bb111ec5ea2171c8e
Parallel Training
Machine Learning Techniques
Mar 15, 2022
Alan Jo
Alan Jo
Apr 25, 2023
[ML Compiler Optimization](https://texonom.com/ml-compiler-optimization-011e7bd0ba8f417bb111ec5ea2171c8e)
### data parallelism or model parallelism - In data parallelism, the data is split into multiple parts - In model parallelism, different parts of the model are processed by separate processors (see the sketch after this entry) ### Parallel Training Notion |Title| |:-:| |[Model Parallelism](https://texonom.com/model-parallelism-76dd813ada7b4e50b645af2f05821d48)| |[Data Parallelism](https://texonom.com/data-parallelism-8e90f1c595a84dad9f4e921e74f86ba6)| ### Parallel Training Usages |Title| |:-:| |[Parallel Learning Tool](https://texonom.com/parallel-learning-tool-2a9741aa76c14c16a1240f3422f11421)| |[Parallel Training Example](https://texonom.com/parallel-training-example-3c16ddf97fbe43359e7da3dbd3ce96ee)| ![https://xiandong79.github.io](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F74479c70-d665-4f9e-a0df-dda36042a7ee%2FUntitled.png?table=block&id=cf76f928-10e0-4fb6-bb03-ec8afb321aa7&cache=v2) > [What is distributed training of deep learning models? (Data parallelism and Model parallelism)](https://lifeisenjoyable.tistory.com/21)
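A minimal sketch of both approaches in PyTorch; layer sizes are illustrative, and `nn.DataParallel` stands in as the simplest data-parallel wrapper (production setups usually prefer `DistributedDataParallel`):

```python
# Data parallelism: replicate the model per GPU, split each batch
# across replicas, average the gradients. (Illustrative sizes.)
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)  # splits the batch dimension over GPUs
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
y = model(torch.randn(64, 512, device=device))  # each GPU sees 64/N rows
print(y.shape)

# Model parallelism in its simplest form: place different layers on
# different devices and move activations between them by hand
# (assumes two visible CUDA devices).
if torch.cuda.device_count() > 1:
    part1 = nn.Linear(512, 1024).to("cuda:0")
    part2 = nn.Linear(1024, 10).to("cuda:1")
    h = part1(torch.randn(64, 512, device="cuda:0"))
    out = part2(h.to("cuda:1"))
```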
4a8896bc837b4dddb47c3700b715cdc8
Quantum Machine Learning
Machine Learning Techniques
Mar 9, 2022
Alan Jo
Alan Jo
Mar 5, 2023
### Quantum Machine Learnings |Title| |:-:| > [Spooky Action Could Help Boost Quantum Machine Learning](https://spectrum.ieee.org/quantum-machine-learning)
8f5276045f4d43b8b96f3b4ec6646f66
Weight Initialization
Machine Learning Techniques
Jun 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
- Initialize weights to small random numbers - initialize biases to zero or a small nonzero value ### Weight Initialization Usages |Title| |:-:| |[He initialization](https://texonom.com/he-initialization-a255fdaeec8e485faf3215a28ed5fdb9)| |[Xavier Initialization](https://texonom.com/xavier-initialization-40045bfdf72343aea3a234214145f9dd)| > [0025 Initialization - Deepest Documentation](https://deepestdocs.readthedocs.io/en/latest/002_deep_learning_part_1/0025/)
6cfc10eb06f948528aa76a9814a9ac85
Hidet
ML Compiler Optimization Tools
May 1, 2023
Alan Jo
Alan Jo
May 1, 2023
[Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686)
> [Introducing Hidet](https://pytorch.org/blog/introducing-hidet)
7b706024cb3040269f71e47b5b87d2b2
MLGO
ML Compiler Optimization Tools
Jul 9, 2022
Alan Jo
Alan Jo
Mar 11, 2023
[LLVM](https://texonom.com/llvm-5dc6acb10b5244a2af349319ef87c797) [ml-compiler-opt](https://github.com/google/ml-compiler-opt)
### Infrastructure for Machine Learning Guided Optimization > [MLGO: A Machine Learning Framework for Compiler Optimization](https://ai.googleblog.com/2022/07/mlgo-machine-learning-framework-for.html)
60d88727f0b944ec9ce62d254c2e8a76
XLA
ML Compiler Optimization Tools
Mar 11, 2023
Alan Jo
Alan Jo
Mar 11, 2023
[xla](https://github.com/openxla/xla)
- pytorch - tensorflow - jax
4f75dff43dc0451aa5ca92e9218a3028
Data Parallelism
Parallel Training Notion
Apr 25, 2023
Alan Jo
Alan Jo
Apr 25, 2023
Splits the training data across multiple GPUs for training ### Data Parallelism Usages |Title| |:-:|
8e90f1c595a84dad9f4e921e74f86ba6
Model Parallelism
Parallel Training Notion
Apr 25, 2023
Alan Jo
Alan Jo
Apr 25, 2023
๋ชจ๋ธ์„ ์—ฌ๋Ÿฌ GPU์— ๋‚˜๋ˆ„๋Š” ### Model Parallelism Usages |Title| |:-:|
76dd813ada7b4e50b645af2f05821d48
Parallel Learning Tool
Parallel Training Usages
Apr 25, 2023
Alan Jo
Alan Jo
Apr 25, 2023
### Parallel Training System |Title| |:-:| |[Colossal AI](https://texonom.com/colossal-ai-031c5480b24249ce903fea4e0f8d435c)| |[Megatron LM](https://texonom.com/megatrom-lm-87be23e9b623465395a0d6a4e94470ae)| |[DeepSpeed](https://texonom.com/deepspeed-3866b23c00eb4d529de6e33dc48ffae7)|
2a9741aa76c14c16a1240f3422f11421
Parallel Training Example
Parallel Training Usages
Apr 25, 2023
Alan Jo
Alan Jo
Apr 25, 2023
### Parallel Learning Examples |Title| |:-:| |[gpt-neox](https://texonom.com/48f191fcffa04c068978381a78b4ca8d)|
3c16ddf97fbe43359e7da3dbd3ce96ee
Colossal AI
Parallel Training System
Mar 15, 2022
Alan Jo
Alan Jo
Apr 25, 2023
[ColossalAI](https://github.com/hpcaitech/ColossalAI)
> [Colossal-AI](https://colossalai.org/)
031c5480b24249ce903fea4e0f8d435c
DeepSpeed
Parallel Training System
Feb 19, 2021
Alan Jo
Alan Jo
Apr 25, 2023
[DeepSpeed](https://github.com/microsoft/DeepSpeed)
### Pipeline Parallelism > [DeepSpeed Pipeline Parallelism](https://velog.io/@nawnoes/DeepSpeed-Pipeline-Parallelism) > [PyTorch Lightning DeepSpeed](https://velog.io/@nawnoes/PyTorch-Lightning-DeepSpeed)
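A minimal sketch of how this looks with DeepSpeed's pipeline API, assuming a multi-GPU run under the `deepspeed` launcher; the layer list and stage count are illustrative:

```python
# Pipeline parallelism: partition a list of layers into stages placed
# on different GPUs; DeepSpeed streams micro-batches between stages.
import torch.nn as nn
from deepspeed.pipe import PipelineModule

layers = [nn.Linear(512, 512) for _ in range(8)]
model = PipelineModule(layers=layers, num_stages=2)  # 4 layers per stage
# Training then proceeds via deepspeed.initialize(...) with a config
# that sets train_batch_size and the per-GPU micro-batch size.
```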
3866b23c00eb4d529de6e33dc48ffae7
Megatron LM
Parallel Training System
Apr 25, 2023
Alan Jo
Alan Jo
Apr 25, 2023
[Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
87be23e9b623465395a0d6a4e94470ae
[gpt-neox](https://github.com/EleutherAI/gpt-neox)
Parallel Learning Examples
Apr 25, 2023
Alan Jo
Alan Jo
Apr 25, 2023
[Megatron LM](https://texonom.com/megatrom-lm-87be23e9b623465395a0d6a4e94470ae) [DeepSpeed](https://texonom.com/deepspeed-3866b23c00eb4d529de6e33dc48ffae7)
48f191fcffa04c068978381a78b4ca8d
**He initialization**
Weight Initialization Usages
Jul 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
[ReLU](https://texonom.com/relu-e582549804da48b893758895e446ffb9)
ReLU with He initialization is the common default > [Delving Deep into Rectifiers (He et al., ICCV 2015)](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf)
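The rule from the paper: draw weights with variance scaled to the layer's fan-in, $$W \sim \mathcal{N}\!\left(0,\ \frac{2}{n_{\mathrm{in}}}\right)$$ where the factor 2 compensates for ReLU zeroing out half of the activations.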
a255fdaeec8e485faf3215a28ed5fdb9
**Xavier Initialization**
Weight Initialization Usages
Jul 6, 2023
Alan Jo
Alan Jo
Jul 6, 2023
## Glorot Initialization Uses the neuron counts of the previous layer and the next layer. Balances the gradient variance across layers. Performs well with S-shaped activation functions, but poorly when used with ReLU ### Uniform Distribution ### Normal distribution > [07-07 Gradient Vanishing and Exploding](https://wikidocs.net/61375)
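The two standard forms, with $n_{\mathrm{in}}$ and $n_{\mathrm{out}}$ the widths of the previous and next layers: $$W \sim U\!\left(-\sqrt{\tfrac{6}{n_{\mathrm{in}}+n_{\mathrm{out}}}},\ \sqrt{\tfrac{6}{n_{\mathrm{in}}+n_{\mathrm{out}}}}\right) \quad\text{or}\quad W \sim \mathcal{N}\!\left(0,\ \tfrac{2}{n_{\mathrm{in}}+n_{\mathrm{out}}}\right)$$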
40045bfdf72343aea3a234214145f9dd
Function Transformers
Machine Learning Tools
Jun 13, 2021
Alan Jo
Alan Jo
Jan 9, 2023
### Composable transformations > [Transformers](https://huggingface.co/transformers/)
c4396d81a01a4425ba0c6702501c911a
ML Accelerator
Machine Learning Tools
Jun 1, 2022
Alan Jo
Alan Jo
Jun 1, 2022
### ML Accelerators |Title| |:-:|
0df3a271237b4e63a342aa7ce704870d
ML Analyze Tool
Machine Learning Tools
Aug 5, 2021
Alan Jo
Alan Jo
Apr 19, 2022
### ML Analyze Tools |Title| |:-:| |[Uptrain](https://texonom.com/uptrain-a12b6a1c3410434ea318cb76a2f99a98)| |[Evidently](https://texonom.com/evidently-0500b048a7c34d8e91e18207e43cefd1)|
5aeb48a9850245eb97e20ec56448a15f
ML Container Tool
Machine Learning Tools
May 11, 2022
Alan Jo
Alan Jo
May 11, 2022
### ML Container Tools |Title| |:-:| |[Cog](https://texonom.com/cog-1b9175a7b63c43b5b6cd69a223f9d99d)|
14223d2021c748268fba092dee1fa357
ML Feature Store
Machine Learning Tools
Apr 19, 2022
Alan Jo
Alan Jo
Apr 19, 2022
### ML Feature Stores |Title| |:-:| |[Feathr](https://texonom.com/feathr-5ff843203b4745df949b956391bf9423)|
f53069a7083b4719b1e5fab18a5a9bbd
ML Platform
Machine Learning Tools
Sep 8, 2021
Alan Jo
Alan Jo
Aug 4, 2022
[](https://texonom.com/fee31eaaf53a45c28b0b305c3874b856)
### ML Platforms |Title| |:-:| |[diffgram](https://texonom.com/diffgram-99e0ffaf73f84615a556b6bbe71c4572)| |[Wandb](https://texonom.com/wandb-2219d228212940068aa5a604af7d5dbc)|
9d4142db8db042ed9e4a79085348cc55
Evidently
ML Analyze Tools
Aug 5, 2021
null
null
null
null
> [GitHub - evidentlyai/evidently: Interactive reports to analyze machine learning models during validation or production monitoring.](https://github.com/evidentlyai/evidently?ref=producthunt?utm_source=tldrnewsletter)
0500b048a7c34d8e91e18207e43cefd1
Uptrain
ML Analyze Tools
Mar 9, 2023
null
null
null
null
[uptrain](https://github.com/uptrain-ai/uptrain)
a12b6a1c3410434ea318cb76a2f99a98
Cog
ML Container Tools
May 11, 2022
Alan Jo
Alan Jo
May 11, 2022
[cog](https://github.com/replicate/cog)
1b9175a7b63c43b5b6cd69a223f9d99d
Feathr
ML Feature Stores
Apr 19, 2022
Alan Jo
Alan Jo
Apr 19, 2022
[LinkedIn](https://texonom.com/linkedin-1c0eb8ae1ca346a388e79c15b34355dc) [feathr](https://github.com/linkedin/feathr)
> [Open sourcing Feathr - LinkedIn's feature store for productive machine learning](https://engineering.linkedin.com/blog/2022/open-sourcing-feathr-linkedin-s-feature-store-for-productive-m) ### Template Gallery |Title| |:-:| |[Template Page](https://texonom.com/template-page-b6dd128730be402fbf47e98d1a81c5f2)|
5ff843203b4745df949b956391bf9423
Template Page
Template Gallery
Apr 19, 2022
Alan Jo
Alan Jo
Apr 19, 2022
b6dd128730be402fbf47e98d1a81c5f2
diffgram
ML Platforms
Sep 8, 2021
null
null
null
> [GitHub - diffgram/diffgram: Complete training data platform for machine learning delivered as a single application.](https://github.com/diffgram/diffgram)
99e0ffaf73f84615a556b6bbe71c4572
Wandb
ML Platforms
Aug 4, 2022
null
null
null
[wandb](https://github.com/wandb/wandb)
2219d228212940068aa5a604af7d5dbc
AdaBoost
ML Meta Algorithms
Oct 6, 2021
Alan Jo
Alan Jo
Oct 6, 2021
Used in combination with many other kinds of learning algorithms to improve performance
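A minimal sketch of that combination with scikit-learn (recent versions use the `estimator` keyword); dataset and parameters are illustrative:

```python
# AdaBoost: boost many weak learners (depth-1 stumps) into a stronger
# ensemble by reweighting misclassified samples each round.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=1),  # the weak learner
    n_estimators=100,
    random_state=0,
).fit(X, y)
print(clf.score(X, y))
```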
79feac6204384e7299e08e3cfa40d05e
Deep Learning
Neural Network Notion
Nov 5, 2019
Alan Jo
Seong-lae Cho
Jul 5, 2023
[Neuroscience ](https://texonom.com/neuroscience-b45d9f638a2b4330906556c402307925)
### Neural Network based Machine Learning method Composition of differentiable functions Brain Algorithm + [Neural Network](https://texonom.com/neural-network-86f54f9f1de848c1a29c56c24f7d5094) + [Big Data](https://texonom.com/big-data-236ec9f0ed844f4d8a5ca3236dfa442c) Limits - explainability, fairness, generalizability, causality The main feature of a neural network is that it can find non-heuristic feature representations ### Deep Learning Notion |Title| |:-:| |[Deep Learning Math](https://texonom.com/deep-learning-math-57c204edef3042568bbd0d5268b877fd)| |[Deep Learning Network](https://texonom.com/deep-learning-network-f368faf8fa634699aeda503d01c193f0)| |[End-to-end Deep Learning](https://texonom.com/end-to-end-deep-learning-16bd62f447a144f1b251acf25f1f8789)| ### Deep Learning Usages |Title| |:-:| |[Deep Learning Tool](https://texonom.com/deep-learning-tool-a14ea6f4574342ef974443634e27c6ce)| |[Deep Learning Compiler](https://texonom.com/deep-learning-compiler-7d79af9683764b6b983793c1856578c6)| |[Sentiment Neuron](https://texonom.com/sentiment-neuron-2e44e9754b894534af6c121b6d6074d6)| |[Learn Deep Learning](https://texonom.com/learn-deep-learning-4382083fb54d4705984e7f45a9af2d86)| ### Interview from popular people > [Geoffrey Hinton on the influence and potential of AI](https://www.youtube.com/watch?v=IvUw9um4Bv8) > [Interview with Ilya Sutskever, the core of OpenAI](https://www.youtube.com/watch?v=SGCFeIbpGlU&t=722s)
7d3c8b9ce05b49cf9eed92dbcdc80cfd
Neural Network History
Neural Network Notion
May 11, 2023
Alan Jo
Alan Jo
Jun 7, 2023
> [[Miracle Letter] The analog computer, a must-have of the AI era!!](https://stibee.com/api/v1.0/emails/share/KeeaUnUVO5o8muoWpT9bjEtpsL5hny0=)
2127bdb56da54a659e570f47704e40b1
Neural Network Structure
Neural Network Notion
May 11, 2023
Alan Jo
Alan Jo
Jul 4, 2023
### Neural Network Components |Title| |:-:| |[Activation Function](https://texonom.com/activation-function-8e52ee5f83a244d88abeeee3fb9497a8)| |[Neural Network Layer](https://texonom.com/neural-network-layer-e10ef2afe1954cf6b909f8aa40077393)| |[Forward Forward Algorithm](https://texonom.com/forward-forward-algorithm-6c989313e382466bba02d15c412fb17f)| |[skip connections](https://texonom.com/skip-connections-d7ad187f3487468db6eea278f0236a22)| ### Neural Networks |Title| |:-:| |[Perceptron](https://texonom.com/perceptron-1deb66f486d54c93bb928d8afba1864c)| |[FFNN](https://texonom.com/ffnn-89ecee87d8b7482e86995950db90eb31)| |[CNN](https://texonom.com/cnn-002bf81a77bc40d1858740d26b61d97b)| |[RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16)| |[ANN](https://texonom.com/ann-d4232205ecf9463c95a911d179c87a84)| |[SNN](https://texonom.com/snn-6cf69239a87e4df9a32b3494862374e4)| |[GNN](https://texonom.com/gnn-58adb81cb2b649d9af19019182960bb2)|
400bbea8029c4eb1a97c0dd063735551
Deep Learning Math
Deep Learning Notion
Nov 5, 2019
Alan Jo
Seong-lae Cho
Mar 26, 2023
### Sub-field of ML: learning representations of data Existing ML uses manually designed features - often over-specified and incomplete - they take a long time to design and validate DL - learned features are easy to adapt and fast - deep learning provides a very flexible, (almost?) universal framework - effective end-to-end joint system learning > speech recognition: not good > visual perception: good > question answering: good > Example: a 3-4-2 network has 4 + 2 = 6 neurons (not counting inputs) - there is the same number of biases (one per resulting node) > [3 x 4] + [4 x 2] = 20 weights > > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F5986fe3d-198a-4608-959f-6bd5bc738065%2FUntitled.png?table=block&id=8f6601d1-344a-4488-b982-926c53650f7f&cache=v2) > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F4110b2cb-bf0f-4221-b2c9-cc8c6f2c8786%2FUntitled.png?table=block&id=0e436d3f-24b4-4328-bc11-1e91e5aa5747&cache=v2) ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Ff7503f3f-0f31-446e-b91b-32842c7f119d%2FUntitled.png?table=block&id=4e369bcf-c0fd-4d82-9523-8156a1c29aac&cache=v2) Optimize (min. or max.) the **objective/cost function $J(\theta)$** Generate an **error signal** that measures the difference between predictions and target values A word represented as raw bytes, e.g. 74 72 65 65 ("tree") in x0, is poor, so we need a new representation of words ## WordNet WordNet contains lists of synonyms/hypernyms ← built with human resources A word's meaning is given by the words that frequently appear close by ### Context: the set of words that appear nearby in a fixed-size window (e.g. 5 words before/after) A co-occurrence matrix has dimension = number of words × number of words; O(n^2) memory is poor, so we use dimensionality reduction (PCA, SVD) ← almost all values are 0, so we can → word embedding Now similarity can be judged with vector products (the representation becomes a dense vector) - starting from a one-hot vector > We can even build embeddings that bridge different languages > vector similarity (cosine) > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F30c1772d-b6df-4eed-ad3f-2777aebcef9a%2FUntitled.png?table=block&id=e97e701f-21dd-4ab7-8e3e-d039ad4d6d08&cache=v2) # Language Modeling: Models of P(text) - score sentences > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F3c48b3c2-5971-4a4d-8c78-896f6b721be1%2FUntitled.png?table=block&id=70af901f-dd8c-4e16-a2b5-408adf172390&cache=v2) > adding everything up is one example (linear features) > usually softmax the sum > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F66fd7479-7a77-416a-9684-a30972d659e4%2FUntitled.png?table=block&id=4f96ddb9-9fa4-48df-b29f-2012e73945f1&cache=v2) > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F7bcd216a-04a9-411b-9c95-c2c8728c44e7%2FUntitled.png?table=block&id=2b6a582b-5666-47cd-8fa5-0185c4b045bb&cache=v2) softmax sits just before the output ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F52fd4795-45f9-4e20-8f4a-b9a18df4d4d7%2FUntitled.png?table=block&id=5a5b7c77-f67f-486b-b148-0265844a4461&cache=v2) # 1. P(text) - linear model ## CBOW Predict a word based on the sum of the surrounding embeddings ## Skip-gram Predict each word in the context given the word [very good, good, neutral, bad, very bad] Linear models can't learn feature combinations # 2.
Models of P(label | text) ## BOW Each word has its own 5 elements corresponding to the labels ## DeepCBOW Combination features - each vector has "features" (e.g. is this an animate object? is this a positive word? etc.) # Convolutional Networks ### CNN - pooling Weak as a long-distance feature extractor; doesn't have a holistic view of the sentence # RNN Good as a long-distance feature extractor Weaknesses - indirect passing of information makes credit assignment more difficult - can be slow, due to incremental processing ## Modeling Sentences w/ n-grams # P(text | text) > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F547f55b2-840c-4e50-80a0-8b936f4a64c3%2FUntitled.png?table=block&id=7fa3ac7f-fd55-4575-b6b1-afbae72438fe&cache=v2) > Conditional Language Models > ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fba8eaa2d-8c36-4f5b-b222-e60b091aa063%2FUntitled.png?table=block&id=6f689ac1-dfc7-45ac-9f72-5704d81f6dcf&cache=v2) > Calculating the Probability of a Sentence > RNN is frequently used in language modeling since RNN can capture long-distance dependencies
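A tiny illustration of the dense-vector payoff described above: once words are embeddings, similarity is just a normalized dot product (the vectors here are invented for the example):

```python
# Cosine similarity between word embeddings (toy 4-d vectors,
# invented for illustration; real embeddings are learned).
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

tree = np.array([0.8, 0.1, 0.0, 0.3])
oak = np.array([0.7, 0.2, 0.1, 0.4])
car = np.array([0.0, 0.9, 0.8, 0.1])

print(cosine(tree, oak))  # high: related words point the same way
print(cosine(tree, car))  # low: unrelated words are near-orthogonal
```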
57c204edef3042568bbd0d5268b877fd
Deep Learning Network
Deep Learning Notion
Oct 6, 2021
Alan Jo
Alan Jo
May 29, 2023
### Deep Learning Models |Title| |:-:| |[Seq2Seq](https://texonom.com/seq2seq-01a9854dffa6417c87d92c11a607250c)| |[GAN](https://texonom.com/gan-66482b5f518d47f6b337eba9a30ff792)| |[Capsule Network](https://texonom.com/capsule-network-a14fc7e569864154aae1ef44106e8991)| |[MANN](https://texonom.com/mann-84ed691391c8439ba8e6b297623c9c0e)|
f368faf8fa634699aeda503d01c193f0
End-to-end Deep Learning
Deep Learning Notion
Apr 3, 2023
Alan Jo
Alan Jo
Apr 3, 2023
Output can be obtained directly from the input It means a network where all parameters can be trained simultaneously against a single loss function Skips [Text Tokenizer](https://texonom.com/text-tokenizer-2bbc41eaa76c4674a4f4b9127fbe5da1) and [Text Encoding](https://texonom.com/text-encoding-ab7377cfc5c648059de4860510ad9134) steps, but uses a lot of memory > [What is end-to-end deep learning?](https://velog.io/@jeewoo1025/What-is-end-to-end-deep-learning)
16bd62f447a144f1b251acf25f1f8789
Capsule Network
Deep Learning Models
Aug 21, 2021
Alan Jo
Alan Jo
Oct 6, 2021
[CNN](https://texonom.com/cnn-002bf81a77bc40d1858740d26b61d97b)
## CapsNet An artificial neural network free of the problems CNNs run into in image recognition; it detects rotation and learns it as one of the activation vectors ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fb1ec5a35-c647-46dd-90c3-07dcc5e94b7c%2FUntitled.png?table=block&id=9eb6f960-e681-498c-8c63-3b6096168011&cache=v2) > [What is a CapsNet or Capsule Network?](https://medium.com/hackernoon/what-is-a-capsnet-or-capsule-network-2bfbe48769cc) > [Why Do Capsule Networks Work Better Than Convolutional Neural Networks?](https://medium.com/@ashukumar27/why-do-capsule-networks-work-better-than-convolutional-neural-networks-f4a105a53aff)
a14fc7e569864154aae1ef44106e8991
GAN
Deep Learning Models
Nov 18, 2019
Alan Jo
Alan Jo
Jun 1, 2023
[Unsupervised learning](https://texonom.com/unsupervised-learning-8cb6e253bfa845b5931d22963ea93019) [Generative Model](https://texonom.com/generative-model-6e5204d2982b4042847aa42e88eb8fb5) [Transfer Learning](https://texonom.com/transfer-learning-442feb66465944eebf144d4e9dd1dbf8)
## Generative Adversarial Network Learns how to generate samples; like a counterfeiter and the police, the two networks improve by competing with each other ### GAN Notion |Title| |:-:| |[Generator Network](https://texonom.com/generator-network-3d3fb237d8f149979c7ed172aca65529)| |[Discriminator Network](https://texonom.com/discriminator-network-a02287522a264d779574871285eed3b6)| |[GAN Minmax Game](https://texonom.com/gan-minmax-game-d3b08fa32fc34cf5920bfc0e80c34b90)| |[GAN Issues](https://texonom.com/gan-issues-be6312c3a8184b55ba443167164101ba)| ### GANs |Title| |:-:| |[3D GAN](https://texonom.com/3d-gan-672af0e2639041e484ed1a26b56f84cb)| |[DCGAN](https://texonom.com/dcgan-ae93b511fed5402c926e818243d8a966)| |[DragGan](https://texonom.com/draggan-bd335de6767243e7b7ec48ff88dea800)| > [Generative adversarial network](https://en.wikipedia.org/wiki/Generative_adversarial_network)
66482b5f518d47f6b337eba9a30ff792
MANN
Deep Learning Models
Apr 29, 2023
Alan Jo
Alan Jo
May 29, 2023
### Memory Augmented Neural Networks A model combining a base network such as an RNN or CNN with a memory structure ### MANN Notion |Title| |:-:| |[Differentiable Neural Computer](https://texonom.com/differentiable-neural-computer-2c182410c5c34222b41605b63c37c777)| |[Neural Turing Machine](https://texonom.com/neural-turing-machine-c03efee39e1942f197f4b3d6553e4ac1)|
84ed691391c8439ba8e6b297623c9c0e
Seq2Seq
Deep Learning Models
Mar 4, 2023
Alan Jo
Alan Jo
Jul 30, 2023
### Variable Length of inputs and outputs The encoder-decoder structure is mainly used when the input and output sentences differ in length (see the sketch after this entry) - Encoder takes the input sequence and converts it into a fixed-length vector representation - Decoder uses this vector to generate the output sequence ### Seq2Seq Notion |Title| |:-:| |[Attention Mechanism](https://texonom.com/attention-mechanism-762711860abb45f59904f1ac4e4af285)| |[Copy mechanism](https://texonom.com/copy-mechanism-b5dee9c80ca24bb993b5f152129b3577)| ### Seq2Seq Models |Title| |:-:| |[Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b)| |[Transformer Model](https://texonom.com/transformer-model-f3e8053cc5b447a2bc9c6b5d0874dafc)| |[Encoder Model](https://texonom.com/encoder-model-321d79943c8940fcaac0c9ccca0b5f6f)| > [14-01 Sequence-to-Sequence (seq2seq)](https://wikidocs.net/24996)
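A minimal PyTorch sketch of that fixed-length bottleneck, assuming GRU encoder/decoder and illustrative sizes; the encoder's final hidden state is the single vector handed to the decoder:

```python
# Encoder compresses 7 input steps into one hidden state; the decoder
# unrolls 5 output steps from it (different lengths on purpose).
import torch
import torch.nn as nn

enc = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
dec = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
proj = nn.Linear(64, 32)  # map decoder output back to its input space

src = torch.randn(1, 7, 32)
_, h = enc(src)                  # h: the fixed-length summary vector
x, outs = torch.zeros(1, 1, 32), []
for _ in range(5):
    y, h = dec(x, h)
    outs.append(y)
    x = proj(y)                  # feed the prediction back in
print(torch.cat(outs, dim=1).shape)  # torch.Size([1, 5, 64])
```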
01a9854dffa6417c87d92c11a607250c
Discriminator Network
GAN Notion
Nov 18, 2019
Alan Jo
Alan Jo
Jun 1, 2023
Tries to distinguish between real and fake images The discriminator $\phi$ aims at maximizing the objective - $D(x)$ to be close to 1 for real - $D(G(z))$ to be close to 0 for fake
a02287522a264d779574871285eed3b6
GAN Issues
GAN Notion
Jun 1, 2023
Alan Jo
Alan Jo
Jun 14, 2023
### 1. Non-convergence The minmax objective generates a **cycle**, so training repeats itself. Even with a small learning rate, it will not converge Example: $\min_x \max_y V(x, y) = xy$ ### 2. Mode-Collapse What if the generator keeps generating a **single realistic image**? The discriminator will always be fooled by that single sample ### Mini Batch Trick **Compute the similarity** of the image $x$ with other images in the same batch to avoid mode collapse. **This measures the diversity of the batch.** **Feed the similarity score** along with the image to the discriminator as an input feature (see the sketch below). This penalizes the generator and encourages it to generate less similar images Many more advanced techniques have also been proposed so far.
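A minimal sketch of the trick described above, with invented feature sizes (real minibatch discrimination uses a learned projection rather than raw L2 distances):

```python
# Append a per-sample batch-similarity score to the discriminator's
# input so a mode-collapsed (too-uniform) batch is easy to detect.
import torch

feats = torch.randn(16, 128)                  # batch of 16 feature vectors
d = torch.cdist(feats, feats)                 # pairwise distances
sim = torch.exp(-d).sum(dim=1, keepdim=True)  # large when batch is uniform
disc_in = torch.cat([feats, sim], dim=1)      # (16, 129)
```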
be6312c3a8184b55ba443167164101ba
GAN Minmax Game
GAN Notion
Jun 1, 2023
Alan Jo
Alan Jo
Jun 1, 2023
## Training GAN Aims for $D(G(z))$ to be close to 1, i.e. the discriminator is fooled ### Gradient descent on generator ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F19a41272-13ca-4ad9-b94a-c3fe8dbd031f%2FUntitled.png?table=block&id=17e51d57-7290-4e22-8703-8a64cf6e1b95&cache=v2) If $G$ is very bad compared to $D$, then we get an almost-zero gradient. Hence, the $-D$ term can be used ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F5c0d0cce-916a-44e9-a971-c7201830e7db%2FUntitled.png?table=block&id=5841e261-a075-413d-860c-0186cb106361&cache=v2) ### Gradient ascent on discriminator The generator $\theta$ and the discriminator $\phi$ have very different objectives, so training stably is hard
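For reference, the standard value function both of the updates above are slices of: $$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$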
d3b08fa32fc34cf5920bfc0e80c34b90
Generator Network
GAN Notion
Nov 18, 2019
Alan Jo
Alan Jo
Jun 1, 2023
Tries to fool the discriminator by generating realistic samples The generator $\theta$ aims at minimizing the objective
3d3fb237d8f149979c7ed172aca65529
3D GAN
GANs
Feb 24, 2022
Alan Jo
Alan Jo
Jun 1, 2023
> [3D Generative Adversarial Network](http://3dgan.csail.mit.edu/)
672af0e2639041e484ed1a26b56f84cb
DCGAN
GANs
Mar 5, 2023
Alan Jo
Alan Jo
Jun 1, 2023
[DCGAN-tensorflow](https://github.com/carpedm20/DCGAN-tensorflow)
ae93b511fed5402c926e818243d8a966
DragGan
GANs
May 29, 2023
Alan Jo
Alan Jo
Jun 1, 2023
[DragGAN](https://github.com/XingangPan/DragGAN)
### Unofficial Huggingface implementation > [DragGAN - a Hugging Face Space by fffiloni](https://huggingface.co/spaces/fffiloni/DragGAN) > [Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold](https://vcai.mpi-inf.mpg.de/projects/DragGAN/)
bd335de6767243e7b7ec48ff88dea800
Differentiable Neural Computer
MANN Notion
May 29, 2023
Alan Jo
Alan Jo
May 29, 2023
2c182410c5c34222b41605b63c37c777
Neural Turing Machine
MANN Notion
May 29, 2023
Alan Jo
Alan Jo
May 29, 2023
c03efee39e1942f197f4b3d6553e4ac1
Decoder Model
Seq2Seq Models
Mar 6, 2023
Alan Jo
Alan Jo
Jul 17, 2023
[Encoder Model](https://texonom.com/encoder-model-321d79943c8940fcaac0c9ccca0b5f6f) [Text Generation](https://texonom.com/text-generation-cb0e475216c043fdbb4f309d324c9d19)
## Autoregressive Model, Causal Language Model A model that generates sequence data by using the previous time step's output as the current time step's input A language model that predicts the next word with a unidirectional model [Decoder Input IDs](https://texonom.com/decoder-input-ids-2b1e1d3c7c274b0880191a54290bed27) ### Autoregressive Means a model whose current value is a linear combination of its previous values Since only the Transformer's decoder is used, sequence data is generated by feeding the previous step's output back in as the current step's input This approach does not process the whole sentence at once; it works word by word, token by token, predicting the next word from the preceding ones In other words, because the whole sentence is never processed at once, the model cannot see the whole sentence (a decoding-loop sketch follows) > [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/) > [Decoder models - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)
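A minimal sketch of that loop with the Hugging Face transformers API, using GPT-2 as a stand-in causal LM and hand-written greedy decoding (instead of `model.generate`) to make the feed-back step explicit:

```python
# Each iteration appends the predicted token and feeds the grown
# sequence back in: the defining autoregressive step.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The sky is", return_tensors="pt").input_ids
for _ in range(10):
    logits = model(ids).logits                        # (1, seq_len, vocab)
    next_id = logits[:, -1].argmax(-1, keepdim=True)  # greedy pick
    ids = torch.cat([ids, next_id], dim=-1)           # feed back in
print(tok.decode(ids[0]))
```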
36e78a40265c473d90197089aebfa83b
Encoder Model
Seq2Seq Models
Mar 6, 2023
Alan Jo
Alan Jo
Jul 30, 2023
[Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b)
## Auto-encoding Model, Masked Language Model Transforms text or images into a condensed numerical representation called an embedding ### These models are often characterized as having bi-directional attention A model that vectorizes the input sentence and restores it again It converts the input sentence into a form the model can process easily while preserving its meaning The pretraining of these models usually revolves around somehow corrupting a given sentence and tasking the model with finding or reconstructing the initial sentence (a masking example follows) ### auto-encoding Encodes the input sequence and maps all of it into a [Latent Space](https://texonom.com/latent-space-d67b6bdef18b4058bfbc3d25f87ec087) The input sequence is processed token by token, but since self-attention computes the similarity between each position and every other position in the input sequence, context can be taken into account Positions with high similarity are given more weight in that position's embedding vector > [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/) > [Encoder models - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter1/5?fw=pt)
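A minimal illustration of the corrupt-and-reconstruct objective, assuming BERT through the transformers `fill-mask` pipeline:

```python
# Masked language modeling: corrupt a sentence with [MASK] and ask
# the bidirectional encoder to reconstruct the missing token.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The capital of France is [MASK].")[:3]:
    print(pred["token_str"], round(pred["score"], 3))
```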
321d79943c8940fcaac0c9ccca0b5f6f
Transformer Model
Seq2Seq Models
Aug 17, 2020
Alan Jo
Alan Jo
May 29, 2023
[Attention Mechanism](https://texonom.com/attention-mechanism-762711860abb45f59904f1ac4e4af285) [Attention is all you need](https://texonom.com/attention-is-all-you-need-f52e664318eb47c0aa6cbd27b9a4c491)
## Self Attention is the core feature The Transformer gains a wider perspective and can attend to multiple interaction levels within the input sentence. Rather than using attention as a correction for the encoder, the encoder and decoder are built from attention alone Because all tokens are received and computed simultaneously, parallel computation is possible The structure stacks 6 encoder blocks and 6 decoder blocks An encoder block splits into 2 sub-layers (Multi-Head (self) Attention, Feed Forward) and a decoder block splits into 3 sub-layers (Masked Multi-Head (self) Attention, Multi-Head (Encoder-Decoder) Attention, Feed Forward) Unlike earlier attention, every vector here is itself a weight vector ### Transformer Model Notion |Title| |:-:| |[Attention is all you need](https://texonom.com/attention-is-all-you-need-f52e664318eb47c0aa6cbd27b9a4c491)| |[NLP Transformer Encoder](https://texonom.com/nlp-transformer-encoder-d5bc8cebe43d4fc7896d8e0290683b18)| |[NLP Transformer Decoder](https://texonom.com/nlp-transformer-decoder-1ee79f4754d24aa8b64312399782cbec)| |[Transformer Attention](https://texonom.com/transformer-attention-84d340d69722409a96ec1d806970608a)| |[Transformer Model Tool ](https://texonom.com/transformer-model-tool-01c749c1e9254d8c85ec5ef2feb0566b)| ### Transformer Models |Title| |:-:| |[RETRO Transformer](https://texonom.com/retro-transformer-e3efdb06419948ebb1be444f4e124867)| |[Torchscale](https://texonom.com/torchscale-c4b82c1245d74c7092617e70b9ddab51)| |[BART](https://texonom.com/bart-48bd2258eee74af3b2d929b03bb9553b)| |[RMT](https://texonom.com/rmt-f076841449f2419a82b3ce09281b9bb9)| |[Transformer-XL](https://texonom.com/transformer-xl-eb2128e01dc744539b6825f3880da761)| ### Architecture > [Transformer's Encoder-Decoder: Let's Understand The Model Architecture - KiKaBeN](https://kikaben.com/transformers-encoder-decoder/) ### Pseudo Source code > [Transformers for software engineers](https://blog.nelhage.com/post/transformers-for-software-engineers/) ### Korean > [Jump to Python](https://wikidocs.net/31379) > [[Deep Learning] Summary of language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋”ฅ๋Ÿฌ๋‹-์šฉ์–ด์ •๋ฆฌ)
f3e8053cc5b447a2bc9c6b5d0874dafc
Decoder Input IDs
Decoder Model
null
null
null
null
null
## token indices
2b1e1d3c7c274b0880191a54290bed27
Attention is all you need
Transformer Model Notion
Aug 23, 2020
Alan Jo
Alan Jo
May 30, 2023
[Transformer Model](https://texonom.com/transformer-model-f3e8053cc5b447a2bc9c6b5d0874dafc)
The 2017 paper that first introduced the Transformer architecture The paper's goal was to build a non-recurrent sequence-to-sequence encoder-decoder model Replaced the [RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16) Encoder Decoder Model ## **Background** ### **1. Sequential computation** In solving sequence-to-sequence problems, encoder-decoder RNN models achieved good performance. ### **2. Long term dependency** RNNs always carry the long-term dependency problem, while CNNs are O(1) within a kernel but do not share information between kernels. # Model Architecture A stack of 6 - one encoder is a Self-Attention Layer plus a Feed Forward Neural Network (2 sub-layers) - Encoder - Multi-Head Attention - Positional Encoding - Relative Positioning - The Residuals - Decoder ### Author > [ashVaswani](https://twitter.com/ashVaswani) ### pdf > [Attention Is All You Need (PDF)](https://arxiv.org/pdf/1706.03762.pdf) > [Attention Is All You Need (transformer) paper summary](https://medium.com/@omicro03/attention-is-all-you-need-transformer-paper-%EC%A0%95%EB%A6%AC-83066192d9ab) > [Jump to Python](https://wikidocs.net/31379)
f52e664318eb47c0aa6cbd27b9a4c491
NLP Transformer Decoder
Transformer Model Notion
Mar 7, 2023
Alan Jo
Alan Jo
Mar 7, 2023
### 3 Sub Layer [Masked Self-Attention](https://texonom.com/masked-self-attention-cb0e29589c93423780cc0ca60260f3e4) + [Multi-head Attention](https://texonom.com/multi-head-attention-d9dfc39b27494123ae4c81f3b98e50b5) + [Position-wise FFNN](https://texonom.com/position-wise-ffnn-6fe30c96aa9245d898c73ec34625377d) ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fa99c4277-03ed-4ec6-95ff-c4e91a06f844%2FUntitled.png?table=block&id=00eb4408-f746-4d0e-8313-6f5a8e6b94e1&cache=v2)
1ee79f4754d24aa8b64312399782cbec
NLP Transformer Encoder
Transformer Model Notion
Mar 7, 2023
Alan Jo
Alan Jo
Mar 7, 2023
### 2 Sub Layer [Multi-head Attention](https://texonom.com/multi-head-attention-d9dfc39b27494123ae4c81f3b98e50b5) + [Position-wise FFNN](https://texonom.com/position-wise-ffnn-6fe30c96aa9245d898c73ec34625377d) ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fa99c4277-03ed-4ec6-95ff-c4e91a06f844%2FUntitled.png?table=block&id=d48db9af-a165-4db5-842f-80d8046d66c6&cache=v2)
d5bc8cebe43d4fc7896d8e0290683b18
Transformer Attention
Transformer Model Notion
Apr 3, 2023
Alan Jo
Alan Jo
Apr 3, 2023
### Transformer Attentions |Title| |:-:| |[Encoder-Decoder Attention](https://texonom.com/encoder-decoder-attention-4b3bb5b4aeac4681bf6bdca66ea79e04)| |[Encoder Self-Attention](https://texonom.com/encoder-self-attention-c1e03ab0c57f44c395f2056e201f4326)| |[Masked Self-Attention](https://texonom.com/masked-self-attention-cb0e29589c93423780cc0ca60260f3e4)|
84d340d69722409a96ec1d806970608a
Transformer Model Tool
Transformer Model Notion
May 22, 2023
Alan Jo
Alan Jo
May 22, 2023
[Transformers.js](https://texonom.com/transformersjs-3e7f8bb1d88940298936f13ee1ce7ed7)
### Transformer Model Tools |Title| |:-:| |[trl](https://texonom.com/3ddce975d3aa4f6dbca6e3d4b3eb6e6e)|
01c749c1e9254d8c85ec5ef2feb0566b
Encoder-Decoder Attention
Transformer Attentions
Mar 5, 2023
Alan Jo
Alan Jo
Apr 3, 2023
Plays the role of tying together information from the input and the output
4b3bb5b4aeac4681bf6bdca66ea79e04
Encoder Self-Attention
Transformer Attentions
Mar 5, 2023
Alan Jo
Alan Jo
Apr 3, 2023
c1e03ab0c57f44c395f2056e201f4326
Masked Self-Attention
Transformer Attentions
Mar 7, 2023
Alan Jo
Alan Jo
Apr 3, 2023
๋””์ฝ”๋” ๋ธ”๋Ÿญ์—์„œ ์‚ฌ์šฉ๋˜๋Š” ํŠน์ˆ˜ํ•œ Self-Attention ๋””์ฝ”๋”๋Š” [Autoregressive](https://texonom.com/autoregressive-93cf5710b4b54730a7e7efcc6e0fc642) ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ดํ›„๋‹จ์–ด ๋ณด์ง€์•Š๊ณ  ์˜ˆ์ธกํ•ด์•ผ ๊ทธ๋ž˜์„œ ๋’ค์— ๋ณด์ง€ ์•Š๋„๋ก Maskingํ•œ๋‹ค > [[๋”ฅ๋Ÿฌ๋‹] ์–ธ์–ด๋ชจ๋ธ, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT ๊ฐœ๋… ์ •๋ฆฌ](https://velog.io/@rsj9987/๋”ฅ๋Ÿฌ๋‹-์šฉ์–ด์ •๋ฆฌ)
cb0e29589c93423780cc0ca60260f3e4
[trl](https://github.com/lvwerra/trl)
Transformer Model Tools
May 22, 2023
Alan Jo
Alan Jo
May 22, 2023
3ddce975d3aa4f6dbca6e3d4b3eb6e6e
BART
Transformer Models
Mar 25, 2023
Alan Jo
Alan Jo
May 31, 2023
[AI Summarization](https://texonom.com/ai-summarization-a314a6fb3162447086b8d2526ae8ef16) [KoBART-summarization](https://github.com/seujung/KoBART-summarization) [BERT](https://texonom.com/bert-e282abe8a34543988a0e71f6c8701ad2)
**Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension** 2019 > [BART: Denoising Sequence-to-Sequence Pre-training for Natural...](https://arxiv.org/abs/1910.13461) > [BART Text Summarization vs. GPT-3 vs. BERT: An In-Depth Comparison | Width.ai](https://www.width.ai/post/bart-text-summarization) > [BART paper review](https://dladustn95.github.io/nlp/BART_paper_review/)
48bd2258eee74af3b2d929b03bb9553b
RETRO Transformer
Transformer Models
Jul 5, 2022
Alan Jo
Alan Jo
May 22, 2023
[Vector Database](https://texonom.com/vector-database-5dfdb6e2bc294fed8ae80eaea2ee5c26) [Deepmind](https://texonom.com/deepmind-5eb171c77b344d4786a9a5b23ae70eca)
### Retrieval-Enhanced ### Fast A design that keeps the retrieval database outside the model ### Implementations [RETRO-pytorch](https://github.com/lucidrains/RETRO-pytorch) > [Improving language models by retrieving from trillions of tokens](https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens) > [The Illustrated Retrieval Transformer](https://jalammar.github.io/illustrated-retrieval-transformer/) > [RETRO Is Blazingly Fast](http://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html) ### Korean > [RETRO: Improving language models by retrieving from trillions of tokens](https://velog.io/@nawnoes/RETRO-Improving-language-models-by-retrieving-from-trillions-of-tokens)
e3efdb06419948ebb1be444f4e124867
RMT
Transformer Models
May 3, 2023
Alan Jo
Alan Jo
May 3, 2023
[LM-RMT](https://github.com/booydar/LM-RMT) [RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16)
## **Recurrent Memory Transformer** GPT-4's maximum input for inference is 32,000 tokens; this model can handle 2 million > [Recurrent Memory Transformer](https://arxiv.org/abs/2207.06881) > ["It could change everything"... AI based on 'RMT', with 63x the memory of GPT-4, appears](https://contents.premium.naver.com/themiilk/business/contents/230426095632265ym)
f076841449f2419a82b3ce09281b9bb9
Torchscale
Transformer Models
Dec 2, 2022
Alan Jo
Alan Jo
May 22, 2023
[Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686) [torchscale](https://github.com/microsoft/torchscale)
c4b82c1245d74c7092617e70b9ddab51
**Transformer-XL**
Transformer Models
May 3, 2023
Alan Jo
Alan Jo
May 3, 2023
[transformer-xl](https://github.com/kimiyoung/transformer-xl)
**Attentive Language Models Beyond a Fixed-Length Context** > [Recurrent Memory Transformer](https://arxiv.org/abs/2207.06881)
eb2128e01dc744539b6825f3880da761
Attention Mechanism
Seq2Seq Notion
Mar 5, 2023
Alan Jo
Alan Jo
Jul 28, 2023
[Encoder Model](https://texonom.com/encoder-model-321d79943c8940fcaac0c9ccca0b5f6f) [Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b) [RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16)
**Reads the whole sentence at once and can compute each word's representation in parallel, based on the sentence** Imagine yourself in a library. You have a specific question (**query**). Books on the shelves have titles on their spines (**keys**) that suggest their content. You compare your question to these titles to decide how relevant each book is, and how much **attention** to give each book. Then, you get the information (**value**) from the relevant books to answer your question. The query vector points to the current input word (aka context). **The *keys* represent the words in the input sentence. The key vectors help the model understand how each word relates to the context word.** **Attention is how much weight the query word should give each word in the sentence. This is computed via a dot product between the query vector and all the key vectors**. These dot products then go through a softmax which makes the attention scores (across all keys) sum to 1. **Each word is also represented by a *value* which contains the information of that word. As a result, each context word is now represented by an attention-based weightage of all the words in the sentence** (the standard formula follows below) ### NLP Attention Notion |Title| |:-:| |[Self-Attention](https://texonom.com/self-attention-d06bde44563f455b951d17955d820f77)| |[Multi-head Attention](https://texonom.com/multi-head-attention-d9dfc39b27494123ae4c81f3b98e50b5)| |[Cross-Attention](https://texonom.com/cross-attention-6365d422d2bf4be78dac038df3a19aae)| ![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2F36626d64-ac93-430d-aa3b-11ead7cb27e3%2FUntitled.png?table=block&id=6ce231d7-c03f-4556-b56f-0d2d3032f7a3&cache=v2) ### NLP Attention Usages |Title| |:-:| |[Flash Attention](https://texonom.com/flash-attention-ca8093deb6c648059aff1580b0ddbf68)| |[Dilated Attention](https://texonom.com/dilated-attention-d9000e3460f547bca3f15ac9b8d7e36c)| |[Multi Query Attention](https://texonom.com/multi-query-attention-5641aba38a8b47caa3a9f364c13789f1)| |[Group Query Attention](https://texonom.com/group-query-attentiion-d72d2da59fba4e9d937a4ec856dac90f)| |[PagedAttention](https://texonom.com/pagedattention-abf197357343437681fb878ae5700926)| > [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/) > [[Deep Learning] Summary of language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋”ฅ๋Ÿฌ๋‹-์šฉ์–ด์ •๋ฆฌ) > [16-01 Transformer](https://wikidocs.net/31379)
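The library analogy condenses to the Transformer's scaled dot-product attention: $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^\top}{\sqrt{d_k}}\right) V$$ where $d_k$ is the key dimension; scaling by $\sqrt{d_k}$ keeps the dot products from saturating the softmax.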
762711860abb45f59904f1ac4e4af285
***Copy mechanism***
Seq2Seq Notion
Mar 8, 2023
Alan Jo
Alan Jo
May 29, 2023
A method devised to solve the out-of-vocabulary problem, where words needed to generate a sentence during decoding are missing from the output vocabulary, together with the problem of proper nouns getting low output probability: the needed words are found in the input sequence and copied straight to the output > [Untitled](https://koreascience.kr/article/CFKO201612470014629.pdf)
b5dee9c80ca24bb993b5f152129b3577
Cross-Attention
NLP Attention Notion
Apr 9, 2023
Alan Jo
Alan Jo
Jul 28, 2023
[MLLM](https://texonom.com/mllm-98172e9446c04bcc9cf52b2bc5d0bd17)
## Encoder-Decoder Attention in [Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b) Query: decoder vector / **Key** = Value: encoder vectors Uses the encoder's output and the decoder's current state for the decoder to predict the next word A very important part of MLLMs; it models the interaction between image or video data and text data $$h_i = \mathrm{softmax}\!\left(\frac{x_i K_v^\top}{\sqrt{d_k}}\right) V_v$$
6365d422d2bf4be78dac038df3a19aae
Multi-head Attention
NLP Attention Notion
Mar 5, 2023
Alan Jo
Alan Jo
Jul 28, 2023
### **Multiple heads let the model consider multiple words simultaneously** Because we use the softmax function in attention, it amplifies the highest value while squashing the lower ones. As a result, each head tends to focus on a single element. Multiple heads let us attend to several words. **It also provides redundancy**: if any single head fails, we have the other attention heads to rely on. Works by splitting the input vector into several heads, performing attention on each, and combining the results (a sketch follows) > [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/)
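A minimal sketch of the split-attend-combine bookkeeping, assuming PyTorch and illustrative sizes (8 heads over a 512-dim model):

```python
# Split d_model into heads, attend per head, recombine.
import torch

B, T, d_model, H = 2, 10, 512, 8
d_k = d_model // H                       # 64 dims per head
q = k = v = torch.randn(B, T, d_model)

def split(x):                            # (B, T, d_model) -> (B, H, T, d_k)
    return x.view(B, T, H, d_k).transpose(1, 2)

qh, kh, vh = split(q), split(k), split(v)
att = (qh @ kh.transpose(-2, -1) / d_k**0.5).softmax(-1)
out = (att @ vh).transpose(1, 2).reshape(B, T, d_model)  # concat heads
print(out.shape)                         # torch.Size([2, 10, 512])
```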
d9dfc39b27494123ae4c81f3b98e50b5
Self-Attention
NLP Attention Notion
Mar 5, 2023
Alan Jo
Alan Jo
Jul 28, 2023
### Models interactions between words within text data Self-attention enables the encoder to weigh the importance of each word and capture both short and long-range dependencies. Self-attention enables the decoder to focus on different parts of the output generated so far. - Encoder self-attention: Query = **Key** = Value - Decoder masked self-attention: Query = **Key** = Value Self-attention means applying attention to oneself: to capture the relationships among the elements inside a sentence, the attention mechanism is applied to the sentence itself - Query: weight vector for the word being analyzed - Key: weight vector for comparing how related each word is to the query word - Value: weight vector that carries each word's meaning ### Self-attention procedure 1. Dot-product a given word's query (q) vector with the key (k) vectors of all words; the result is the attention score. 2. As a correction, the Transformer divides this weight by $\sqrt{d_k}$, the square root of the q, k, v vector dimension $d_k$. 3. Softmax computes the proportion of the relationship between the query word and each other word in the sentence. 4. Multiply by each word's value vector, then sum everything. > [[Deep Learning] Summary of language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋”ฅ๋Ÿฌ๋‹-์šฉ์–ด์ •๋ฆฌ)
d06bde44563f455b951d17955d820f77
Dilated Attention
NLP Attention Usages
Jul 13, 2023
Alan Jo
Alan Jo
Jul 28, 2023
![](https://www.notion.so/image/https%3A%2F%2Fs3-us-west-2.amazonaws.com%2Fsecure.notion-static.com%2Fe84a0f0b-ccda-469a-978d-c0ef60558a4b%2FUntitled.png?table=block&id=6014ac5c-6ab5-4d96-affe-7f504b2c4355&cache=v2) > [Microsoftโ€™s LongNet Scales Transformer to One Billion Tokens](https://medium.com/syncedreview/microsofts-longnet-scales-transformer-to-one-billion-tokens-af02ff657d87)
d9000e3460f547bca3f15ac9b8d7e36c
Flash Attention
NLP Attention Usages
Jun 29, 2023
Alan Jo
Alan Jo
Jul 28, 2023
[flash-attention](https://github.com/HazyResearch/flash-attention)
Accelerates attention using **tiling** and **recomputation**; the softmax is computed block by block; implemented as a CUDA kernel > [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | Wonbeom Jang](https://www.wonbeomjang.kr/blog/2023/fastattention/)
ca8093deb6c648059aff1580b0ddbf68
Group Query Attention
NLP Attention Usages
Aug 2, 2023
Alan Jo
Alan Jo
Aug 2, 2023
d72d2da59fba4e9d937a4ec856dac90f
Multi Query Attention
NLP Attention Usages
Aug 2, 2023
Alan Jo
Alan Jo
Aug 2, 2023
5641aba38a8b47caa3a9f364c13789f1
**PagedAttention**
NLP Attention Usages
Aug 3, 2023
Alan Jo
Alan Jo
Aug 3, 2023
Efficient management of attention key and value memory
abf197357343437681fb878ae5700926
Deep Learning Compiler
Deep Learning Usages
Mar 22, 2022
Alan Jo
Alan Jo
Mar 22, 2022
### Deep Learning Compiler Tools |Title| |:-:| |[Nebullvm](https://texonom.com/nebullvm-d967865085f84f5ab2b1a4a5ea661dfd)|
7d79af9683764b6b983793c1856578c6
Deep Learning Tool
Deep Learning Usages
Oct 6, 2021
Alan Jo
Alan Jo
May 29, 2023
[tuning_playbook](https://github.com/google-research/tuning_playbook)
### Deep Learning Hubs |Title| |:-:| |[HuggingFace](https://texonom.com/huggingface-eb25c513b432477e9da51ca19bb06833)| |[Pytorch Hub](https://texonom.com/pytorch-hub-d46f5b09e91241578a0c63b4847396fb)| ### Deep Learning Tools |Title| |:-:| |[fastbook](https://texonom.com/fastbook-d022baa785a14ce5af5cf1ef59995cb1)| |[Netron](https://texonom.com/netron-2e410ef29aaa4a7d9e2a2777ce4dd3ee)|
a14ea6f4574342ef974443634e27c6ce
Learn Deep Learning
Deep Learning Usages
Jun 6, 2023
Alan Jo
Alan Jo
Jun 6, 2023
> [0020 DL Terms & Concepts - Deepest Documentation](https://deepestdocs.readthedocs.io/en/latest/002_deep_learning_part_1/0020/)
4382083fb54d4705984e7f45a9af2d86
Sentiment Neuron
Deep Learning Usages
May 20, 2023
Alan Jo
Alan Jo
May 20, 2023
### Proof of compressed data in a deep network, before GPT-1 ### Why Important > [Ilya Sutskever and the missing link to AGI](https://www.youtube.com/watch?v=LQviQS24uQY&t=840) > [Unsupervised sentiment neuron](https://openai.com/research/unsupervised-sentiment-neuron) > [Sentiment Neuron](https://tensorflow.blog/2017/04/07/sentiment-neuron/)
2e44e9754b894534af6c121b6d6074d6
Nebullvm
Deep Learning Compiler Tools
Mar 22, 2022
Alan Jo
Alan Jo
Mar 22, 2022
[nebuly](https://github.com/nebuly-ai/nebullvm)
d967865085f84f5ab2b1a4a5ea661dfd
HuggingFace
Deep Learning Hubs
Jun 27, 2022
Alan Jo
Alan Jo
Jul 17, 2023
[AI Industry](https://texonom.com/ai-industry-d8709bd0498145e3a66af6da3f963fa7) [Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686)
### [Pytorch Hub](https://texonom.com/pytorch-hub-d46f5b09e91241578a0c63b4847396fb) + [Lightning AI](https://texonom.com/lightening-ai-af651491f5474031bc51e6d1a99d5f22) ### HuggingFace Usages |Title| |:-:| |[HuggingFace Hub](https://texonom.com/huggingface-hub-9b16b4841f2d4bdd9827120c47677b29)| |[Huggingface Model](https://texonom.com/huggingface-model-f985be46b10e462f91e65576b81f452f)| |[Huggingface Dataset](https://texonom.com/huggingface-dataset-43582d2257474a83874ecec2d4c6ab44)| |[HuggingFace Space](https://texonom.com/huggingface-space-e37f59c218ca48cebe509d3ab2381b34)| ### LLM Leaderboard > [Open LLM Leaderboard - a Hugging Face Space by HuggingFaceH4](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) > [[D] HuggingFace ecosystem vs. Pytorch Lightning for big research NLP project with many collaborators.](https://www.reddit.com/r/MachineLearning/comments/si08qt/d_huggingface_ecosystem_vs_pytorch_lightning_for/)
eb25c513b432477e9da51ca19bb06833
Pytorch Hub
Deep Learning Hubs
May 29, 2023
Alan Jo
Alan Jo
May 29, 2023
[HuggingFace](https://texonom.com/huggingface-eb25c513b432477e9da51ca19bb06833) [Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686)
### Pytorch Hub Usages |Title| |:-:|
d46f5b09e91241578a0c63b4847396fb
Huggingface Dataset
HuggingFace Usages
May 23, 2023
Alan Jo
Alan Jo
Jul 19, 2023
### Huggingface Dataset Usages |Title| |:-:| |[HuggingFace Datasets Jax](https://texonom.com/huggingface-datasets-jax-a92d62f342de4ddc83820623059ca02e)| > [Create a dataset](https://huggingface.co/docs/datasets/create_dataset)
43582d2257474a83874ecec2d4c6ab44
HuggingFace Hub
HuggingFace Usages
Jun 19, 2023
Alan Jo
Alan Jo
Jul 17, 2023
[huggingface_hub](https://github.com/huggingface/huggingface_hub) > [Quickstart](https://huggingface.co/docs/huggingface_hub/quick-start)
9b16b4841f2d4bdd9827120c47677b29
Huggingface Model
HuggingFace Usages
May 23, 2023
Alan Jo
Alan Jo
Jun 29, 2023
### Huggingface Model Usages |Title| |:-:| |[Huggingface Provider](https://texonom.com/huggingface-provider-8bd64d5951774d6f9ed623abbe471b4c)| |[Huggingface H4](https://texonom.com/huggingface-h4-9b33bfe491704142929a794edd95a7df)| |[Huggingface Model Card](https://texonom.com/huggingface-model-card-39d3f0d8805c4cb0939b623b8bac64ea)| > [Models](https://huggingface.co/docs/hub/models)
f985be46b10e462f91e65576b81f452f
HuggingFace Space
HuggingFace Usages
Mar 26, 2023
Alan Jo
Alan Jo
Jul 13, 2023
[Gradio](https://texonom.com/gradio-7e1647e9bc174161b4d7e8f44dd53707) [Streamlit](https://texonom.com/streamlit-9e295c64d27e4999878a022b1c538964)
### HuggingFace Space SDK |Title| |:-:| |[HuggingFace Docker Space](https://texonom.com/huggingface-docker-space-0f250609ad1f493bb91146813de8d8a6)|
e37f59c218ca48cebe509d3ab2381b34
HuggingFace Datasets Jax
Huggingface Dataset Usages
May 30, 2023
Alan Jo
Alan Jo
Jul 17, 2023
> [Use with JAX](https://huggingface.co/docs/datasets/use_with_jax)
a92d62f342de4ddc83820623059ca02e
Huggingface H4
Huggingface Model Usages
Jul 9, 2023
Alan Jo
Alan Jo
Jul 9, 2023
### helpful, honest, harmless, and huggy [StarChat](https://texonom.com/starchat-ea316ec509564a51b56ad92b22831220)
9b33bfe491704142929a794edd95a7df
Huggingface Model Card
Huggingface Model Usages
Jul 19, 2023
Alan Jo
Alan Jo
Jul 30, 2023
### Disable API Set `inference: false` in the model card's YAML metadata (example below) - tags - pipeline tags - etc ### Widgets > [Widgets](https://huggingface.co/docs/hub/models-widgets) > [Disable Hosted inference API](https://discuss.huggingface.co/t/disable-hosted-inference-api/10379)
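A minimal sketch of the model card front matter this refers to; `inference: false` is the documented switch, while the other field values here are illustrative:

```yaml
# README.md front matter of a model repo (illustrative values)
---
inference: false        # disables the hosted inference API widget
pipeline_tag: text-generation
tags:
  - pytorch
  - causal-lm
---
```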
39d3f0d8805c4cb0939b623b8bac64ea
Huggingface Provider
Huggingface Model Usages
Jun 29, 2023
Alan Jo
Alan Jo
Aug 5, 2023
### Companies > [amazon (Amazon Web Services)](https://huggingface.co/amazon) > [stabilityai (Stability AI)](https://huggingface.co/stabilityai) > [EleutherAI (EleutherAI)](https://huggingface.co/EleutherAI) > [allenai (Allen Institute for AI)](https://huggingface.co/allenai) ### Model Users > [ehartford (Eric Hartford)](https://huggingface.co/ehartford) > [psmathur (Pankaj Mathur)](https://huggingface.co/psmathur) > [TheBloke (Tom Jobbins)](https://huggingface.co/TheBloke) > [jncraton (Jon)](https://huggingface.co/jncraton) > [bhenrym14 (Brandon)](https://huggingface.co/bhenrym14) ### Organization > [decapoda-research (Decapoda Research)](https://huggingface.co/decapoda-research) > [openchat (OpenChat)](https://huggingface.co/openchat) > [MBZUAI (Mohamed Bin Zayed University of Artificial Intelligence)](https://huggingface.co/MBZUAI) > [openchat/openchat ยท Hugging Face](https://huggingface.co/openchat/openchat) > [OpenAssistant (OpenAssistant)](https://huggingface.co/OpenAssistant)
8bd64d5951774d6f9ed623abbe471b4c