title|parent|created|editor|creator|edited|refs|text|id|
---|---|---|---|---|---|---|---|---|
PPO
|
Reinforcement Learnings
|
Jul 15, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
### Proximal Policy Optimization
balance between ease of implementation, sample complexity, and tuning
A model-free reinforcement learning algorithm developed by OpenAI in 2017
Searches the policy space directly instead of assigning values to state-action pairs
Distinguishes the old policy that produced the data from the new policy of the current network being trained
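As a rough illustration of that old/new-policy split, here is a minimal sketch of PPO's clipped surrogate objective in NumPy; the array names (`logp_old`, `logp_new`, `advantages`) and the clip range `eps` are assumptions for the example, not code from the paper.
```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective (to be maximized).

    logp_new:   log-probs of taken actions under the policy being trained
    logp_old:   log-probs under the frozen policy that generated the data
    advantages: advantage estimates for those actions
    """
    ratio = np.exp(logp_new - logp_old)                 # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantages
    # Pessimistic (minimum) bound discourages overly large policy updates
    return np.mean(np.minimum(unclipped, clipped))
```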
[PPO2](https://texonom.com/ppo2-378944bb3b8848a185c20fc529c390d2)
> [Proximal Policy Optimization Algorithms](https://arxiv.org/abs/1707.06347)
> [Proximal Policy Optimization](https://openai.com/research/openai-baselines-ppo)
|
87ce8ebe81f84ac1a7617b1f6def9e26
|
|
Q-Learning
|
Reinforcement Learnings
|
Nov 5, 2019
|
Alan Jo
|
Seong-lae Cho
|
Aug 31, 2023
|
[SARSA](https://texonom.com/sarsa-e5d847a4fb6e41cdad5b9ffb6e974a10)
|
### Approximate Q-Learning
Off-policy temporal-difference control
Learns independently of the policy it currently uses to act
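A minimal tabular sketch of that off-policy update (NumPy; the step variables `s`, `a`, `r`, `s_next` are hypothetical): the target bootstraps from the greedy `max` over next actions, regardless of which action the behavior policy actually takes next.
```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    # Off-policy: the target uses the best next action,
    # not the action the agent will actually take.
    td_target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```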
### Reinforce
> [Reinforcement learning basics (Q-learning)](https://bluediary8.tistory.com/18)
|
6fb81e4a53ab4e3097784cde99c8c038
|
RLHF
|
Reinforcement Learnings
|
Apr 30, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
[AI Alignment](https://texonom.com/ai-alignment-f676f1a29ffd45e19b3d170afa4f2244) [Active Learning](https://texonom.com/active-learning-85d42ac892e84e5ba4fd2727f1791f65)
|
## Reinforcement learning from human feedback
### Limitation
A limitation: it still cannot fix the LM's fundamental problems of size and hallucination
Scaling issues; too complex
> [What is RLHF?](https://velog.io/@nellcome/RLHF๋)
> [Reinforcement learning from human feedback](https://en.wikipedia.org/wiki/Reinforcement_learning_from_human_feedback)
|
4b184f9c9e8b4c7a8861fb6374e91aa6
|
RRHF
|
Reinforcement Learnings
|
Jul 15, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
[RLHF](https://texonom.com/rlhf-4b184f9c9e8b4c7a8861fb6374e91aa6)
|
### **R**ank **R**esponse to align **H**uman **F**eedback
Efficiently aligns language model output probabilities with human preferences; as robust as fine-tuning, and it only needs 1 to 2 models during tuning
|
073c6bdf48444a1b9acad9e65057f3d0
|
SARSA
|
Reinforcement Learnings
|
Jul 18, 2023
|
Alan Jo
|
Alan Jo
|
Aug 31, 2023
|
## state-action-reward-state-action
Repeats the cycle of gathering samples with a greedy policy over the current **Q-function**, then updating the Q-function for the visited states with those samples
Roughly the starting point of reinforcement learning, coming after [Policy Iteration](https://texonom.com/policy-iteration-55cbe9219f8a48fbb850f64b677e847a)
In GPI the policy is evaluated according to the Bellman equation; the Temporal-Difference method instead adopts the approach of value iteration
**If the agent judges from the Q-function of the current state, it does not need a model of the environment**
In temporal-difference control, actions are selected through a **greedy policy over the Q-function**
Since a greedy policy is likely to lead the agent into wrong learning early on, an **epsilon-greedy policy** is used
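To make the on-policy flavor concrete, a minimal tabular sketch (NumPy; variable names are assumptions): unlike Q-learning's `max`, the update bootstraps from the next action `a_next` that the epsilon-greedy policy actually selected.
```python
import numpy as np

def epsilon_greedy(Q, s, n_actions, eps=0.1):
    if np.random.rand() < eps:
        return np.random.randint(n_actions)   # explore
    return int(np.argmax(Q[s]))               # exploit

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    # On-policy: bootstrap from the action actually chosen at s_next
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q
```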
### Limitation
A tendency to get **stuck in a particular state**
Learning from the actions it actually took is **On-Policy temporal-difference control**
Hence [Q-Learning](https://texonom.com/q-learning-6fb81e4a53ab4e3097784cde99c8c038)
> [(7) SARSA and Q-Learning](https://jang-inspiration.com/sarsa-qlearning#132c2000aab74666a2273d8b2d71cdac)
|
e5d847a4fb6e41cdad5b9ffb6e974a10
|
|
TRPO
|
Reinforcement Learnings
|
Jul 15, 2023
|
Alan Jo
|
Alan Jo
|
Jul 15, 2023
|
### Trust Region Policy Optimization
|
23c0f917124d419cbfe478d20b804276
|
|
PPO2
|
PPO
| null | null | null | null | null |
> [ppo2.py in OpenAI Baselines](https://github.com/openai/baselines/blob/master/baselines/ppo2/ppo2.py)
|
378944bb3b8848a185c20fc529c390d2
|
ML Compiler Optimization
|
Machine Learning Techniques
|
Jul 9, 2022
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
[Compiler Optimization](https://texonom.com/compiler-optimization-3eee7067387b45d7875df50f4473ad18) [Parallel Training](https://texonom.com/parallel-training-4a8896bc837b4dddb47c3700b715cdc8)
|
[relax](https://github.com/mlc-ai/relax)
### ML Compiler Optimization Tools
|Title|
|:-:|
|[XLA](https://texonom.com/xla-4f75dff43dc0451aa5ca92e9218a3028)|
|[MLGO](https://texonom.com/mlgo-60d88727f0b944ec9ce62d254c2e8a76)|
|[Hidet](https://texonom.com/hidet-7b706024cb3040269f71e47b5b87d2b2)|
|
011e7bd0ba8f417bb111ec5ea2171c8e
|
Parallel Training
|
Machine Learning Techniques
|
Mar 15, 2022
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
[ML Compiler Optimization](https://texonom.com/ml-compiler-optimization-011e7bd0ba8f417bb111ec5ea2171c8e)
|
### data parallelism or model parallelism
- In data parallelism, the data is split into multiple parts processed by replicas of the same model
- In model parallelism, different parts of the model are processed by separate processors (see the sketch below)
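A minimal PyTorch-flavored sketch contrasting the two schemes, assuming two visible GPUs; `MyNet` and the device placements are illustrative assumptions, not a recommended production setup.
```python
import torch
import torch.nn as nn

class MyNet(nn.Module):
    """Toy two-layer net used to illustrate both schemes."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(512, 512)
        self.fc2 = nn.Linear(512, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# Data parallelism: replicate the whole model on each GPU,
# split every input batch across the replicas.
data_parallel = nn.DataParallel(MyNet().cuda())

# Model parallelism: place different layers on different GPUs
# and move activations between devices inside forward().
class ModelParallelNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(512, 512).to("cuda:0")
        self.fc2 = nn.Linear(512, 10).to("cuda:1")

    def forward(self, x):
        x = torch.relu(self.fc1(x.to("cuda:0")))
        return self.fc2(x.to("cuda:1"))
```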
### Parallel Training Notion
|Title|
|:-:|
|[Model Parallelism](https://texonom.com/model-parallelism-76dd813ada7b4e50b645af2f05821d48)|
|[Data Parallelism](https://texonom.com/data-parallelism-8e90f1c595a84dad9f4e921e74f86ba6)|
### Parallel Training Usages
|Title|
|:-:|
|[Parallel Learning Tool](https://texonom.com/parallel-learning-tool-2a9741aa76c14c16a1240f3422f11421)|
|[Parallel Training Example](https://texonom.com/parallel-training-example-3c16ddf97fbe43359e7da3dbd3ce96ee)|

> [๋ฅ๋ฌ๋ ๋ชจ๋ธ์ ๋ถ์ฐํ์ต์ด๋? (Data parallelism๊ณผ Model parallelism)](https://lifeisenjoyable.tistory.com/21)
|
4a8896bc837b4dddb47c3700b715cdc8
|
Quantum Machine Learning
|
Machine Learning Techniques
|
Mar 9, 2022
|
Alan Jo
|
Alan Jo
|
Mar 5, 2023
|
### Quantum Machine Learnings
|Title|
|:-:|
> [Spooky Action Could Help Boost Quantum Machine Learning](https://spectrum.ieee.org/quantum-machine-learning)
|
8f5276045f4d43b8b96f3b4ec6646f66
|
|
Weight Initialization
|
Machine Learning Techniques
|
Jun 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
- Initialize weights to small random numbers
- Initialize biases to zero or a small nonzero value
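A minimal NumPy sketch of that default scheme (the 0.01 scale is an assumed illustrative value; see the He/Xavier pages below for principled scales):
```python
import numpy as np

def init_layer(n_in, n_out, scale=0.01, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, scale, size=(n_in, n_out))  # small random weights
    b = np.zeros(n_out)                             # zero (or small nonzero) biases
    return W, b
```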
### Weight Initialization Usages
|Title|
|:-:|
|[He initialization](https://texonom.com/he-initialization-a255fdaeec8e485faf3215a28ed5fdb9)|
|[Xavier Initialization](https://texonom.com/xavier-initialization-40045bfdf72343aea3a234214145f9dd)|
> [0025 Initialization - Deepest Documentation](https://deepestdocs.readthedocs.io/en/latest/002_deep_learning_part_1/0025/)
|
6cfc10eb06f948528aa76a9814a9ac85
|
|
Hidet
|
ML Compiler Optimization Tools
|
May 1, 2023
|
Alan Jo
|
Alan Jo
|
May 1, 2023
|
[Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686)
|
> [PyTorch](https://pytorch.org/blog/introducing-hidet)
|
7b706024cb3040269f71e47b5b87d2b2
|
MLGO
|
ML Compiler Optimization Tools
|
Jul 9, 2022
|
Alan Jo
|
Alan Jo
|
Mar 11, 2023
|
[LLVM](https://texonom.com/llvm-5dc6acb10b5244a2af349319ef87c797) [ml-compiler-opt](https://github.com/google/ml-compiler-opt)
|
### Infrastructure for Machine Learning Guided Optimization
> [MLGO: A Machine Learning Framework for Compiler Optimization](https://ai.googleblog.com/2022/07/mlgo-machine-learning-framework-for.html)
|
60d88727f0b944ec9ce62d254c2e8a76
|
XLA
|
ML Compiler Optimization Tools
|
Mar 11, 2023
|
Alan Jo
|
Alan Jo
|
Mar 11, 2023
|
[xla](https://github.com/openxla/xla)
|
- pytorch
- tensorflow
- jax
|
4f75dff43dc0451aa5ca92e9218a3028
|
Data Parallelism
|
Parallel Training Notion
|
Apr 25, 2023
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
Splits the training data across multiple GPUs for training
### Data Parallelism Usages
|Title|
|:-:|
|
8e90f1c595a84dad9f4e921e74f86ba6
|
|
Model Parallelism
|
Parallel Training Notion
|
Apr 25, 2023
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
Splits the model across multiple GPUs
### Model Parallelism Usages
|Title|
|:-:|
|
76dd813ada7b4e50b645af2f05821d48
|
|
Parallel Learning Tool
|
Parallel Training Usages
|
Apr 25, 2023
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
### Parallel Training System
|Title|
|:-:|
|[Colossal AI](https://texonom.com/colossal-ai-031c5480b24249ce903fea4e0f8d435c)|
|[Megatron LM](https://texonom.com/megatrom-lm-87be23e9b623465395a0d6a4e94470ae)|
|[DeepSpeed](https://texonom.com/deepspeed-3866b23c00eb4d529de6e33dc48ffae7)|
|
2a9741aa76c14c16a1240f3422f11421
|
|
Parallel Training Example
|
Parallel Training Usages
|
Apr 25, 2023
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
### Parallel Learning Examples
|Title|
|:-:|
|[gpt-neox](https://texonom.com/48f191fcffa04c068978381a78b4ca8d)|
|
3c16ddf97fbe43359e7da3dbd3ce96ee
|
|
Colossal AI
|
Parallel Training System
|
Mar 15, 2022
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
[ColossalAI](https://github.com/hpcaitech/ColossalAI)
|
> [Colossal-AI](https://colossalai.org/)
|
031c5480b24249ce903fea4e0f8d435c
|
DeepSpeed
|
Parallel Training System
|
Feb 19, 2021
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
[DeepSpeed](https://github.com/microsoft/DeepSpeed)
|
### Pipeline Parallelism
> [DeepSpeed Pipeline Parallelism](https://velog.io/@nawnoes/DeepSpeed-Pipeline-Parallelism)
> [PyTorch Lightning DeepSpeed](https://velog.io/@nawnoes/PyTorch-Lightning-DeepSpeed)
|
3866b23c00eb4d529de6e33dc48ffae7
|
Megatron LM
|
Parallel Training System
|
Apr 25, 2023
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
[Megatron-LM](https://github.com/NVIDIA/Megatron-LM)
|
87be23e9b623465395a0d6a4e94470ae
|
|
[gpt-neox](https://github.com/EleutherAI/gpt-neox)
|
Parallel Learning Examples
|
Apr 25, 2023
|
Alan Jo
|
Alan Jo
|
Apr 25, 2023
|
[Megatron LM](https://texonom.com/megatrom-lm-87be23e9b623465395a0d6a4e94470ae) [DeepSpeed](https://texonom.com/deepspeed-3866b23c00eb4d529de6e33dc48ffae7)
|
48f191fcffa04c068978381a78b4ca8d
|
|
**He initialization**
|
Weight Initialization Usages
|
Jul 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
[ReLU](https://texonom.com/relu-e582549804da48b893758895e446ffb9)
|
ReLU + He initialization is the standard combination
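For reference, the He normal form draws weights as (fan-in variant; this formula is standard, not from the note itself):
$$W \sim \mathcal{N}\!\left(0,\ \frac{2}{n_{\text{in}}}\right),\qquad b = 0$$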
> [Delving Deep into Rectifiers (He et al., ICCV 2015, PDF)](https://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf)
|
a255fdaeec8e485faf3215a28ed5fdb9
|
**Xavier Initialization**
|
Weight Initialization Usages
|
Jul 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 6, 2023
|
## Glorot Initialization
Uses the number of neurons in the previous layer and in the next layer
Balances the variance of gradients across the layers
Performs well with S-shaped activation functions, but performs poorly with ReLU
### Uniform Distribution
### Normal distribution
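For reference, the standard Glorot forms for the two distributions named above, where $n_{\text{in}}$ and $n_{\text{out}}$ are the neuron counts of the previous and next layers:
$$W \sim U\!\left[-\sqrt{\frac{6}{n_{\text{in}}+n_{\text{out}}}},\ \sqrt{\frac{6}{n_{\text{in}}+n_{\text{out}}}}\right]\quad\text{(uniform)}$$
$$W \sim \mathcal{N}\!\left(0,\ \frac{2}{n_{\text{in}}+n_{\text{out}}}\right)\quad\text{(normal)}$$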
> [07-07 ๊ธฐ์ธ๊ธฐ ์์ค(Gradient Vanishing)๊ณผ ํญ์ฃผ(Exploding)](https://wikidocs.net/61375)
|
40045bfdf72343aea3a234214145f9dd
|
|
Function Transformers
|
Machine Learning Tools
|
Jun 13, 2021
|
Alan Jo
|
Alan Jo
|
Jan 9, 2023
|
### Composable transformations
> [Transformers](https://huggingface.co/transformers/)
|
c4396d81a01a4425ba0c6702501c911a
|
|
ML Accelerator
|
Machine Learning Tools
|
Jun 1, 2022
|
Alan Jo
|
Alan Jo
|
Jun 1, 2022
|
### ML Accelerators
|Title|
|:-:|
|
0df3a271237b4e63a342aa7ce704870d
|
|
ML Analyze Tool
|
Machine Learning Tools
|
Aug 5, 2021
|
Alan Jo
|
Alan Jo
|
Apr 19, 2022
|
### ML Analyze Tools
|Title|
|:-:|
|[Uptrain](https://texonom.com/uptrain-a12b6a1c3410434ea318cb76a2f99a98)|
|[Evidently](https://texonom.com/evidently-0500b048a7c34d8e91e18207e43cefd1)|
|
5aeb48a9850245eb97e20ec56448a15f
|
|
ML Container Tool
|
Machine Learning Tools
|
May 11, 2022
|
Alan Jo
|
Alan Jo
|
May 11, 2022
|
### ML Container Tools
|Title|
|:-:|
|[Cog](https://texonom.com/cog-1b9175a7b63c43b5b6cd69a223f9d99d)|
|
14223d2021c748268fba092dee1fa357
|
|
ML Feature Store
|
Machine Learning Tools
|
Apr 19, 2022
|
Alan Jo
|
Alan Jo
|
Apr 19, 2022
|
### ML Feature Stores
|Title|
|:-:|
|[Feathr](https://texonom.com/feathr-5ff843203b4745df949b956391bf9423)|
|
f53069a7083b4719b1e5fab18a5a9bbd
|
|
ML Platform
|
Machine Learning Tools
|
Sep 8, 2021
|
Alan Jo
|
Alan Jo
|
Aug 4, 2022
|
[](https://texonom.com/fee31eaaf53a45c28b0b305c3874b856)
|
### ML Platforms
|Title|
|:-:|
|[diffgram](https://texonom.com/diffgram-99e0ffaf73f84615a556b6bbe71c4572)|
|[Wandb](https://texonom.com/wandb-2219d228212940068aa5a604af7d5dbc)|
|
9d4142db8db042ed9e4a79085348cc55
|
Evidently
|
ML Analyze Tools
|
Aug 5, 2021
| null | null | null | null |
> [GitHub - evidentlyai/evidently: Interactive reports to analyze machine learning models during validation or production monitoring.](https://github.com/evidentlyai/evidently?ref=producthunt?utm_source=tldrnewsletter)
|
0500b048a7c34d8e91e18207e43cefd1
|
Uptrain
|
ML Analyze Tools
|
Mar 9, 2023
| null | null | null | null |
[uptrain](https://github.com/uptrain-ai/uptrain)
|
a12b6a1c3410434ea318cb76a2f99a98
|
Cog
|
ML Container Tools
|
May 11, 2022
|
Alan Jo
|
Alan Jo
|
May 11, 2022
|
[cog](https://github.com/replicate/cog)
|
1b9175a7b63c43b5b6cd69a223f9d99d
|
|
Feathr
|
ML Feature Stores
|
Apr 19, 2022
|
Alan Jo
|
Alan Jo
|
Apr 19, 2022
|
[LinkedIn](https://texonom.com/linkedin-1c0eb8ae1ca346a388e79c15b34355dc) [feathr](https://github.com/linkedin/feathr)
|
> [Open sourcing Feathr - LinkedIn's feature store for productive machine learning](https://engineering.linkedin.com/blog/2022/open-sourcing-feathr-linkedin-s-feature-store-for-productive-m)
### Template Gallery
|Title|
|:-:|
|[Template Page](https://texonom.com/template-page-b6dd128730be402fbf47e98d1a81c5f2)|
|
5ff843203b4745df949b956391bf9423
|
Template Page
|
Template Gallery
|
Apr 19, 2022
|
Alan Jo
|
Alan Jo
|
Apr 19, 2022
|
b6dd128730be402fbf47e98d1a81c5f2
|
||
diffgram
|
ML Platforms
|
Sep 8, 2021
| null | null | null |
> [GitHub - diffgram/diffgram: Complete training data platform for machine learning delivered as a single application.](https://github.com/diffgram/diffgram)
|
99e0ffaf73f84615a556b6bbe71c4572
|
|
Wandb
|
ML Platforms
|
Aug 4, 2022
| null | null | null |
[wandb](https://github.com/wandb/wandb)
|
2219d228212940068aa5a604af7d5dbc
|
|
AdaBoost
|
ML Meta Algorithms
|
Oct 6, 2021
|
Alan Jo
|
Alan Jo
|
Oct 6, 2021
|
Used in combination with many other types of learning algorithms to improve performance
|
79feac6204384e7299e08e3cfa40d05e
|
|
Deep Learning
|
Neural Network Notion
|
Nov 5, 2019
|
Alan Jo
|
Seong-lae Cho
|
Jul 5, 2023
|
[Neuroscience ](https://texonom.com/neuroscience-b45d9f638a2b4330906556c402307925)
|
### Neural Network based Machine Learning method
composition of differentiable functions
Brain Algorithm + [Neural Network](https://texonom.com/neural-network-86f54f9f1de848c1a29c56c24f7d5094) + [Big Data](https://texonom.com/big-data-236ec9f0ed844f4d8a5ca3236dfa442c)
limit - explainability, fairness, generalizability, causality
The main feature of a neural network is that it can find non-heuristic feature representations
### Deep Learning Notion
|Title|
|:-:|
|[Deep Learning Math](https://texonom.com/deep-learning-math-57c204edef3042568bbd0d5268b877fd)|
|[Deep Learning Network](https://texonom.com/deep-learning-network-f368faf8fa634699aeda503d01c193f0)|
|[End-to-end Deep Learning](https://texonom.com/end-to-end-deep-learning-16bd62f447a144f1b251acf25f1f8789)|
### Deep Learning Usages
|Title|
|:-:|
|[Deep Learning Tool](https://texonom.com/deep-learning-tool-a14ea6f4574342ef974443634e27c6ce)|
|[Deep Learning Compiler](https://texonom.com/deep-learning-compiler-7d79af9683764b6b983793c1856578c6)|
|[Sentiment Neuron](https://texonom.com/sentiment-neuron-2e44e9754b894534af6c121b6d6074d6)|
|[Learn Deep Learning](https://texonom.com/learn-deep-learning-4382083fb54d4705984e7f45a9af2d86)|
### Interviews with notable people
> [Geoffrey Hinton on the impact and potential of AI](https://www.youtube.com/watch?v=IvUw9um4Bv8)
> [Interview with Ilya Sutskever, the core of OpenAI](https://www.youtube.com/watch?v=SGCFeIbpGlU&t=722s)
|
7d3c8b9ce05b49cf9eed92dbcdc80cfd
|
Neural Network History
|
Neural Network Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
Jun 7, 2023
|
> [[Miracle Letter] The analog computers the AI era needs!!](https://stibee.com/api/v1.0/emails/share/KeeaUnUVO5o8muoWpT9bjEtpsL5hny0=)
|
2127bdb56da54a659e570f47704e40b1
|
|
Neural Network Structure
|
Neural Network Notion
|
May 11, 2023
|
Alan Jo
|
Alan Jo
|
Jul 4, 2023
|
### Neural Network Components
|Title|
|:-:|
|[Activation Function](https://texonom.com/activation-function-8e52ee5f83a244d88abeeee3fb9497a8)|
|[Neural Network Layer](https://texonom.com/neural-network-layer-e10ef2afe1954cf6b909f8aa40077393)|
|[Forward Forward Algorithm](https://texonom.com/forward-forward-algorithm-6c989313e382466bba02d15c412fb17f)|
|[skip connections](https://texonom.com/skip-connections-d7ad187f3487468db6eea278f0236a22)|
### Neural Networks
|Title|
|:-:|
|[Perceptron](https://texonom.com/perceptron-1deb66f486d54c93bb928d8afba1864c)|
|[FFNN](https://texonom.com/ffnn-89ecee87d8b7482e86995950db90eb31)|
|[CNN](https://texonom.com/cnn-002bf81a77bc40d1858740d26b61d97b)|
|[RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16)|
|[ANN](https://texonom.com/ann-d4232205ecf9463c95a911d179c87a84)|
|[SNN](https://texonom.com/snn-6cf69239a87e4df9a32b3494862374e4)|
|[GNN](https://texonom.com/gnn-58adb81cb2b649d9af19019182960bb2)|
|
400bbea8029c4eb1a97c0dd063735551
|
|
Deep Learning Math
|
Deep Learning Notion
|
Nov 5, 2019
|
Alan Jo
|
Seong-lae Cho
|
Mar 26, 2023
|
### Sub-field of ML: learning representations of data
Existing ML uses manually designed features
- often over-specified and incomplete
- take a long time to design and validate
DL
- Learned features are easy to adapt and fast to train
- Deep learning provides a very flexible, (almost?) universal framework for representing information
- Effective end-to-end joint system learning
> speech recognition: not good
> visual perception: good
> question answering: good
> Worked example for a 3-4-2 network: 4 + 2 = 6 neurons (not counting inputs), and the same number of biases (one per resulting node)
> [3 x 4] + [4 x 2] = 20 weights
Optimize (minimize or maximize) the **objective/cost function $J(\theta)$**; generate an **error signal** that measures the difference between predictions and target values
A word's raw representation is its character codes (e.g. "tree" is 74 72 65 65 in $x_0$)
but that is poor
so we need a new representation for words
## WordNet
WordNet contains lists of synonyms/hypernyms, built using human resources
A word's meaning is given by the words that frequently appear close by
### Context: the set of words that appear nearby in a fixed-size window (e.g. 5 words before/after)
A co-occurrence matrix has dimension (number of words) x (number of words)
Memory is O(n^2), which is poor
Almost all values are 0, so we can use dimensionality reduction (PCA, SVD)
→ With word embeddings, similarity can now be judged with a vector product (the one-hot vector becomes a dense vector)
> We can build embeddings across languages
> vector similarity (cosine)
# Language Modeling: models of P(text) for a sentence
Scores a sentence
> Adding everything up is an example (linear features)
> usually the sum is passed through a softmax
Softmax is applied just before the output
# 1. P(text) - linear model
## CBOW
Predict a word based on the sum of the surrounding embeddings (a minimal sketch follows below)
## Skip-gram
Predict each word in the context given the word [very good, good, neutral, bad, very bad]
Linear models can't learn feature combinations
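A minimal NumPy sketch of the CBOW scoring idea under these notes' setup; the embedding matrix `E`, output matrix `W`, and the context word ids are assumptions for illustration:
```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 10_000, 64                  # vocabulary size, embedding dimension
E = rng.normal(0, 0.01, (V, D))    # input word embeddings
W = rng.normal(0, 0.01, (D, V))    # output projection to vocabulary scores

def cbow_predict(context_ids):
    """Score every vocabulary word given the surrounding context."""
    h = E[context_ids].sum(axis=0)         # sum of surrounding embeddings
    scores = h @ W                         # linear scores over the vocabulary
    probs = np.exp(scores - scores.max())  # softmax the sum
    return probs / probs.sum()

p = cbow_predict([12, 45, 873, 991])       # hypothetical context word ids
print(p.argmax())                          # most likely center word id
```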
# 2. Models of P(label | text)
## BOW
Each word has its own 5 elements, corresponding to the 5 labels
## Deep CBOW
Combination features: each vector has "features" (e.g. is this an animate object? is this a positive word? etc.)
# Convolutional Networks
### CNN with pooling
Weak as a long-distance feature extractor
Doesn't have a holistic view of the sentence
# RNN
Good as a long-distance feature extractor
Weakness: indirect passing of information makes credit assignment more difficult
Can be slow, due to incremental processing
## Modeling sentences with n-grams
# P(text | text)
> Conditional language models
> Calculating the probability of a sentence
> RNNs are frequently used in language modeling since they can capture long-distance dependencies
|
57c204edef3042568bbd0d5268b877fd
|
|
Deep Learning Network
|
Deep Learning Notion
|
Oct 6, 2021
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
### Deep Learning Models
|Title|
|:-:|
|[Seq2Seq](https://texonom.com/seq2seq-01a9854dffa6417c87d92c11a607250c)|
|[GAN](https://texonom.com/gan-66482b5f518d47f6b337eba9a30ff792)|
|[Capsule Network](https://texonom.com/capsule-network-a14fc7e569864154aae1ef44106e8991)|
|[MANN](https://texonom.com/mann-84ed691391c8439ba8e6b297623c9c0e)|
|
f368faf8fa634699aeda503d01c193f0
|
|
End-to-end Deep Learning
|
Deep Learning Notion
|
Apr 3, 2023
|
Alan Jo
|
Alan Jo
|
Apr 3, 2023
|
Can obtain the output directly from the input
Means a network in which every parameter can be trained simultaneously against a single loss function
Skips [Text Tokenizer](https://texonom.com/text-tokenizer-2bbc41eaa76c4674a4f4b9127fbe5da1) and [Text Encoding](https://texonom.com/text-encoding-ab7377cfc5c648059de4860510ad9134)
Uses a lot of memory
> [What is end-to-end deep learning?](https://velog.io/@jeewoo1025/What-is-end-to-end-deep-learning)
|
16bd62f447a144f1b251acf25f1f8789
|
|
Capsule Network
|
Deep Learning Models
|
Aug 21, 2021
|
Alan Jo
|
Alan Jo
|
Oct 6, 2021
|
[CNN](https://texonom.com/cnn-002bf81a77bc40d1858740d26b61d97b)
|
## CapsNet
An artificial neural network that avoids the problems CNNs have in image recognition
Detects rotation and learns it as part of the activation vector
> [What is a CapsNet or Capsule Network?](https://medium.com/hackernoon/what-is-a-capsnet-or-capsule-network-2bfbe48769cc)
> [Why Do Capsule Networks Work Better Than Convolutional Neural Networks?](https://medium.com/@ashukumar27/why-do-capsule-networks-work-better-than-convolutional-neural-networks-f4a105a53aff)
|
a14fc7e569864154aae1ef44106e8991
|
GAN
|
Deep Learning Models
|
Nov 18, 2019
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
[Unsupervised learning](https://texonom.com/unsupervised-learning-8cb6e253bfa845b5931d22963ea93019) [Generative Model](https://texonom.com/generative-model-6e5204d2982b4042847aa42e88eb8fb5) [Transfer Learning](https://texonom.com/transfer-learning-442feb66465944eebf144d4e9dd1dbf8)
|
## Generative Adversarial Network
Learns how to generate samples; like a counterfeiter and the police competing with and improving against each other
### GAN Notion
|Title|
|:-:|
|[Generator Network](https://texonom.com/generator-network-3d3fb237d8f149979c7ed172aca65529)|
|[Discriminator Network](https://texonom.com/discriminator-network-a02287522a264d779574871285eed3b6)|
|[GAN Minmax Game](https://texonom.com/gan-minmax-game-d3b08fa32fc34cf5920bfc0e80c34b90)|
|[GAN Issues](https://texonom.com/gan-issues-be6312c3a8184b55ba443167164101ba)|
### GANs
|Title|
|:-:|
|[3D GAN](https://texonom.com/3d-gan-672af0e2639041e484ed1a26b56f84cb)|
|[DCGAN](https://texonom.com/dcgan-ae93b511fed5402c926e818243d8a966)|
|[DragGan](https://texonom.com/draggan-bd335de6767243e7b7ec48ff88dea800)|
> [Generative adversarial network](https://en.wikipedia.org/wiki/Generative_adversarial_network)
|
66482b5f518d47f6b337eba9a30ff792
|
MANN
|
Deep Learning Models
|
Apr 29, 2023
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
### Memory Augmented Neural Networks
A model that combines a memory structure with a base model such as an RNN or CNN
### MANN Notion
|Title|
|:-:|
|[Differentiable Neural Computer](https://texonom.com/differentiable-neural-computer-2c182410c5c34222b41605b63c37c777)|
|[Neural Turing Machine](https://texonom.com/neural-turing-machine-c03efee39e1942f197f4b3d6553e4ac1)|
|
84ed691391c8439ba8e6b297623c9c0e
|
|
Seq2Seq
|
Deep Learning Models
|
Mar 4, 2023
|
Alan Jo
|
Alan Jo
|
Jul 30, 2023
|
### Variable Length of inputs and outputs
The encoder-decoder structure is mainly used when the input and output sentences have different lengths
- Encoder takes the input sequence and converts it into a fixed length vector representation
- Decoder uses this vector to generate the output sequence
### Seq2Seq Notion
|Title|
|:-:|
|[Attention Mechanism](https://texonom.com/attention-mechanism-762711860abb45f59904f1ac4e4af285)|
|[Copy mechanism](https://texonom.com/copy-mechanism-b5dee9c80ca24bb993b5f152129b3577)|
### Seq2Seq Models
|Title|
|:-:|
|[Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b)|
|[Transformer Model](https://texonom.com/transformer-model-f3e8053cc5b447a2bc9c6b5d0874dafc)|
|[Encoder Model](https://texonom.com/encoder-model-321d79943c8940fcaac0c9ccca0b5f6f)|
> [14-01 ์ํ์ค-ํฌ-์ํ์ค(Sequence-to-Sequence, seq2seq)](https://wikidocs.net/24996)
|
01a9854dffa6417c87d92c11a607250c
|
|
Discriminator Network
|
GAN Notion
|
Nov 18, 2019
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
tries to distinguish between real and fake images
Discriminator $\phi$ aims at maximizing the objective
- $D(x)$ to be close to 1 for real
- $D(G(z))$ to be close to 0 for fake
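These two conditions are the two terms of the standard discriminator objective, stated here for reference:
$$\max_\phi\ \mathbb{E}_{x\sim p_{\text{data}}}\big[\log D_\phi(x)\big] + \mathbb{E}_{z\sim p(z)}\big[\log\big(1 - D_\phi(G(z))\big)\big]$$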
|
a02287522a264d779574871285eed3b6
|
|
GAN Issues
|
GAN Notion
|
Jun 1, 2023
|
Alan Jo
|
Alan Jo
|
Jun 14, 2023
|
### 1. Non-convergence
the minmax objective generates a **cycle**, so the dynamics repeat; even with a small learning rate, it will not converge
Example: $\min_x \max_y V(x, y) = xy$
### 2. Mode-Collapse
What if the generator keeps generating a **single realistic image**? The discriminator will always be fooled by that single sample
### Mini Batch Trick
**Compute the similarity** of the image $x$ with other images in the same batch to avoid Mode-Collapse. **This measures the diversity of the batch.**
**Feed the similarity score** along with the image to the discriminator as an input feature. This penalizes the generator and encourages it to generate less similar images
Many more advanced techniques have been proposed since then.
|
be6312c3a8184b55ba443167164101ba
|
|
GAN Minmax Game
|
GAN Notion
|
Jun 1, 2023
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
## Training GAN
Aims for $D(G(z))$ to be close to 1, i.e. the discriminator is fooled
### Gradient descent on generator

If $G$ is very bad compared to $D$, we get almost zero gradient; hence the $-D$ term can be used
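A common way to state this heuristic (the non-saturating generator loss; this framing is added for reference, not from the note): instead of minimizing $\log(1 - D(G(z)))$, which is flat when $D$ confidently rejects fakes, the generator maximizes
$$\mathbb{E}_{z \sim p(z)}\big[\log D(G(z))\big]$$
which gives a strong gradient exactly when the generator is losing.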

### Gradient ascent on discriminator
The generator $\theta$ and discriminator $\phi$ have very different objectives, so stable training is hard
|
d3b08fa32fc34cf5920bfc0e80c34b90
|
|
Generator Network
|
GAN Notion
|
Nov 18, 2019
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
tries to fool the discriminator by generating realistic sample
Generator $\theta $ aims at minimizing the objective
|
3d3fb237d8f149979c7ed172aca65529
|
|
3D GAN
|
GANs
|
Feb 24, 2022
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
> [3D Generative Adversarial Network](http://3dgan.csail.mit.edu/)
|
672af0e2639041e484ed1a26b56f84cb
|
|
DCGAN
|
GANs
|
Mar 5, 2023
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
[DCGAN-tensorflow](https://github.com/carpedm20/DCGAN-tensorflow)
|
ae93b511fed5402c926e818243d8a966
|
|
DragGan
|
GANs
|
May 29, 2023
|
Alan Jo
|
Alan Jo
|
Jun 1, 2023
|
[DragGAN](https://github.com/XingangPan/DragGAN)
|
### Unofficial Huggingface implementation
> [DragGAN - a Hugging Face Space by fffiloni](https://huggingface.co/spaces/fffiloni/DragGAN)
> [Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold](https://vcai.mpi-inf.mpg.de/projects/DragGAN/)
|
bd335de6767243e7b7ec48ff88dea800
|
Differentiable Neural Computer
|
MANN Notion
|
May 29, 2023
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
2c182410c5c34222b41605b63c37c777
|
||
Neural Turing Machine
|
MANN Notion
|
May 29, 2023
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
c03efee39e1942f197f4b3d6553e4ac1
|
||
Decoder Model
|
Seq2Seq Models
|
Mar 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
[Encoder Model](https://texonom.com/encoder-model-321d79943c8940fcaac0c9ccca0b5f6f) [Text Generation](https://texonom.com/text-generation-cb0e475216c043fdbb4f309d324c9d19)
|
## Autoregressive Model, Causal Language Model
A model that generates sequence data by feeding the previous step's output in as the current step's input
A language model that predicts (guesses) the next word with a unidirectional model
[Decoder Input IDs](https://texonom.com/decoder-input-ids-2b1e1d3c7c274b0880191a54290bed27)
### Autoregressive
Means a model in which the current value is expressed as a linear combination of the previous values
Since it uses only the Transformer decoder, it generates sequence data by using the previous step's output as the current step's input
This approach does not process the whole sentence at once, but proceeds word by word (token by token)
It predicts the next word based on the previous words; because the whole sentence is not processed at once, the model cannot see the entire sentence
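The chain-rule factorization behind this left-to-right setup, stated for reference:
$$P(x_1, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \dots, x_{t-1})$$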
> [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/)
> [Decoder models - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter1/6?fw=pt)
|
36e78a40265c473d90197089aebfa83b
|
Encoder Model
|
Seq2Seq Models
|
Mar 6, 2023
|
Alan Jo
|
Alan Jo
|
Jul 30, 2023
|
[Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b)
|
## Auto-encoding Model, Masked Language Model
transform text or images into a condensed numerical representation called an embedding
### These models are often characterized as having bi-directional attention
A model that turns the input sentence into vectors and then reconstructs it
Preserves the meaning of the input sentence while converting it into a form that is easy for the model to process
The pretraining of these models usually revolves around somehow corrupting a given sentence and tasking the model with finding or reconstructing the initial sentence
### Auto-encoding
Encodes the input sequence and maps all of it into the [Latent Space](https://texonom.com/latent-space-d67b6bdef18b4058bfbc3d25f87ec087)
The input sequence is processed token by token, but self-attention computes, for each position, the similarity between that position and every other position, so context can be taken into account; positions with high similarity are given more weight in that position's embedding vector
> [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/)
> [Encoder models - Hugging Face NLP Course](https://huggingface.co/learn/nlp-course/chapter1/5?fw=pt)
|
321d79943c8940fcaac0c9ccca0b5f6f
|
Transformer Model
|
Seq2Seq Models
|
Aug 17, 2020
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
[Attention Mechanism](https://texonom.com/attention-mechanism-762711860abb45f59904f1ac4e4af285) [Attention is all you need](https://texonom.com/attention-is-all-you-need-f52e664318eb47c0aa6cbd27b9a4c491)
|
## Self Attention is the core feature
the Transformer gains a wider perspective and can attend to multiple interaction levels within the input sentence.
Attention is not used merely to supplement the encoder; the encoder and decoder are built out of attention alone
Because all tokens are received and processed at once, parallel computation is possible
A structure in which 6 encoder blocks and 6 decoder blocks are stacked
An encoder block can be split into 2 sub-layers (Multi-Head (self) Attention, Feed Forward)
and a decoder block into 3 sub-layers (Masked Multi-Head (self) Attention, Multi-Head (Encoder-Decoder) Attention, Feed Forward)
Unlike earlier attention, every vector involved is a learned weight vector
### Transformer Model Notion
|Title|
|:-:|
|[Attention is all you need](https://texonom.com/attention-is-all-you-need-f52e664318eb47c0aa6cbd27b9a4c491)|
|[NLP Transformer Encoder](https://texonom.com/nlp-transformer-encoder-d5bc8cebe43d4fc7896d8e0290683b18)|
|[NLP Transformer Decoder](https://texonom.com/nlp-transformer-decoder-1ee79f4754d24aa8b64312399782cbec)|
|[Transformer Attention](https://texonom.com/transformer-attention-84d340d69722409a96ec1d806970608a)|
|[Transformer Model Tool ](https://texonom.com/transformer-model-tool-01c749c1e9254d8c85ec5ef2feb0566b)|
### Transformer Models
|Title|
|:-:|
|[RETRO Transformer](https://texonom.com/retro-transformer-e3efdb06419948ebb1be444f4e124867)|
|[Torchscale](https://texonom.com/torchscale-c4b82c1245d74c7092617e70b9ddab51)|
|[BART](https://texonom.com/bart-48bd2258eee74af3b2d929b03bb9553b)|
|[RMT](https://texonom.com/rmt-f076841449f2419a82b3ce09281b9bb9)|
|[Transformer-XL](https://texonom.com/transformer-xl-eb2128e01dc744539b6825f3880da761)|
### Architecture
> [Transformerโs Encoder-Decoder: Letโs Understand The Model Architecture - KiKaBeN](https://kikaben.com/transformers-encoder-decoder/)
### Pseudo Source code
> [Transformers for software engineers](https://blog.nelhage.com/post/transformers-for-software-engineers/)
### Korean
> [16-01 Transformer](https://wikidocs.net/31379)
> [[Deep Learning] Concept summary: language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋ฅ๋ฌ๋-์ฉ์ด์ ๋ฆฌ)
|
f3e8053cc5b447a2bc9c6b5d0874dafc
|
Decoder Input IDs
|
Decoder Model
| null | null | null | null | null |
## token indices
|
2b1e1d3c7c274b0880191a54290bed27
|
Attention is all you need
|
Transformer Model Notion
|
Aug 23, 2020
|
Alan Jo
|
Alan Jo
|
May 30, 2023
|
[Transformer Model](https://texonom.com/transformer-model-f3e8053cc5b447a2bc9c6b5d0874dafc)
|
The 2017 paper that first introduced the Transformer architecture
The paper's goal was to build a non-recurrent sequence-to-sequence encoder-decoder model
Replaced [RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16) Encoder Decoder Model
## **Background**
### **1. Sequential computation**
In solving sequence-to-sequence problems, encoder-decoder RNN models had achieved good performance.
### **2. Long term dependency**
RNNs always carry the long-term dependency problem; CNNs are O(1) within a kernel, but information is not shared across kernels.
# Model Architecture
A stack of 6; each encoder consists of a Self-Attention layer and a Feed-Forward Neural Network (2 sub-layers)
- Encoder
- Multi-Head Attention
- Positional Encoding
- Relative Positioning
- The Residuals
- Decoder
### Author
> [ashVaswani](https://twitter.com/ashVaswani)
### pdf
> [Attention Is All You Need (arXiv PDF)](https://arxiv.org/pdf/1706.03762.pdf)
> [Attention Is All You Need (Transformer) paper summary](https://medium.com/@omicro03/attention-is-all-you-need-transformer-paper-%EC%A0%95%EB%A6%AC-83066192d9ab)
> [16-01 Transformer](https://wikidocs.net/31379)
|
f52e664318eb47c0aa6cbd27b9a4c491
|
NLP Transformer Decoder
|
Transformer Model Notion
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
Mar 7, 2023
|
### 3 Sub Layer
[Masked Self-Attention](https://texonom.com/masked-self-attention-cb0e29589c93423780cc0ca60260f3e4) + [Multi-head Attention](https://texonom.com/multi-head-attention-d9dfc39b27494123ae4c81f3b98e50b5) + [Position-wise FFNN](https://texonom.com/position-wise-ffnn-6fe30c96aa9245d898c73ec34625377d)

|
1ee79f4754d24aa8b64312399782cbec
|
|
NLP Transformer Encoder
|
Transformer Model Notion
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
Mar 7, 2023
|
### 2 Sub Layer
[Multi-head Attention](https://texonom.com/multi-head-attention-d9dfc39b27494123ae4c81f3b98e50b5) + [Position-wise FFNN](https://texonom.com/position-wise-ffnn-6fe30c96aa9245d898c73ec34625377d)

|
d5bc8cebe43d4fc7896d8e0290683b18
|
|
Transformer Attention
|
Transformer Model Notion
|
Apr 3, 2023
|
Alan Jo
|
Alan Jo
|
Apr 3, 2023
|
### Transformer Attentions
|Title|
|:-:|
|[Encoder-Decoder Attention](https://texonom.com/encoder-decoder-attention-4b3bb5b4aeac4681bf6bdca66ea79e04)|
|[Encoder Self-Attention](https://texonom.com/encoder-self-attention-c1e03ab0c57f44c395f2056e201f4326)|
|[Masked Self-Attention](https://texonom.com/masked-self-attention-cb0e29589c93423780cc0ca60260f3e4)|
|
84d340d69722409a96ec1d806970608a
|
|
Transformer Model Tool
|
Transformer Model Notion
|
May 22, 2023
|
Alan Jo
|
Alan Jo
|
May 22, 2023
|
[Transformers.js](https://texonom.com/transformersjs-3e7f8bb1d88940298936f13ee1ce7ed7)
|
### Transformer Model Tools
|Title|
|:-:|
|[trl](https://texonom.com/3ddce975d3aa4f6dbca6e3d4b3eb6e6e)|
|
01c749c1e9254d8c85ec5ef2feb0566b
|
Encoder-Decoder Attention
|
Transformer Attentions
|
Mar 5, 2023
|
Alan Jo
|
Alan Jo
|
Apr 3, 2023
|
Serves to tie together information from the input and the output
|
4b3bb5b4aeac4681bf6bdca66ea79e04
|
|
Encoder Self-Attention
|
Transformer Attentions
|
Mar 5, 2023
|
Alan Jo
|
Alan Jo
|
Apr 3, 2023
|
c1e03ab0c57f44c395f2056e201f4326
|
||
Masked Self-Attention
|
Transformer Attentions
|
Mar 7, 2023
|
Alan Jo
|
Alan Jo
|
Apr 3, 2023
|
A special self-attention used in the decoder block
Because the decoder is [Autoregressive](https://texonom.com/autoregressive-93cf5710b4b54730a7e7efcc6e0fc642), it must predict without seeing the following words
So masking is applied to keep it from looking ahead
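A minimal NumPy sketch of that mask (names are illustrative): positions after the current token get $-\infty$ before the softmax, so their attention weights become zero.
```python
import numpy as np

def causal_mask(T):
    # True above the diagonal = future positions that must be hidden
    return np.triu(np.ones((T, T), dtype=bool), k=1)

def masked_softmax(scores):
    scores = scores.copy()
    scores[causal_mask(scores.shape[0])] = -np.inf  # block attention to the future
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

A = masked_softmax(np.random.randn(5, 5))
print(np.allclose(np.triu(A, k=1), 0))  # True: no weight on future tokens
```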
> [[Deep Learning] Concept summary: language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋ฅ๋ฌ๋-์ฉ์ด์ ๋ฆฌ)
|
cb0e29589c93423780cc0ca60260f3e4
|
|
[trl](https://github.com/lvwerra/trl)
|
Transformer Model Tools
|
May 22, 2023
|
Alan Jo
|
Alan Jo
|
May 22, 2023
|
3ddce975d3aa4f6dbca6e3d4b3eb6e6e
|
||
BART
|
Transformer Models
|
Mar 25, 2023
|
Alan Jo
|
Alan Jo
|
May 31, 2023
|
[AI Summarization](https://texonom.com/ai-summarization-a314a6fb3162447086b8d2526ae8ef16) [KoBART-summarization](https://github.com/seujung/KoBART-summarization) [BERT](https://texonom.com/bert-e282abe8a34543988a0e71f6c8701ad2)
|
**Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension**
2019
> [BART: Denoising Sequence-to-Sequence Pre-training for Natural...](https://arxiv.org/abs/1910.13461)
> [BART Text Summarization vs. GPT-3 vs. BERT: An In-Depth Comparison | Width.ai](https://www.width.ai/post/bart-text-summarization)
> [BART paper review](https://dladustn95.github.io/nlp/BART_paper_review/)
|
48bd2258eee74af3b2d929b03bb9553b
|
RETRO Transformer
|
Transformer Models
|
Jul 5, 2022
|
Alan Jo
|
Alan Jo
|
May 22, 2023
|
[Vector Database](https://texonom.com/vector-database-5dfdb6e2bc294fed8ae80eaea2ee5c26) [Deepmind](https://texonom.com/deepmind-5eb171c77b344d4786a9a5b23ae70eca)
|
### Retrieval-Enhanced
### Fast
Keeps the retrieval database outside the model
### Implementations
[RETRO-pytorch](https://github.com/lucidrains/RETRO-pytorch)
> [Improving language models by retrieving from trillions of tokens](https://www.deepmind.com/publications/improving-language-models-by-retrieving-from-trillions-of-tokens)
> [The Illustrated Retrieval Transformer](https://jalammar.github.io/illustrated-retrieval-transformer/)
> [RETRO Is Blazingly Fast](http://mitchgordon.me/ml/2022/07/01/retro-is-blazing.html)
### Korean
> [RETRO: Improving language models by retrieving from trillions of tokens](https://velog.io/@nawnoes/RETRO-Improving-language-models-by-retrieving-from-trillions-of-tokens)
|
e3efdb06419948ebb1be444f4e124867
|
RMT
|
Transformer Models
|
May 3, 2023
|
Alan Jo
|
Alan Jo
|
May 3, 2023
|
[LM-RMT](https://github.com/booydar/LM-RMT) [RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16)
|
## **Recurrent Memory Transformer**
GPT-4's maximum input length for inference is 32,000 tokens
This model can handle 2 million tokens
> [Recurrent Memory Transformer](https://arxiv.org/abs/2207.06881)
> ["It may change everything"... AI based on RMT, with 63x the memory of GPT-4, appears](https://contents.premium.naver.com/themiilk/business/contents/230426095632265ym)
|
f076841449f2419a82b3ce09281b9bb9
|
Torchscale
|
Transformer Models
|
Dec 2, 2022
|
Alan Jo
|
Alan Jo
|
May 22, 2023
|
[Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686) [torchscale](https://github.com/microsoft/torchscale)
|
c4b82c1245d74c7092617e70b9ddab51
|
|
**Transformer-XL**
|
Transformer Models
|
May 3, 2023
|
Alan Jo
|
Alan Jo
|
May 3, 2023
|
[transformer-xl](https://github.com/kimiyoung/transformer-xl)
|
**Attentive Language Models Beyond a Fixed-Length Context**
> [Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context](https://arxiv.org/abs/1901.02860)
|
eb2128e01dc744539b6825f3880da761
|
Attention Mechanism
|
Seq2Seq Notion
|
Mar 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
[Encoder Model](https://texonom.com/encoder-model-321d79943c8940fcaac0c9ccca0b5f6f) [Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b) [RNN](https://texonom.com/rnn-f7aad56acb5542b2ac26c2908be4ce16)
|
**Reads the entire sentence at once and can compute each word's representation in parallel based on the whole sentence**
Imagine yourself in a library. You have a specific question (**query**). Books on the shelves have titles on their spines (**keys**) that suggest their content. You compare your question to these titles to decide how relevant each book is, and how much **attention** to give each book. Then, you get the information (**value**) from the relevant books to answer your question.
the query vector points to the current input word (aka context).
**The *keys* represent the words in the input sentence. The key vectors help the model understand how each word relates to the context word.**
**Attention is how much weight the query word should give each word in the sentence. This is computed via a dot product between the query vector and all the key vectors.** These dot products then go through a softmax which makes the attention scores (across all keys) sum to 1.
**Each word is also represented by a *value* which contains the information of that word. As a result, each context word is now represented by an attention-based weightage of all the words in the sentence.**
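A minimal NumPy sketch of this query/key/value computation (scaled dot-product attention; the shapes and names are illustrative, not taken from any specific library):
```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (T_q, d_k), K: (T_k, d_k), V: (T_k, d_v)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                    # query-key dot products
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax: rows sum to 1
    return weights @ V                                 # attention-weighted values

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 16))
out = scaled_dot_product_attention(Q, K, V)            # shape (4, 16)
```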
### NLP Attention Notion
|Title|
|:-:|
|[Self-Attention](https://texonom.com/self-attention-d06bde44563f455b951d17955d820f77)|
|[Multi-head Attention](https://texonom.com/multi-head-attention-d9dfc39b27494123ae4c81f3b98e50b5)|
|[Cross-Attention](https://texonom.com/cross-attention-6365d422d2bf4be78dac038df3a19aae)|

### NLP Attention Usages
|Title|
|:-:|
|[Flash Attention](https://texonom.com/flash-attention-ca8093deb6c648059aff1580b0ddbf68)|
|[Dilated Attention](https://texonom.com/dilated-attention-d9000e3460f547bca3f15ac9b8d7e36c)|
|[Multi Query Attention](https://texonom.com/multi-query-attention-5641aba38a8b47caa3a9f364c13789f1)|
|[Group Query Attentiion](https://texonom.com/group-query-attentiion-d72d2da59fba4e9d937a4ec856dac90f)|
|[PagedAttention](https://texonom.com/pagedattention-abf197357343437681fb878ae5700926)|
> [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/)
> [[Deep Learning] Concept summary: language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋ฅ๋ฌ๋-์ฉ์ด์ ๋ฆฌ)
> [16-01 Transformer](https://wikidocs.net/31379)
|
762711860abb45f59904f1ac4e4af285
|
***Copy mechanism***
|
Seq2Seq Notion
|
Mar 8, 2023
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
A method devised to handle the out-of-vocabulary problem, where vocabulary needed to generate the sentence during decoding is missing from the output vocabulary, and the problem of proper nouns receiving low output probability: the needed vocabulary is located in the input and copied into the output
> [Untitled](https://koreascience.kr/article/CFKO201612470014629.pdf)
|
b5dee9c80ca24bb993b5f152129b3577
|
|
Cross-Attention
|
NLP Attention Notion
|
Apr 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
[MLLM](https://texonom.com/mllm-98172e9446c04bcc9cf52b2bc5d0bd17)
|
## Encoder-Decoder Attention
in [Decoder Model](https://texonom.com/decoder-model-36e78a40265c473d90197089aebfa83b)
Query: the decoder vector / **Key** = Value: the encoder vectors
Uses the encoder output and the decoder's current state to let the decoder predict the next word
A very important component of MLLMs
Models the interaction between image or video data and text data
$$h_i = \mathrm{softmax}\!\left(\frac{x_i K_v^\top}{\sqrt{d_k}}\right) V_v$$
|
6365d422d2bf4be78dac038df3a19aae
|
Multi-head Attention
|
NLP Attention Notion
|
Mar 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
### **Multiple heads lets the model consider multiple words simultaneously.**
Because we use the softmax function in attention, it amplifies the highest value while squashing the lower ones. As a result, each head tends to focus on a single element.
Multiple heads let us attend to several words. **It also provides redundancy**: if any single head fails, we have the other attention heads to rely on.
Works by splitting the input vector into multiple heads, performing attention in each head separately, and combining the results
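A minimal NumPy sketch of this split-attend-combine pattern; for brevity the learned per-head projection matrices ($W_Q, W_K, W_V$) are omitted, so this only illustrates the head bookkeeping:
```python
import numpy as np

def attention(Q, K, V):
    w = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(w - w.max(-1, keepdims=True))
    w /= w.sum(-1, keepdims=True)
    return w @ V

def multi_head(X, n_heads=4):
    """Split the model dimension into heads, attend per head, then re-combine."""
    T, d = X.shape
    assert d % n_heads == 0
    heads = X.reshape(T, n_heads, d // n_heads).transpose(1, 0, 2)  # (h, T, d_h)
    outs = [attention(h, h, h) for h in heads]   # self-attention in each head
    return np.concatenate(outs, axis=-1)         # (T, d) combined output

X = np.random.default_rng(0).normal(size=(5, 32))
print(multi_head(X).shape)  # (5, 32)
```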
> [Some Intuition on Attention and the Transformer](https://eugeneyan.com/writing/attention/)
|
d9dfc39b27494123ae4c81f3b98e50b5
|
|
Self-Attention
|
NLP Attention Notion
|
Mar 5, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
### Models the interaction between words within text data
Self-attention enables the encoder to weigh the importance of each word and capture both short and long-range dependencies.
Self-attention enables the decoder to focus on different parts of the output generated so far.
- Encoder self-attention: Query = **Key** = Value
- Decoder masked self-attention: Query = **Key** = Value
Self-attention means the attention is performed on itself
To grasp the relations among the elements inside a sentence, the attention mechanism is applied to the sentence itself
- Query: a weight vector for the word being analyzed
- Key: a weight vector for comparing how related each word is to the query word
- Value: a weight vector that carries each word's meaning
### Self-attention process
1. Take the dot product of a particular word's query (q) vector with every word's key (k) vector; the resulting values are the attention scores.
2. As a correction, the Transformer divides these scores by $\sqrt{d_k}$, the square root of the q, k, v vector dimension $d_k$.
3. Softmax computes the proportion of the relation the query word has with the other words in the sentence.
4. Multiply by each word's value vector and sum everything.
> [[Deep Learning] Concept summary: language models, RNN, GRU, LSTM, Attention, Transformer, GPT, BERT](https://velog.io/@rsj9987/๋ฅ๋ฌ๋-์ฉ์ด์ ๋ฆฌ)
|
d06bde44563f455b951d17955d820f77
|
|
Dilated Attention
|
NLP Attention Usages
|
Jul 13, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|

> [Microsoftโs LongNet Scales Transformer to One Billion Tokens](https://medium.com/syncedreview/microsofts-longnet-scales-transformer-to-one-billion-tokens-af02ff657d87)
|
d9000e3460f547bca3f15ac9b8d7e36c
|
|
Flash Attention
|
NLP Attention Usages
|
Jun 29, 2023
|
Alan Jo
|
Alan Jo
|
Jul 28, 2023
|
[flash-attention](https://github.com/HazyResearch/flash-attention)
|
Accelerates attention using **tiling** and **recomputation**
Computes the softmax block by block
Implemented as a CUDA kernel
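A minimal NumPy sketch of the block-wise (online) softmax trick that makes this tiling possible; it shows only the numerical idea for a single query row, not FlashAttention's actual kernel:
```python
import numpy as np

def online_softmax_weighted_sum(scores, values, block=128):
    """One pass over key blocks, keeping a running max m and normalizer l."""
    m, l = -np.inf, 0.0
    acc = np.zeros(values.shape[-1])
    for i in range(0, len(scores), block):
        s, v = scores[i:i+block], values[i:i+block]
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)       # rescale what was accumulated so far
        p = np.exp(s - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ v
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
s, V = rng.normal(size=512), rng.normal(size=(512, 64))
exact = (np.exp(s - s.max()) / np.exp(s - s.max()).sum()) @ V
print(np.allclose(online_softmax_weighted_sum(s, V), exact))  # True
```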
> [FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | Wonbeom Jang](https://www.wonbeomjang.kr/blog/2023/fastattention/)
|
ca8093deb6c648059aff1580b0ddbf68
|
Group Query Attentiion
|
NLP Attention Usages
|
Aug 2, 2023
|
Alan Jo
|
Alan Jo
|
Aug 2, 2023
|
d72d2da59fba4e9d937a4ec856dac90f
|
||
Multi Query Attention
|
NLP Attention Usages
|
Aug 2, 2023
|
Alan Jo
|
Alan Jo
|
Aug 2, 2023
|
5641aba38a8b47caa3a9f364c13789f1
|
||
**PagedAttention**
|
NLP Attention Usages
|
Aug 3, 2023
|
Alan Jo
|
Alan Jo
|
Aug 3, 2023
|
Efficient management of attention key and value memory
|
abf197357343437681fb878ae5700926
|
|
Deep Learning Compiler
|
Deep Learning Usages
|
Mar 22, 2022
|
Alan Jo
|
Alan Jo
|
Mar 22, 2022
|
### Deep Learning Compiler Tools
|Title|
|:-:|
|[Nebullvm](https://texonom.com/nebullvm-d967865085f84f5ab2b1a4a5ea661dfd)|
|
7d79af9683764b6b983793c1856578c6
|
|
Deep Learning Tool
|
Deep Learning Usages
|
Oct 6, 2021
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
[tuning_playbook](https://github.com/google-research/tuning_playbook)
|
### Deep Learning Hubs
|Title|
|:-:|
|[HuggingFace](https://texonom.com/huggingface-eb25c513b432477e9da51ca19bb06833)|
|[Pytorch Hub](https://texonom.com/pytorch-hub-d46f5b09e91241578a0c63b4847396fb)|
### Deep Learning Tools
|Title|
|:-:|
|[fastbook](https://texonom.com/fastbook-d022baa785a14ce5af5cf1ef59995cb1)|
|[Netron](https://texonom.com/netron-2e410ef29aaa4a7d9e2a2777ce4dd3ee)|
|
a14ea6f4574342ef974443634e27c6ce
|
Learn Deep Learning
|
Deep Learning Usages
|
Jun 6, 2023
|
Alan Jo
|
Alan Jo
|
Jun 6, 2023
|
> [0020 DL Terms & Concepts - Deepest Documentation](https://deepestdocs.readthedocs.io/en/latest/002_deep_learning_part_1/0020/)
|
4382083fb54d4705984e7f45a9af2d86
|
|
Sentiment Neuron
|
Deep Learning Usages
|
May 20, 2023
|
Alan Jo
|
Alan Jo
|
May 20, 2023
|
### Proof of compressed data in deep network
Before GPT-1
### Why Important
> [Ilya Sutskever on the missing link to AGI](https://www.youtube.com/watch?v=LQviQS24uQY&t=840)
> [Unsupervised sentiment neuron](https://openai.com/research/unsupervised-sentiment-neuron)
> [Sentiment Neuron](https://tensorflow.blog/2017/04/07/sentiment-neuron/)
|
2e44e9754b894534af6c121b6d6074d6
|
|
Nebullvm
|
Deep Learning Compiler Tools
|
Mar 22, 2022
|
Alan Jo
|
Alan Jo
|
Mar 22, 2022
|
[nebuly](https://github.com/nebuly-ai/nebullvm)
|
d967865085f84f5ab2b1a4a5ea661dfd
|
|
HuggingFace
|
Deep Learning Hubs
|
Jun 27, 2022
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
[AI Industry](https://texonom.com/ai-industry-d8709bd0498145e3a66af6da3f963fa7) [Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686)
|
### [Pytorch Hub](https://texonom.com/pytorch-hub-d46f5b09e91241578a0c63b4847396fb) + [Lightening AI](https://texonom.com/lightening-ai-af651491f5474031bc51e6d1a99d5f22)
### HuggingFace Usages
|Title|
|:-:|
|[HuggingFace Hub](https://texonom.com/huggingface-hub-9b16b4841f2d4bdd9827120c47677b29)|
|[Huggingface Model](https://texonom.com/huggingface-model-f985be46b10e462f91e65576b81f452f)|
|[Huggingface Dataset](https://texonom.com/huggingface-dataset-43582d2257474a83874ecec2d4c6ab44)|
|[HuggingFace Space](https://texonom.com/huggingface-space-e37f59c218ca48cebe509d3ab2381b34)|
### LLM Leaderboard
> [Open LLM Leaderboard - a Hugging Face Space by HuggingFaceH4](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
> [[D] HuggingFace ecosystem vs. Pytorch Lightning for big research NLP project with many collaborators.](https://www.reddit.com/r/MachineLearning/comments/si08qt/d_huggingface_ecosystem_vs_pytorch_lightning_for/)
|
eb25c513b432477e9da51ca19bb06833
|
Pytorch Hub
|
Deep Learning Hubs
|
May 29, 2023
|
Alan Jo
|
Alan Jo
|
May 29, 2023
|
[HuggingFace](https://texonom.com/huggingface-eb25c513b432477e9da51ca19bb06833) [Pytorch](https://texonom.com/pytorch-2dd232d99b3a46d5b7d1e4e686070686)
|
### Pytorch Hub Usages
|Title|
|:-:|
|
d46f5b09e91241578a0c63b4847396fb
|
Huggingface Dataset
|
HuggingFace Usages
|
May 23, 2023
|
Alan Jo
|
Alan Jo
|
Jul 19, 2023
|
### Huggingface Dataset Usages
|Title|
|:-:|
|[ HuggingFace Datasets Jax](https://texonom.com/huggingface-datasets-jax-a92d62f342de4ddc83820623059ca02e)|
> [Create a dataset](https://huggingface.co/docs/datasets/create_dataset)
|
43582d2257474a83874ecec2d4c6ab44
|
|
HuggingFace Hub
|
HuggingFace Usages
|
Jun 19, 2023
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
[huggingface_hub](https://github.com/huggingface/huggingface_hub)
> [Quickstart](https://huggingface.co/docs/huggingface_hub/quick-start)
|
9b16b4841f2d4bdd9827120c47677b29
|
|
Huggingface Model
|
HuggingFace Usages
|
May 23, 2023
|
Alan Jo
|
Alan Jo
|
Jun 29, 2023
|
### Huggingface Model Usages
|Title|
|:-:|
|[Huggingface Provider](https://texonom.com/huggingface-provider-8bd64d5951774d6f9ed623abbe471b4c)|
|[Huggingface H4](https://texonom.com/huggingface-h4-9b33bfe491704142929a794edd95a7df)|
|[Huggingface Model Card](https://texonom.com/huggingface-model-card-39d3f0d8805c4cb0939b623b8bac64ea)|
> [Models](https://huggingface.co/docs/hub/models)
|
f985be46b10e462f91e65576b81f452f
|
|
HuggingFace Space
|
HuggingFace Usages
|
Mar 26, 2023
|
Alan Jo
|
Alan Jo
|
Jul 13, 2023
|
[Gradio](https://texonom.com/gradio-7e1647e9bc174161b4d7e8f44dd53707) [Streamlit](https://texonom.com/streamlit-9e295c64d27e4999878a022b1c538964)
|
### HuggingFace Space SDK
|Title|
|:-:|
|[HuggingFace Docker Space](https://texonom.com/huggingface-docker-space-0f250609ad1f493bb91146813de8d8a6)|
|
e37f59c218ca48cebe509d3ab2381b34
|
HuggingFace Datasets Jax
|
Huggingface Dataset Usages
|
May 30, 2023
|
Alan Jo
|
Alan Jo
|
Jul 17, 2023
|
> [Use with JAX](https://huggingface.co/docs/datasets/use_with_jax)
|
a92d62f342de4ddc83820623059ca02e
|
|
Huggingface H4
|
Huggingface Model Usages
|
Jul 9, 2023
|
Alan Jo
|
Alan Jo
|
Jul 9, 2023
|
### helpful, honest, harmless, and huggy
[StarChat](https://texonom.com/starchat-ea316ec509564a51b56ad92b22831220)
|
9b33bfe491704142929a794edd95a7df
|
|
Huggingface Model Card
|
Huggingface Model Usages
|
Jul 19, 2023
|
Alan Jo
|
Alan Jo
|
Jul 30, 2023
|
### Disable API
```yaml
inference: false
```
- tags
- pipeline tags
- etc
### Widgets
> [Widgets](https://huggingface.co/docs/hub/models-widgets)
> [Disable Hosted inference API](https://discuss.huggingface.co/t/disable-hosted-inference-api/10379)
|
39d3f0d8805c4cb0939b623b8bac64ea
|
|
Huggingface Provider
|
Huggingface Model Usages
|
Jun 29, 2023
|
Alan Jo
|
Alan Jo
|
Aug 5, 2023
|
### Companies
> [amazon (Amazon Web Services)](https://huggingface.co/amazon)
> [stabilityai (Stability AI)](https://huggingface.co/stabilityai)
> [EleutherAI (EleutherAI)](https://huggingface.co/EleutherAI)
> [allenai (Allen Institute for AI)](https://huggingface.co/allenai)
### Model Users
> [ehartford (Eric Hartford)](https://huggingface.co/ehartford)
> [psmathur (Pankaj Mathur)](https://huggingface.co/psmathur)
> [TheBloke (Tom Jobbins)](https://huggingface.co/TheBloke)
> [jncraton (Jon)](https://huggingface.co/jncraton)
> [bhenrym14 (Brandon)](https://huggingface.co/bhenrym14)
### Organization
> [decapoda-research (Decapoda Research)](https://huggingface.co/decapoda-research)
> [openchat (OpenChat)](https://huggingface.co/openchat)
> [MBZUAI (Mohamed Bin Zayed University of Artificial Intelligence)](https://huggingface.co/MBZUAI)
> [openchat/openchat ยท Hugging Face](https://huggingface.co/openchat/openchat)
> [OpenAssistant (OpenAssistant)](https://huggingface.co/OpenAssistant)
|
8bd64d5951774d6f9ed623abbe471b4c
|