Arthur Conmy (GDM Account) · ArthurConmyGDM
AI & ML interests: Interpretability, AI Safety, AI Alignment
Discussions:

- gemma-2-2b layer 20 SAE width 65k SAE seems very off (1 comment) · #8 opened 6 months ago by charlieoneill
- Removing SAEs with LR != 7e-5 (5 comments) · #7 opened 8 months ago by Aric
- Layer 13 saes raising "zipfile.BadZipFile: File is not a zip file" (5 comments) · #5 opened 11 months ago by MrGonao
- suggestion: notate the canonical SAEs (3 comments) · #9 opened 10 months ago by dribnet
- add experimental embedding SAEs · #4 opened 11 months ago by Aric
- add experimental embedding SAEs · #7 opened 12 months ago by Aric
- New table (1 comment) · #8 opened about 1 year ago by ArthurConmyGDM
- Link model to paper · #5 opened about 1 year ago by nielsr
- Link dataset to paper · #7 opened about 1 year ago by nielsr
- Link model to paper · #6 opened about 1 year ago by nielsr
- The L0 of the SAE does not quite match (1 comment) · #3 opened about 1 year ago by ShayanShamsi
- Update README.md · #5 opened about 1 year ago by NeelNanda2
- Update README.md · #4 opened about 1 year ago by NeelNanda2
- Uploaded demo GIF · #3 opened about 1 year ago by NeelNanda2
- Update README.md · #4 opened about 1 year ago by ArthurConmy
- Update README.md · #2 opened about 1 year ago by ArthurConmy
- Update README.md · #3 opened about 1 year ago by ArthurConmyGDM
- Delete layer_11/width_16k/average_l0_79 (1 comment) · #2 opened about 1 year ago by ArthurConmyGDM