kernel
danieldk (HF Staff) committed
Commit a743610 · 1 Parent(s): 8071e7c

Add README

Files changed (1):
  1. README.md +13 -0

README.md ADDED
@@ -0,0 +1,13 @@
+ ---
+ license: bsd-3-clause
+ tags:
+ - kernel
+ ---
+
+ # Flash Attention 3
+
+ Flash Attention is a fast, memory-efficient implementation of the attention
+ mechanism, designed for large models and long sequences. This repository is a
+ Hugging Face compliant kernel build of Flash Attention 3.
+
+ The original code is available at [https://github.com/Dao-AILab/flash-attention](https://github.com/Dao-AILab/flash-attention).
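
As a minimal usage sketch (not part of this commit): Hub kernels are fetched with the `kernels` package's `get_kernel` loader. The repo id below is hypothetical, and whether this build exposes the upstream `flash_attn_func` entry point (and its exact return type) should be checked against the flash-attention repository.

```python
# Minimal sketch: loading this kernel via the Hugging Face `kernels` library.
# Assumptions: the repo id, and that the build exposes `flash_attn_func`
# with the upstream flash-attention signature.
import torch
from kernels import get_kernel

flash_attn3 = get_kernel("kernels-community/flash-attn3")  # hypothetical id

# flash-attention takes (batch, seqlen, nheads, headdim) tensors on the GPU,
# in half precision (fp16/bf16).
q = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)
k = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)
v = torch.randn(2, 1024, 8, 64, device="cuda", dtype=torch.bfloat16)

# causal=True applies a causal mask, as used in decoder-style models.
out = flash_attn3.flash_attn_func(q, k, v, causal=True)
```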