---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: permalink
    dtype: string
  - name: comments
    sequence: string
  - name: num_comments
    dtype: int64
  - name: subreddit
    dtype: string
  - name: title
    dtype: string
  splits:
  - name: train
    num_bytes: 4997779774
    num_examples: 590721
  download_size: 3184699498
  dataset_size: 4997779774
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: mit
---
# BLIFT: Behavior-LLaVA Instruction Fine-Tuning Dataset
Paper: [**Teaching Human Behavior Improves Content Understanding Abilities of VLMs**](https://openreview.net/forum?id=TrKq4Wlwcz)
Website: [https://behavior-in-the-wild.github.io/behavior-llava.html](https://behavior-in-the-wild.github.io/behavior-llava.html)
---
## Dataset Summary
**BLIFT** (Behavior-LLaVA Instruction Fine-Tuning) is a large-scale multimodal instruction tuning dataset designed to teach **Vision-Language Models (VLMs)** human behavior. It contains over **730k images and videos** collected from Reddit and YouTube, annotated with **receiver behavior** such as **comments, likes, views, and replay graphs**.
By modeling these downstream receiver behaviors, training on BLIFT improves the **content understanding** abilities of VLMs, yielding significant improvements across 46 tasks spanning image, video, text, and audio understanding.
<img src="./bllava-fig_2.png" alt="bllava-fig" width="1000"/>
---
## Dataset Structure
Each sample in BLIFT includes:
| Field | Type | Description |
|------------------|-----------|-----------------------------------------------------------------------------|
| `permalink`      | `string`  | URL to the Reddit post                                                        |
| `url`            | `string`  | URL of the image or video media                                               |
| `title` | `string` | Title of the post or video |
| `comments` | `list[str]` | Top user comments (cleaned and filtered) |
| `num_comments` | `int` | Number of comments on the post |
| `subreddit` | `string` | Subreddit source |
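The table mirrors the feature schema declared in the YAML header, so the train split can be loaded directly with the `datasets` library. Below is a minimal exploration sketch; the repository id `behavior-in-the-wild/BLIFT` is an assumed placeholder and should be replaced with this dataset's actual Hub path:

```python
from datasets import load_dataset

# "behavior-in-the-wild/BLIFT" is a placeholder repo id, not confirmed by this card.
# Streaming avoids downloading the full ~3.2 GB of parquet shards up front.
ds = load_dataset("behavior-in-the-wild/BLIFT", split="train", streaming=True)

for sample in ds.take(3):
    print(sample["subreddit"], "|", sample["title"])
    print("media url:", sample["url"])
    print("permalink:", sample["permalink"])
    print("top comments:", sample["comments"][:2])
    print("num_comments:", sample["num_comments"])
    print("-" * 60)
```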
---
## Data Sources
BLIFT combines high-quality behavioral data from two sources:
### Reddit
- Subreddits: `r/pics`, `r/videos`
- Collected: 400k images, 330k videos
- Metadata: Upvotes and top comments
- Filtering: NSFW, bots, duplicates, minimum comment quality
### YouTube
- 250k videos from ~6,000 verified channels via Wikidata
- Metadata: Likes, views, top comments, replay graphs
- Filtering: English language, minimum 10k views, NSFW, duplicates
<img src="./filtering-final.png" alt="filtering" width="1000"/>
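The exact filtering pipeline is described in the paper and summarized in the figure above; the sketch below only illustrates the kind of rule-based checks listed for each source. The helper heuristics, field names, and thresholds are hypothetical placeholders, not the released code:

```python
# Illustrative only: rule-based filters mirroring the criteria listed above.
# The keyword list, field names, and thresholds are hypothetical placeholders.
NSFW_TERMS = {"nsfw", "nsfl"}

def is_nsfw(text: str) -> bool:
    return any(term in text.lower() for term in NSFW_TERMS)

def keep_reddit_post(post: dict, seen_urls: set) -> bool:
    """Drop NSFW, bot, duplicate, and low-comment-quality posts."""
    if is_nsfw(post["title"]) or post.get("author") == "AutoModerator":
        return False
    if post["url"] in seen_urls:  # crude duplicate check on the media URL
        return False
    seen_urls.add(post["url"])
    # keep only posts with at least one substantive comment
    return any(len(c.split()) >= 3 for c in post["comments"])

def keep_youtube_video(video: dict, seen_ids: set) -> bool:
    """Keep English videos with >= 10k views that are not NSFW or duplicates."""
    if video.get("language") != "en" or video.get("views", 0) < 10_000:
        return False
    if is_nsfw(video["title"]) or video["id"] in seen_ids:
        return False
    seen_ids.add(video["id"])
    return True
```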
---
## Benchmarks & Results
**Behavior-LLaVA**, a LLaMA-Vid model fine-tuned on BLIFT, outperforms the base LLaMA-Vid and other supervised baselines on:
- 46 tasks
- 26 benchmark datasets
- Across image, video, audio, and text modalities
<img src="./radar_chart (1).png" alt="results" width="1000"/>
---
## 🔗 Citation
If you use BLIFT, please cite:
```bibtex
@article{singh2024teaching,
  title={Teaching Human Behavior Improves Content Understanding Abilities Of LLMs},
  author={Singh, Somesh and SI, Harini and Singla, Yaman K and Baths, Veeky and Shah, Rajiv Ratn and Chen, Changyou and Krishnamurthy, Balaji},
  journal={arXiv preprint arXiv:2405.00942},
  year={2024}
}
```
---
## Contact
Contact [email protected] for questions and suggestions.