[Installation](https://pixeltable.github.io/pixeltable/getting-started/) | [Documentation](https://pixeltable.readme.io/) | [API Reference](https://pixeltable.github.io/pixeltable/) | [Code Samples](https://github.com/pixeltable/pixeltable?tab=readme-ov-file#-code-samples) | [Computer Vision](https://docs.pixeltable.com/docs/object-detection-in-videos) | [LLM](https://docs.pixeltable.com/docs/document-indexing-and-rag)
</div>
Pixeltable is a Python library providing a declarative interface for multimodal data (text, images, audio, video). It features built-in versioning, lineage tracking, and incremental updates, enabling users to **store**, **transform**, **index**, and **iterate** on data for their ML workflows. Data transformations, model inference, and custom logic are embedded as **computed columns**. **Pixeltable is persistent. Unlike in-memory Python libraries such as Pandas, Pixeltable is a database.**
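To make computed columns concrete, here is a minimal sketch (the table, columns, and values are hypothetical, and the `t['col'] = expr` syntax may differ across Pixeltable versions): a derived column is declared once, and Pixeltable materializes it incrementally for all current and future rows.

```python
import pixeltable as pxt

# Hypothetical schema, for illustration only
films = pxt.create_table('films', {
    'title': pxt.StringType(),
    'revenue': pxt.FloatType(),
    'budget': pxt.FloatType(),
})

# Computed column: declared once, recomputed automatically as rows arrive
films['profit'] = films.revenue - films.budget

films.insert([{'title': 'example', 'revenue': 100.0, 'budget': 60.0}])
films.select(films.title, films.profit).collect()  # profit is filled in automatically
```

Because the table is persistent, both the data and the computed-column definition survive beyond the current Python process.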
## 🧱 Code Samples
### Text and image similarity search on video frames with embedding indexes
```python
import pixeltable as pxt
from pixeltable.functions.huggingface import clip_image, clip_text
from pixeltable.iterators import FrameIterator
import PIL.Image

# Create a persistent table with a single video column
video_table = pxt.create_table('videos', {'video': pxt.VideoType()})

video_table.insert([{'video': '/video.mp4'}])

# A view with one row per frame extracted from each video
frames_view = pxt.create_view(
    'frames', video_table, iterator=FrameIterator.create(video=video_table.video))

# CLIP embedding functions for images and text, used by the index below
@pxt.expr_udf
def embed_image(img: PIL.Image.Image):
    return clip_image(img, model_id='openai/clip-vit-base-patch32')

@pxt.expr_udf
def str_embed(s: str):
    return clip_text(s, model_id='openai/clip-vit-base-patch32')

# Create an index on the 'frame' column that allows text and image search
frames_view.add_embedding_index('frame', string_embed=str_embed, image_embed=embed_image)

# Retrieve the frames most similar to a sample image
sample_image = '/image.jpeg'
sim = frames_view.frame.similarity(sample_image)
frames_view.order_by(sim, asc=False).limit(5).select(frames_view.frame, sim=sim).collect()

# Retrieve the frames most similar to a text query
sample_text = 'red truck'
sim = frames_view.frame.similarity(sample_text)
frames_view.order_by(sim, asc=False).limit(5).select(frames_view.frame, sim=sim).collect()
```
Learn how to work with [Embedding and Vector Indexes](https://docs.pixeltable.com/docs/embedding-vector-indexes).
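Since Pixeltable is persistent, the table, view, and embedding index above outlive the process that created them. As a sketch (assuming the 'frames' view from the sample above, and that `pxt.get_table` resolves views as well as tables), a later session can reattach by name and query immediately:

```python
import pixeltable as pxt

# Reattach in a fresh session: no re-ingestion or re-indexing needed
frames_view = pxt.get_table('frames')

sim = frames_view.frame.similarity('red truck')
frames_view.order_by(sim, asc=False).limit(5).select(frames_view.frame, sim=sim).collect()
```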