Hugging Face Space: seanpedrickcase/topic_modelling · 14 likes · Running
topic_modelling · 180 kB
4 contributors · 45 commits

Latest commit 22ca76e by seanpedrickcase, over 1 year ago: "Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions)."
| Name | Size | Last commit message | Last updated |
|---|---|---|---|
| .github | | first commit | over 2 years ago |
| funcs | | Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions). | over 1 year ago |
| .dockerignore | 203 Bytes | Can split passages into sentences. Improved embedding, LLM representation models, improved zero shot capabilities | almost 2 years ago |
| .gitignore | 226 Bytes | Can split passages into sentences. Improved embedding, LLM representation models, improved zero shot capabilities | almost 2 years ago |
| Dockerfile | 3.1 kB | Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions). | over 1 year ago |
| LICENSE | 11.6 kB | first commit | over 2 years ago |
| README.md | 1.9 kB | Updated Gradio version for spaces. Updated Dockerfile to enable Llama.cpp build with Cmake | over 1 year ago |
| app.py | 13.7 kB | Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions). | over 1 year ago |
| download_model.py | 558 Bytes | Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions). | over 1 year ago |
| how_to_create_exe_dist.txt | 1.89 kB | Upgraded to Gradio 4.16.0. Guide for converting to exe added. | about 2 years ago |
| requirements.txt | 615 Bytes | Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions). | over 1 year ago |
| requirements_gpu.txt | 646 Bytes | Allowed for app running on AWS to use smaller embedding model and not to load representation LLM (due to size restrictions). | over 1 year ago |