Jebin2005 committed
Commit b7ca217 · verified · 1 Parent(s): 258cdf5

Update requirements.txt

Files changed (1)
  1. requirements.txt +30 -4
requirements.txt CHANGED
@@ -1,4 +1,30 @@
- Gradio
- TinyLlama/TinyLlama-1.1B-Chat-v1.0 Model
- Hugging Face Transformer
- Pytorch and Tensorflow
+ Python Libraries
+ Transformers:
+ Required to use Hugging Face's pre-trained models for text generation.
+ Install with: pip install transformers
+ Gradio:
+ Provides the user interface for your chatbot.
+ Install with: pip install gradio
+ Model and Tokenizer
+ TinyLlama/TinyLlama-1.1B-Chat-v1.0:
+ Ensure this model is available on Hugging Face's model hub.
+ Requires downloading the model and tokenizer with from_pretrained.
+ Additional Dependencies
+ PyTorch or TensorFlow:
+ Either backend is required for running Hugging Face models.
+ Install with:
+ PyTorch: pip install torch
+ TensorFlow (optional): pip install tensorflow
+ Hardware Requirements
+ A machine with sufficient GPU memory (e.g., 8 GB or more) is recommended for faster inference, especially for models like TinyLlama-1.1B.
+ If using a CPU, ensure the machine has adequate resources, though response times may be slower.
+ Optional Tools
+ Internet Connection: Required for the first-time download of the model and tokenizer from Hugging Face's servers.
+ Custom Gradio Theme (Optional): If you need custom styling beyond the built-in Monochrome theme, you might need CSS.
+ Running the Code
+ Save the script in a .py file (e.g., interview_chatbot.py).
+ Run the script using Python:
+ python interview_chatbot.py
+ Gradio will launch a local server, and a link (e.g., http://127.0.0.1:7860) will open in your web browser.
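For context, the "Model and Tokenizer" step above refers to the standard Transformers loading pattern with from_pretrained. A minimal sketch of that step (the model ID comes from the file; variable names are illustrative):

```python
# Minimal sketch: download and load TinyLlama and its tokenizer from the Hugging Face hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)      # fetched on first run, then cached locally
model = AutoModelForCausalLM.from_pretrained(model_id)   # requires PyTorch (or the TF* classes for TensorFlow)
```

On the first run this downloads the weights over the network, which is why an internet connection is listed above; subsequent runs use the local cache.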
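The "Running the Code" step assumes a script that wires the model into a Gradio UI and launches the local server. That script (interview_chatbot.py) is not part of this commit, so the following is only a hypothetical sketch of such wiring, assuming a text-generation pipeline, gr.ChatInterface, and the Monochrome theme mentioned above:

```python
# Hypothetical sketch of interview_chatbot.py; the actual script may differ.
import gradio as gr
from transformers import pipeline

# Text-generation pipeline around the TinyLlama chat model.
chat = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")

def respond(message, history):
    # Format the user message with the model's chat template and generate a reply.
    prompt = chat.tokenizer.apply_chat_template(
        [{"role": "user", "content": message}],
        tokenize=False,
        add_generation_prompt=True,
    )
    output = chat(prompt, max_new_tokens=256, return_full_text=False)
    return output[0]["generated_text"]

demo = gr.ChatInterface(respond, theme=gr.themes.Monochrome())
demo.launch()  # serves at http://127.0.0.1:7860 by default
```

Running python interview_chatbot.py on a script like this starts the local Gradio server and prints the link shown above.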