Prompt Chain

#1
by awacke1 - opened
  1. User
    Create a torch demonstration and use data created with this program below for input and output. Design a torch demo that uses simple convolutions to explain correlation between one word and another, add to the IO patterns of this program, and use requests and NLP including NLTK to remix a demonstration app that uses the text files as input. Allow users to build their library of prompts and add it to a user file, a text file named by email address. Add links to sites with reference documentation. Use this as a teaching lesson in a Python Streamlit UI and code, but remove all comments and make the variable names super descriptive.

  2. Create requirements.txt for a program with these imports: import streamlit as st
    import torch
    import torch.nn as nn
    import nltk
    from nltk.corpus import stopwords
    import requests
    import os
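A minimal requirements.txt matching these imports might look like the sketch below; requests is included because it is imported above, while os is standard library and needs no entry (pin versions as needed for your deployment):

```text
streamlit
torch
nltk
requests
```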

  3. Analyze the build log below and create a Dockerfile based on Python slim for Python versions that work with NVIDIA cards: ===== Build Queued at 2024-01-25 10:24:10 / Commit SHA: 187aa2c =====

--> FROM docker.io/library/python:3.10@sha256:11edd7f5ed19b19adfc91d3384970da0787d53cf290a54db5f96b4dab645da6d
DONE 0.0s

--> RUN apt-get update && apt-get install -y git git-lfs ffmpeg libsm6 libxext6 cmake libgl1-mesa-glx && rm -rf /var/lib/apt/lists/* && git lfs install
CACHED

  1. Dockerfile:

# Use a Python slim image compatible with NVIDIA cards
FROM python:3.10-slim

# Set the working directory
WORKDIR /home/user/app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    git-lfs \
    ffmpeg \
    libsm6 \
    libxext6 \
    cmake \
    libgl1-mesa-glx \
    && rm -rf /var/lib/apt/lists/* \
    && git lfs install

# Add a non-root user
RUN useradd -m -u 1000 user

# Copy requirements and pre-requirements files into the container
COPY requirements.txt pre-requirements.txt /home/user/app/

# Install Python dependencies
RUN pip install --no-cache-dir pip==22.3.1 && \
    pip install --no-cache-dir -r pre-requirements.txt && \
    pip install --no-cache-dir -r requirements.txt

# Install NVIDIA CUDA libraries for GPU support
# Note: the versions should match the CUDA version on your NVIDIA GPU
RUN pip install \
    torch==2.1.2 \
    nvidia-cuda-cupti-cu12==12.1.105 \
    nvidia-cusolver-cu12==11.4.5.107 \
    nvidia-curand-cu12==10.3.2.106 \
    nvidia-cublas-cu12==12.1.3.1 \
    nvidia-cudnn-cu12==8.9.2.26 \
    nvidia-cufft-cu12==11.0.2.54 \
    nvidia-nccl-cu12==2.18.1 \
    nvidia-cusparse-cu12==12.1.0.106 \
    nvidia-cuda-runtime-cu12==12.1.105 \
    nvidia-cuda-nvrtc-cu12==12.1.105 \
    nvidia-nvtx-cu12==12.1.105 \
    nvidia-nvjitlink-cu12

# Switch to the non-root user
USER user

# Set the entry point for the container
CMD ["python"]
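Assuming the file above is saved as Dockerfile next to the two requirements files, a typical build-and-run sketch looks like this (the image name `text-conv-demo` is arbitrary; GPU passthrough requires the NVIDIA Container Toolkit on the host):

```shell
# Build the image from the directory containing the Dockerfile
docker build -t text-conv-demo .

# Run interactively with GPU access
docker run --gpus all -it text-conv-demo
```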

  1. Refactor the UI to be more self-explanatory and change the download so the email and the base64 download of the text are available. Have the convolution generate ten prompts with the information-value order distribution for the input and show a pandas-based plot of the words or tokens: Text Convolution Demonstration
    Upload a text file and see how text convolution works.

Choose a text file

Drag and drop file here
Limit 200MB per file • TXT
0823_1629_abouttheuniverseofStarWarsandincludeasummaryofeachplanetandshowemojisforeveryplaceandevery.txt
2.0KB
Enter your email to save your prompts:

  1. Convolution result: tensor([[[ 1.7409e+18, 3.7868e+18, -2.0988e+18, -2.9399e+18, 2.2897e+18, 1.9817e+18, -8.9956e+17, -7.7308e+17, -5.1334e+17, 8.1542e+17, 7.1413e+17, -1.3267e+18, -1.8908e+18, -5.5318e+17, 2.4777e+18, 5.1341e+17, 8.1472e+17, -2.9673e+18, 1.2300e+18, 1.1829e+18, 1.1893e+18, 3.7822e+18, -3.7751e+18, 1.5539e+18, 1.1271e+18, -1.4378e+18,
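The e+18 magnitudes in this output come from feeding raw Python hash() values (arbitrary 64-bit integers) straight into the convolution. A hedged sketch of bounding the scores first, so outputs stay readable (the modulus 1000 is an arbitrary illustrative choice, not part of the original program):

```python
def bounded_word_scores(words, modulus=1000):
    # Map each word's 64-bit hash into [0, 1) so convolution outputs stay small.
    # Python's % with a positive modulus always yields a non-negative remainder,
    # even for negative hash values.
    return [(hash(word) % modulus) / modulus for word in words]

scores = bounded_word_scores(["star", "wars", "planet"])
print(all(0.0 <= score < 1.0 for score in scores))  # → True
```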

  2. Program:
    import streamlit as st
    import torch
    import torch.nn as nn
    import nltk
    from nltk.corpus import stopwords
    import pandas as pd
    import matplotlib.pyplot as plt
    import base64

    # Ensure NLTK resources are downloaded
    nltk.download('punkt')
    nltk.download('stopwords')

    # Perform a 1-D convolution over hashed word tokens
    def text_convolution(input_text, kernel_size=3):
        words = nltk.word_tokenize(input_text)
        words = [word for word in words if word not in stopwords.words('english')]
        tensor_input = torch.tensor([hash(word) for word in words], dtype=torch.float)
        conv_layer = nn.Conv1d(1, 1, kernel_size, stride=1)
        tensor_input = tensor_input.view(1, 1, -1)
        output = conv_layer(tensor_input)
        return output, words

    # Streamlit UI
    def main():
        st.title("Text Convolution Demonstration")
        st.write("This app demonstrates how text convolution works. Upload a text file and see the convolution result along with a distribution plot of word tokens.")

        uploaded_file = st.file_uploader("Choose a text file (TXT only)", type=["txt"])
        user_email = st.text_input("Enter your email to save your prompts:")

        if uploaded_file is not None and user_email:
            text_data = uploaded_file.read().decode("utf-8")
            conv_result, words = text_convolution(text_data)
            st.write("Convolution result:", conv_result)

            # Visualization
            word_counts = pd.Series(words).value_counts()
            st.bar_chart(word_counts.head(10))

            # Saving user prompts
            user_file_name = f"{user_email}_prompts.txt"
            with open(user_file_name, "a") as file:
                file.write(text_data + "\n")
            st.success(f"Your prompts have been added to {user_file_name}")

            # Download link for the file
            with open(user_file_name, "rb") as f:
                b64 = base64.b64encode(f.read()).decode()
            href = f'<a href="data:file/txt;base64,{b64}" download="{user_file_name}">Download {user_file_name}</a>'
            st.markdown(href, unsafe_allow_html=True)

    if __name__ == "__main__":
        main()
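To see what the Conv1d layer is doing to the token sequence, the sliding-window (cross-correlation) step can be mimicked in plain Python with a hand-picked difference kernel; note the torch layer in the program above uses random, untrained weights instead, so its outputs differ:

```python
def sliding_window_convolve(values, kernel):
    # Valid-mode cross-correlation: the dot product of the kernel with each
    # window of the input, sliding one position at a time.
    window = len(kernel)
    return [
        sum(v * k for v, k in zip(values[i:i + window], kernel))
        for i in range(len(values) - window + 1)
    ]

token_scores = [1, 2, 3, 4, 5]
difference_kernel = [1, 0, -1]  # compares each token's score with its neighbor two positions later
print(sliding_window_convolve(token_scores, difference_kernel))  # → [-2, -2, -2]
```

Each output mixes a token's score with its neighbors', which is the sense in which the convolution captures local correlation between one word and another.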

  1. Show full code listing app.py and new requirements.txt

app.py:
import streamlit as st
import torch
import torch.nn as nn
import nltk
from nltk.corpus import stopwords
import pandas as pd
import base64

# Ensure NLTK resources are downloaded
nltk.download('punkt')
nltk.download('stopwords')

# Perform a 1-D convolution over hashed word tokens
def text_convolution(input_text, kernel_size=3):
    words = nltk.word_tokenize(input_text)
    words = [word for word in words if word not in stopwords.words('english')]
    tensor_input = torch.tensor([hash(word) for word in words], dtype=torch.float)
    conv_layer = nn.Conv1d(1, 1, kernel_size, stride=1)
    tensor_input = tensor_input.view(1, 1, -1)
    output = conv_layer(tensor_input)
    return output, words

# Streamlit UI
def main():
    st.title("Text Convolution Demonstration")
    st.write("This app demonstrates how text convolution works. Upload a text file and see the convolution result along with a distribution plot of word tokens.")

    uploaded_file = st.file_uploader("Choose a text file (TXT only)", type=["txt"])
    user_email = st.text_input("Enter your email to save your prompts:")

    if uploaded_file is not None and user_email:
        text_data = uploaded_file.read().decode("utf-8")
        conv_result, words = text_convolution(text_data)
        st.write("Convolution result:", conv_result)

        # Visualization
        word_counts = pd.Series(words).value_counts()
        st.bar_chart(word_counts.head(10))

        # Saving user prompts
        user_file_name = f"{user_email}_prompts.txt"
        with open(user_file_name, "a") as file:
            file.write(text_data + "\n")
        st.success(f"Your prompts have been added to {user_file_name}")

        # Download link for the file
        with open(user_file_name, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        href = f'<a href="data:file/txt;base64,{b64}" download="{user_file_name}">Download {user_file_name}</a>'
        st.markdown(href, unsafe_allow_html=True)

if __name__ == "__main__":
    main()

requirements.txt:
streamlit
torch
nltk
pandas
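With the two files above saved as app.py and requirements.txt, a typical way to launch the demo locally:

```shell
# Install dependencies, then start the Streamlit server
pip install -r requirements.txt
streamlit run app.py
```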
