Create app.py
app.py
ADDED
@@ -0,0 +1,88 @@
import streamlit as st

AppGoals = """
Computer Use
1. Browser-based testing app
2. Similar to apps I wrote years ago that would drive a browser and then run tests against my web apps, including comparing any image or text content against search results from one of my AI programs to measure content overlap. The overlap is then used to evaluate the results and to update my AI model context data, storing anything that adds to the original idea. When I looked at this problem before, I found ChromeDriver for automated testing, Sauce Labs (which can partly do it), and some Python testing libraries that could do it. Can you enlighten me on which Python libraries, and potentially which dev tools, would help me automate the testing and evaluation of my AI-generated content, which lives at many different URLs on Hugging Face as running apps?
3. Past apps per Wayback from 2004:
- https://web.archive.org/web/20040520102150/http://www.evolvable.com/EStore/

WebTest 8.0
WebTest is a stress and load testing browser.
You can use WebTest to identify defects that occur when web sites incur a large amount of traffic.
To use WebTest, simply visit the pages that you want to test and WebTest remembers your navigation history. You can save the history to a text file that you can open from other machines or from other copies of WebTest running on your computer. To perform stress testing, set the time interval at which you would like WebTest to visit each URL, then set the cycle option to cycle through your site list. (A minimal modern sketch of this loop in Python appears after the feature list below.)

Features Include:
- Screen Captures
- Graph of Page Load Times
- Memory of Visited Pages
- Tunable Delay and Maximum Wait Time
- Hotkeys for Typical Browser Resolution Sizing
- XML/XSL based Usability Reporting
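
(A minimal modern sketch of this timed-visit idea in Python, using the requests library; the URL list and interval below are placeholders:)

```python
# Hypothetical sketch of WebTest-style timed visits: request each URL in turn,
# record the load time, and wait a fixed interval between visits.
import time
import requests

urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholders
interval_seconds = 5

for url in urls:
    start = time.time()
    response = requests.get(url, timeout=30)
    elapsed = time.time() - start
    print(f"{url}: HTTP {response.status_code} in {elapsed:.2f}s")
    time.sleep(interval_seconds)
```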

Crawler 1.0

Have you ever wanted to analyze a web site to see what it contains, yet didn't want to spend the next few days traversing all the links on the site?

With Crawler, you can enter a web address and Crawler visits every page, extracting a reusable list of links from the pages it reviews.
You can save the HTML source code from each page that Crawler encounters so you can analyze the site further, or automatically surf the site and take snapshots using Evolvable WebTest.

Crawler is an indispensable tool because it provides results in an easy-to-use format and also allows you to search for terms across different sites and save the list of links that match your search criteria.
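
(For comparison, a minimal sketch of the same crawl-and-collect-links idea in Python, assuming the requests and beautifulsoup4 packages; the start URL and page limit are placeholders:)

```python
# Hypothetical sketch of a simple same-site crawler: breadth-first visit,
# collecting every link found along the way.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl(start_url, max_pages=50):
    domain = urlparse(start_url).netloc
    seen, queue, links = set(), deque([start_url]), []
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        html = requests.get(url, timeout=30).text
        for anchor in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            links.append(link)
            if urlparse(link).netloc == domain:
                queue.append(link)
    return links
```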

Let me help you explore the Python libraries and tools that would be well-suited for automated browser testing and content comparison for AI-generated content.
The most popular and robust options for this type of testing are:

Selenium with Python

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://huggingface.co/your-app")
content = driver.find_element(By.CLASS_NAME, "content").text
driver.quit()
```

Playwright (a more modern alternative to Selenium)

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://huggingface.co/your-app")
    content = page.inner_text(".content")
    browser.close()
```
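
(If you also need image content for the comparison step further down, a screenshot can be captured in the same kind of session; this is a sketch and the output path is a placeholder:)

```python
# Hypothetical sketch: capture a full-page screenshot of the app for later
# image comparison. The output path is a placeholder.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://huggingface.co/your-app")
    page.screenshot(path="app_snapshot.png", full_page=True)
    browser.close()
```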

For content comparison and similarity analysis:

```python
from difflib import SequenceMatcher
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Basic text similarity
def similarity_ratio(text1, text2):
    return SequenceMatcher(None, text1, text2).ratio()

# TF-IDF cosine similarity (word-overlap based)
def semantic_similarity(text1, text2):
    vectorizer = TfidfVectorizer()
    tfidf = vectorizer.fit_transform([text1, text2])
    return cosine_similarity(tfidf[0:1], tfidf[1:2])[0][0]
```
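
Note that TF-IDF cosine similarity measures word overlap rather than meaning. For a more genuinely semantic comparison, an embedding model can be used; the sketch below assumes the sentence-transformers package and the all-MiniLM-L6-v2 model, neither of which is part of the setup above:

```python
# Hypothetical sketch: embedding-based similarity with sentence-transformers.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

def embedding_similarity(text1, text2):
    emb1, emb2 = model.encode([text1, text2])
    return float(util.cos_sim(emb1, emb2))
```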

For image comparison:

```python
from PIL import Image
import imagehash

def compare_images(image1_path, image2_path):
    hash1 = imagehash.average_hash(Image.open(image1_path))
    hash2 = imagehash.average_hash(Image.open(image2_path))
    return hash1 - hash2  # Lower number means more similar
```

Would you like me to create a more comprehensive example that combines these elements into a complete testing framework? I could show you how to (a rough combined sketch follows this list):

- Automate browsing multiple HuggingFace URLs
- Extract both text and image content
- Compare content using different similarity metrics
- Store results in a structured format for model context updates
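
As a starting point, here is one way those four steps might fit together. The URL list, the reference text, and the snapshot paths below are placeholders to adapt:

```python
# Hypothetical end-to-end sketch: visit each Space, grab its text and a screenshot,
# score text overlap against a reference, and store the results as JSON.
import json
from difflib import SequenceMatcher

from playwright.sync_api import sync_playwright

urls = ["https://huggingface.co/spaces/your-user/your-app"]  # placeholder URLs
reference_text = "text produced by the AI program to compare against"  # placeholder

results = []
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    for i, url in enumerate(urls):
        page.goto(url)
        text = page.inner_text("body")
        page.screenshot(path=f"snapshot_{i}.png", full_page=True)
        overlap = SequenceMatcher(None, reference_text, text).ratio()
        results.append({"url": url, "snapshot": f"snapshot_{i}.png", "overlap": overlap})
    browser.close()

with open("results.json", "w") as f:
    json.dump(results, f, indent=2)
```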
"""

st.markdown(AppGoals)