Log
- Checked the API documentation and endpoints
- Downloaded the questions from the API (see the download sketch after this log)
- Downloaded the validation question set
- Extracted answers from the validation set and put them in common_questions.json
- Identified a default API URL serving the model that will drive the agent
- Created a script to test whether GPT4All is working (see the loading sketch after this log)
- Found Meta-Llama-3-8B-Instruct.Q4_0.gguf in /Users/yagoairm2/Library/Application Support/nomic.ai/GPT4All/Meta-Llama-3-8B-Instruct.Q4_0.gguf and successfully loaded it
- I made a mess and I don't know how to get out of it
- Need to log in to HF using 'huggingface-cli login'
- Deleted the crazy files and reordered folders. Stepped back to plan which tools are required. Speech-to-text is already working
- Added xlsx and mp3 tools (see the tool sketch after this log)
- Added tests for the tools
- Added a png tool
- Added a chess tool that builds on the image tool
- Broke the build
- Identified that README.md carries the HF Space config (YAML front matter)
- app.py became a mess, so we started over from the example in the HF space
- It works! I had to revamp app.py to re-add the original code blocks
- Fixed some Jinja % trouble in prompts.yaml (see the escaping sketch after this log)
- Added OpenAI access (see the model sketch after this log)
- New problems at build time (maybe the new requirements.txt)
- Rolled back to the previous version. Added smolagents[openai] to requirements.txt
- Got 25%. The agent cannot access the files
- Made a fix plan
- Got a phase 1 test; not passing. prompts.yaml is out of reach for the AI
- Got 85 and passed somehow
- prompts.yaml is a nightmare
- Identified the last valid commit
- Passed the test for phase 1
- Uploaded files, rebooted the Space, and submitted. Does not pass
- Added fixes. Passes with 25% again
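
Sketches

The questions download step above is the kind of thing a short script can capture. A minimal sketch follows, assuming a REST API reachable over HTTP; the base URL, endpoint names, and field names are placeholders, not the real ones from the course API documentation.

```python
import json
import requests

# Hypothetical base URL and endpoints -- substitute the real ones from the API docs.
BASE_URL = "https://example-scoring-api.hf.space"

# Download the task questions.
questions = requests.get(f"{BASE_URL}/questions", timeout=30).json()

# Download the validation set and keep only question/answer pairs,
# mirroring common_questions.json from the log. Field names are assumptions.
validation = requests.get(f"{BASE_URL}/validation", timeout=30).json()
common = {item["question"]: item["answer"] for item in validation}

with open("common_questions.json", "w") as f:
    json.dump(common, f, indent=2)
```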
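
The GPT4All test script boils down to loading the local .gguf file and asking for a tiny completion. A minimal sketch, assuming the gpt4all Python package and the model path found in the log:

```python
from gpt4all import GPT4All

# Directory where GPT4All already downloaded the model (from the log entry above).
MODEL_DIR = "/Users/yagoairm2/Library/Application Support/nomic.ai/GPT4All"

model = GPT4All(
    "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
    model_path=MODEL_DIR,
    allow_download=False,  # fail fast if the local file is missing
)

# Smoke test: ask for a short completion inside a chat session.
with model.chat_session():
    print(model.generate("Reply with the single word OK.", max_tokens=8))
```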
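
The xlsx/mp3/png tool entries all follow the same pattern: wrap a file reader as a smolagents tool so the agent can call it on attached files. A minimal sketch of the xlsx case, assuming smolagents' `@tool` decorator and pandas with openpyxl installed; the tool name and signature are illustrative, not the ones in the repo.

```python
import pandas as pd
from smolagents import tool

@tool
def read_xlsx(file_path: str) -> str:
    """Return the contents of an Excel file as CSV text the agent can reason over.

    Args:
        file_path: Path to the .xlsx file attached to the question.
    """
    # pandas needs openpyxl installed to parse .xlsx files.
    df = pd.read_excel(file_path)
    return df.to_csv(index=False)
```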
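
The Jinja % trouble in prompts.yaml usually comes from literal `{%` sequences inside the prompt text, which Jinja tries to parse as template tags. A minimal sketch of the usual fix, assuming that was the cause here: wrap the literal sequences in `{% raw %} ... {% endraw %}`.

```python
import yaml
from jinja2 import Template

# Hypothetical prompts.yaml entry whose text contains literal "{%" sequences.
raw_yaml = """
system_prompt: |
  You may emit snippets such as {% raw %}{% if done %}FINAL{% endif %}{% endraw %}.
  The task is: {{ task }}
"""

prompts = yaml.safe_load(raw_yaml)
rendered = Template(prompts["system_prompt"]).render(task="transcribe the mp3")
print(rendered)  # the {% if %} block survives as literal text
```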
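
Adding OpenAI access with smolagents[openai] in requirements.txt comes down to pointing the agent at an OpenAI-served model. A minimal sketch, assuming smolagents' OpenAIServerModel and an OPENAI_API_KEY secret on the Space; the model id is illustrative.

```python
import os
from smolagents import CodeAgent, OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-4o-mini",                # illustrative model id
    api_key=os.environ["OPENAI_API_KEY"],  # set as a Space secret
)

# The real agent would pass the file/vision tools from the log; empty here for brevity.
agent = CodeAgent(tools=[], model=model)
print(agent.run("What is 7 * 6?"))
```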