{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Welcome to the Second Lab - Week 1, Day 3\n", "\n", "Today we will work with lots of models! This is a way to get comfortable with APIs." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Important point - please read

\n", " The way I collaborate with you may be different to other courses you've taken. I prefer not to type code while you watch. Rather, I execute Jupyter Labs, like this, and give you an intuition for what's going on. My suggestion is that you carefully execute this yourself, after watching the lecture. Add print statements to understand what's going on, and then come up with your own variations.

If you have time, I'd love it if you submit a PR for changes in the community_contributions folder - instructions in the resources. Also, if you have a Github account, use this to showcase your variations. Not only is this essential practice, but it demonstrates your skills to others, including perhaps future clients or employers...\n", "
\n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This version adds Reflection pattern where we ask each model to critique and improve its own answer." ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# Start with imports - ask ChatGPT to explain any package that you don't know\n", "\n", "import os\n", "import json\n", "from dotenv import load_dotenv\n", "from openai import OpenAI\n", "from anthropic import Anthropic\n", "from IPython.display import Markdown, display" ] }, { "cell_type": "markdown", "metadata": {}, "source": [] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "request = \"Please come up with a challenging, nuanced question that I can ask a number of LLMs to evaluate their intelligence. \"\n", "request += \"Answer only with the question, no explanation.\"\n", "messages = [{\"role\": \"user\", \"content\": request}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "messages" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "competitors = []\n", "answers = []\n", "messages = [{\"role\": \"user\", \"content\": question}]" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "gemini = OpenAI(api_key=google_api_key, base_url=\"https://generativelanguage.googleapis.com/v1beta/openai/\")\n", "model_name = \"gemini-2.0-flash\"\n", "\n", "response = gemini.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "deepseek = OpenAI(api_key=deepseek_api_key, base_url=\"https://api.deepseek.com/v1\")\n", "model_name = \"deepseek-chat\"\n", "\n", "response = deepseek.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "groq = OpenAI(api_key=groq_api_key, base_url=\"https://api.groq.com/openai/v1\")\n", "model_name = \"llama-3.3-70b-versatile\"\n", "\n", "response = groq.chat.completions.create(model=model_name, messages=messages)\n", "answer = response.choices[0].message.content\n", "\n", "display(Markdown(answer))\n", "competitors.append(model_name)\n", "answers.append(answer)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Super important - ignore me at your peril!

\n", " The model called llama3.3 is FAR too large for home computers - it's not intended for personal computing and will consume all your resources! Stick with the nicely sized llama3.2 or llama3.2:1b and if you want larger, try llama3.1 or smaller variants of Qwen, Gemma, Phi or DeepSeek. See the the Ollama models page for a full list of models and sizes.\n", " \n", "
" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "!ollama pull llama3.2" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "# Let's bring this together - note the use of \"enumerate\"\n", "\n", "together = \"\"\n", "for index, answer in enumerate(answers):\n", " together += f\"# Response from competitor {index+1}\\n\\n\"\n", " together += answer + \"\\n\\n\"" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [], "source": [ "judge = f\"\"\"You are judging a competition between {len(competitors)} competitors.\n", "Each model has been given this question:\n", "\n", "{question}\n", "\n", "Your job is to evaluate each response for clarity and strength of argument, and rank them in order of best to worst.\n", "Respond with JSON, and only JSON, with the following format:\n", "{{\"results\": [\"best competitor number\", \"second best competitor number\", \"third best competitor number\", ...]}}\n", "\n", "Here are the responses from each competitor:\n", "\n", "{together}\n", "\n", "Now respond with the JSON with the ranked order of the competitors, nothing else. Do not include markdown formatting or code blocks.\"\"\"\n" ] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [], "source": [ "judge_messages = [{\"role\": \"user\", \"content\": judge}]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Exercise

\n", " Which pattern(s) did this use? Try updating this to add another Agentic design pattern.\n", " \n", "
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "1. Ensemble (Model Competition) Pattern\n", "Description: The same prompt/question is sent to multiple different LLMs (OpenAI, Anthropic, Ollama, etc.).\n", "Purpose: To compare the quality, style, and content of responses from different models.\n", "Where in notebook:\n", "The code sends the same question to several models and collects their answers in the competitors and answers lists.\n", "\n", "2. Judging/Evaluator Pattern\n", "Description: After collecting responses from all models, another LLM is used as a “judge” to evaluate and rank the responses.\n", "Purpose: To automate the assessment of which model gave the best answer, based on clarity and strength of argument.\n", "Where in notebook:\n", "The judge prompt is constructed, and an LLM is asked to rank the responses in JSON format.\n", "\n", "3. Self-Improvement/Meta-Reasoning Pattern\n", "Description: The system not only generates answers but also reflects on and evaluates its own outputs (or those of its peers).\n", "Purpose: To iteratively improve or select the best output, often used in advanced agentic systems.\n", "Where in notebook:\n", "The “judge” LLM is an example of meta-reasoning, as it reasons about the quality of other LLMs’ outputs.\n", "\n", "4. Chain-of-Thought/Decomposition Pattern (to a lesser extent)\n", "Description: Breaking down a complex task into subtasks (e.g., generate question → get answers → evaluate answers).\n", "Purpose: To improve reliability and interpretability by structuring the workflow.\n", "Where in notebook:\n", "The workflow is decomposed into:\n", "Generating a challenging question\n", "Getting answers from multiple models\n", "Judging the answers\n", "\n", "In short:\n", "This notebook uses the Ensemble/Competition, Judging/Evaluator, and Meta-Reasoning agentic patterns, and also demonstrates a simple form of Decomposition by structuring the workflow into clear stages.\n", "If you want to add more agentic patterns, you could try things like:\n", "Reflexion (let models critique and revise their own answers)\n", "Tool Use (let models call external tools or APIs)\n", "Planning (let a model plan the steps before answering)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", " \n", " \n", " \n", " \n", "
\n", " \n", " \n", "

Commercial implications

\n", " These kinds of patterns - to send a task to multiple models, and evaluate results,\n", " are common where you need to improve the quality of your LLM response. This approach can be universally applied\n", " to business projects where accuracy is critical.\n", " \n", "
" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.8" } }, "nbformat": 4, "nbformat_minor": 2 }