
Gradio apps can now run as MCP servers: update the sdk_version to 5.28 (in the README.md of your Space) and set mcp_server=True in launch().
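On Spaces, the version pin lives in the README front matter. A minimal sketch, assuming a standard Gradio Space (fields other than sdk and sdk_version are illustrative placeholders):

```yaml
---
title: Letter Counter   # illustrative metadata
sdk: gradio
sdk_version: 5.28.0     # any 5.28+ release works
app_file: app.py
---
```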
Gradio builds the MCP tool description and input schema from your function's docstring, so document your parameters:

```python
def generate(text, speed=1):
    """
    Convert text to speech audio.

    Parameters:
        text (str): The input text to be converted to speech.
        speed (float, optional): Playback speed of the generated speech.
    """
    ...
```

Then a single mcp_server=True in launch() turns the whole app into an MCP server:
```python
import gradio as gr

def letter_counter(word, letter):
    """Count the occurrences of a specific letter in a word.

    Args:
        word: The word or phrase to analyze
        letter: The letter to count occurrences of

    Returns:
        The number of times the letter appears in the word
    """
    return word.lower().count(letter.lower())

demo = gr.Interface(
    fn=letter_counter,
    inputs=["text", "text"],
    outputs="number",
    title="Letter Counter",
    description="Count how many times a letter appears in a word"
)

demo.launch(mcp_server=True)
```
To try the Gradio 5 prerelease, run pip install gradio --pre locally, or set the sdk_version to 5.0.0b3 in the README.md file on Spaces.
Is top_k arbitrarily discarding high-quality continuations? Is top_p forgetting to exclude low-probability tokens, derailing your generation? Try out the new min_p flag in generate, fresh from a PR merged today! 🥬

The min_p strategy takes a base probability (the min_p flag) and multiplies it by the probability of the most likely token in the distribution for the next token. All tokens less likely than the resulting value are filtered out. What happens with this strategy? When the model is confident about the next token, the cutoff is high and only a few strong candidates survive; when the distribution is flat, the cutoff drops and many plausible continuations stay in play.
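In other words, the cutoff adapts to the model's confidence. Here is a minimal standalone sketch of the filtering rule in PyTorch (an illustration, not the transformers implementation itself):

```python
import torch

def min_p_filter(logits: torch.Tensor, min_p: float = 0.05) -> torch.Tensor:
    """Mask out tokens whose probability is below min_p * p(most likely token)."""
    probs = torch.softmax(logits, dim=-1)
    cutoff = min_p * probs.max(dim=-1, keepdim=True).values  # adaptive threshold
    return logits.masked_fill(probs < cutoff, float("-inf"))

# A peaked distribution keeps only the strongest candidates...
peaked = torch.tensor([8.0, 2.0, 1.0, 0.5, 0.1])
print(min_p_filter(peaked))

# ...while a flat distribution lets many tokens through.
flat = torch.tensor([1.0, 0.9, 0.8, 0.7, 0.6])
print(min_p_filter(flat))
```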
In practice, set min_p to a low value, between 0.05 and 0.1. It behaves particularly well for creative text generation when paired up with temperature > 1.
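A quick usage sketch with transformers; the model and prompt are placeholders, and a transformers release that includes the min_p flag is assumed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder: any causal LM on the Hub works
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.5,   # creative setting that pairs well with min_p
    min_p=0.05,        # keep tokens at least 5% as likely as the top token
    max_new_tokens=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```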
For constrained generation, check out the outlines library, and follow the outlines folks to stay on top of the constrained generation game 🧠