Spaces: Running on Zero
Nithya committed
Commit · 017b2a5
1 Parent(s): d607f42
updated filenames and added artist info
- app.py +20 -9
- examples/{ex1-hf.wav → ex1.wav} +0 -0
- examples/{ex2-hf.wav → ex2.wav} +0 -0
- examples/{ex3-hf.wav → ex3.wav} +0 -0
- examples/{ex4-hf.wav → ex4.wav} +0 -0
- examples/{ex5-hf.wav → ex5.wav} +0 -0
- examples/music sample attribution.txt +3 -0
- requirements.txt +1 -1
app.py CHANGED
@@ -194,25 +194,37 @@ def set_guide_and_generate(audio):
     plt.close(user_input_plot)
     return audio, user_input_plot, pitch
 
-
-
-
-
+
+gr.HTML("""
+<style>
+.center-text {
+    text-align: center;
+}
+</style>
+""")
+with gr.Blocks(theme=gr.themes.Glass()) as demo:
+    demo.title("GaMaDHaNi: Hierarchical Generative Modeling of Melodic Vocal Contours in Hindustani Classical Music")
+    demo.description("""
     :book: Read more about the project [here](https://arxiv.org/pdf/2408.12658) <br>
     :samples: Listen to the samples [here](https://snnithya.github.io/gamadhani-samples) <br>
-
+    """)
+    with gr.Column():
         gr.Markdown("""
         ## Instructions
         In this demo you can interact with the model in two ways:
         1. **Call and response**: The model will try to continue the idea that you input. This is similar to `primed generation' discussed in the paper.
         2. **Melodic reinterpretation**: Akin to the idea of `coarse pitch conditioning' presented in the paper, you can input a pitch contour and the model will generate audio that is similar to but not exactly the same. <br><br>
-
-        """)
+        ### Upload an audio file or record your voice to get started!
+        """, elem_classes="center-text")
         gr.Markdown("""
         This is still a work in progress, so please feel free to share any weird or interesting examples, we would love to hear them! Contact us at [snnithya.mit.edu](mailto:snnithya.mit.edu).
         """)
+        gr.Markdown("""
+        *Note: If you see an error message on the screen after clicking 'Submit', please wait for five seconds and click 'Submit' again.*
+        """)
 
-        with gr.Row():
+        with gr.Row(equal_heights=True):
+
            with gr.Column():
                audio = gr.Audio(label="Input")
                sbmt = gr.Button()
@@ -230,7 +242,6 @@ with gr.Blocks() as demo:
             ["examples/ex3.wav"],
             ["examples/ex4.wav"],
             ["examples/ex5.wav"]
-            # Add more examples as needed
         ],
         inputs=audio
     )
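For reference, the layout pattern this hunk builds can be sketched as a small self-contained Gradio app. This is a hedged illustration, not the Space's actual code: `fake_generate` is a stand-in for `set_guide_and_generate`, the output component is assumed, and the sketch passes the page title through the `Blocks` constructor and spells the row option `equal_height`, which are the keywords current Gradio accepts.

# Hedged sketch of the Blocks layout above, assuming Gradio 4.x.
import gradio as gr

def fake_generate(audio):
    # Placeholder handler: echo the input instead of running the model.
    return audio

with gr.Blocks(theme=gr.themes.Glass(),
               title="GaMaDHaNi: Hierarchical Generative Modeling of Melodic Vocal Contours in Hindustani Classical Music") as demo:
    # Inline CSS so Markdown tagged elem_classes="center-text" renders centered.
    gr.HTML("<style>.center-text { text-align: center; }</style>")
    with gr.Column():
        gr.Markdown("### Upload an audio file or record your voice to get started!",
                    elem_classes="center-text")
    with gr.Row(equal_height=True):
        with gr.Column():
            audio = gr.Audio(label="Input")
            sbmt = gr.Button("Submit")
        with gr.Column():
            out = gr.Audio(label="Output")  # assumed output slot
    sbmt.click(fn=fake_generate, inputs=audio, outputs=out)
    gr.Examples(examples=[["examples/ex1.wav"], ["examples/ex2.wav"]],
                inputs=audio)  # paths assume the renamed example files exist

if __name__ == "__main__":
    demo.launch()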
examples/{ex1-hf.wav → ex1.wav} RENAMED
File without changes
examples/{ex2-hf.wav → ex2.wav} RENAMED
File without changes
examples/{ex3-hf.wav → ex3.wav} RENAMED
File without changes
examples/{ex4-hf.wav → ex4.wav} RENAMED
File without changes
examples/{ex5-hf.wav → ex5.wav} RENAMED
File without changes
examples/music sample attribution.txt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:326d25da503903fe485be198241347ed6f9d2386303585dfb81ed1dd14ff6235
+size 449
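The three added lines are not the attribution text itself but a Git LFS pointer stub: the spec version, the SHA-256 object id of the payload, and its size in bytes; `git lfs pull` fetches the real file. A minimal sketch of reading such a pointer (`read_lfs_pointer` is a hypothetical helper, not part of the repo):

def read_lfs_pointer(path: str) -> dict[str, str]:
    # Each pointer line is "key value"; collect the pairs into a dict.
    fields: dict[str, str] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            if key and value:
                fields[key] = value
    return fields

# For this commit's pointer this prints:
# {'version': 'https://git-lfs.github.com/spec/v1', 'oid': 'sha256:326d25da...', 'size': '449'}
print(read_lfs_pointer("examples/music sample attribution.txt"))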
requirements.txt CHANGED
@@ -1,4 +1,4 @@
 crepe==0.0.15
 hmmlearn==0.3.2
 tensorflow==2.17.0
-GaMaDHaNi @ git+https://github.com/snnithya/GaMaDHaNi.git
+GaMaDHaNi @ git+https://github.com/snnithya/GaMaDHaNi.git
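The `GaMaDHaNi @ git+...` line is a PEP 508 direct reference, so pip installs the package straight from GitHub. A hedged refinement, not part of this commit: pinning the reference to a fixed ref makes the Space's builds reproducible, e.g.

GaMaDHaNi @ git+https://github.com/snnithya/GaMaDHaNi.git@<commit-or-tag>

where `<commit-or-tag>` is a placeholder for a real tag or commit hash.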