Pro Creations PRO
ProCreations

AI & ML interests
AGI and small-scale, high-quality AI

ProCreations's activity

replied to their post about 13 hours ago
Sorry about that, lol. It was late and I wanted to make a post before sleep, so I didn't really think about the choices.

replied to their post 1 day ago
Makes sense, as LLMs aren't very random. This dataset isn't really for making LLMs random, because you just can't do that (yet); it's more a dump of info I dropped for someone to find a use case for. Hey, thanks for commenting!

posted an update 2 days ago
New dataset: what a random time, right?
Randomness… pure randomness feels refreshing to me, so I made a dataset of randomness.
ProCreations/quantum-randomness
This is a dataset of 1,000 entries of timestamps and real quantum randomness. I have no clue what it could be used for, but it exists now.
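If you want to poke at it, here's a minimal sketch with the datasets library (see the dataset card for the exact schema; no field names are assumed here):

```python
from datasets import load_dataset

ds = load_dataset("ProCreations/quantum-randomness", split="train")
print(len(ds))  # ~1,000 entries
print(ds[0])    # inspect the actual fields
```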

replied to their post 2 days ago
Very cool! Thanks for going out of your way to help; I saved the photo!

reacted to m-ric's post with 🤗 3 days ago
I've made an open version of Google's NotebookLM, and it shows the superiority of the open-source tech stack!
The app's workflow is simple. Given a source PDF or URL, it extracts the content, then tasks Meta's Llama 3.3-70B with writing the podcast script, using a good prompt crafted by @gabrielchua ("two hosts, with lively discussion, fun notes, insightful questions, etc.").
Then it hands off the text-to-speech conversion to Kokoro-82M, and there you go: you have two hosts discussing any article.
The generation is nearly instant, because:
> Llama 3.3 70B runs at 1,000 tokens/second with Cerebras inference
> The audio is generated in streaming mode by the tiny (yet powerful) Kokoro, which generates voices faster than real time
And the audio generation runs for free on ZeroGPU, hosted by HF on H200s.
Overall, open-source solutions rival the quality of closed-source ones at close to no cost!
Try it here: m-ric/open-notebooklm
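A rough sketch of that flow using huggingface_hub's InferenceClient, not the app's actual code: whether these exact model IDs are served on the free API is an assumption, and the extraction step is elided.

```python
from huggingface_hub import InferenceClient

client = InferenceClient()

# LLM step: draft the two-host script (model availability is an assumption).
script = client.chat_completion(
    model="meta-llama/Llama-3.3-70B-Instruct",
    messages=[{"role": "user",
               "content": "Write a lively two-host podcast script about:\n"
                          + "<extracted PDF/URL text goes here>"}],
    max_tokens=2048,
).choices[0].message.content

# TTS step: hand the script to Kokoro and get raw audio bytes back
# (the returned format depends on the endpoint).
audio = client.text_to_speech(script, model="hexgrad/Kokoro-82M")
with open("podcast.flac", "wb") as f:
    f.write(audio)
```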

replied to their post 5 days ago
Thank you! I'm glad you're excited.
For quantization, I probably will offer different scales. The default won't be quantized, to make things easier, but INT16, INT8, and INT4 IntellIte models will be available.
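For the curious, here's roughly how an INT8 variant could be produced with PyTorch's dynamic quantization, shown on a toy stand-in model; INT4 generally needs dedicated tooling like GPTQ or AWQ, and none of this is the project's confirmed pipeline.

```python
import torch

# Toy stand-in; the real release would quantize the trained IntellIte weights.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256), torch.nn.ReLU(), torch.nn.Linear(256, 64)
)
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized)  # Linear layers replaced by dynamically quantized INT8 versions
```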

posted an update 5 days ago
Post of the Day: Your Thoughts, Our Take
Yesterday we asked:
If AI could master just one thing, what should it be?
And the responses? Insightful, creative, and genuinely thought-provoking.
Here are a few that stood out:
@NandaKrishvaa said "Curiosity like a baby."
Instead of just answering questions, an AI that asks them with childlike wonder? That's a whole new kind of intelligence.
@MrDevolver suggested "Master being Jack of All Trades."
Sure, it bends the rules a bit, but adaptability is key. Sometimes breadth can outshine depth.
@afranco50 argued for "Perfect logic," saying it could unlock all other abilities.
It's a solid point: if an AI can reason flawlessly, it may just learn to improve everything else on its own.
⸻
Our take?
We still believe the biggest leap forward is flawless conversation: not just accurate, but deeply human. Emotional intelligence, nuance, humor, empathy. That kind of interaction is what makes AI feel real.
It's also why we're building IntellIte Chat to focus on that exact skill set:
• Emotion-aware replies
• Natural, flowing conversation
• Strong command of casual and expressive English
When it releases, it won't just talk; it'll connect. And in a world full of tools, we think the future needs more companions.
What do you think? Let us know! If we get more comments, we might as well do another post on this tomorrow, lol.

posted an update 6 days ago
Post of the Day
If AI could master just one thing, what should it be?
Human-level conversation that actually feels real?
Flawless, bug-free code?
Perfect math and logic every single time?
Whatever you pick, just know the AI won't be so good at the other topics! No picking "all of them" either, lol.
What do you think matters most for AI to truly level up?
Drop your thoughts in the comments; we'll share our answer (and maybe a few of yours) in the next post.

posted an update 8 days ago
🚨 NEW DATASET ALERT 🚨
Come check out
ProCreations/black-hole-sim-randomized
a high-fidelity dataset with 400,000+ randomized black hole simulations, packed with relativistic metrics, Kerr geometry, and GR weirdness to help AIs actually understand physics.
Teach your model:
• Time dilation
• Redshift
• Orbital dynamics
• Frame dragging
• Full Kerr tensors
…and more, all in raw JSONL!
This release celebrates SimpleMath hitting 200 downloads. Thank you all so much for the support!
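As a flavor of what's in there, here's the standard time-dilation factor you could sanity-check entries against. Note this closed form is the simpler Schwarzschild (non-rotating) case, while the dataset uses the more general Kerr geometry, and its JSONL field names aren't assumed here.

```python
import math

G, c = 6.674e-11, 2.998e8  # SI constants

def schwarzschild_time_dilation(mass_kg: float, r_m: float) -> float:
    """dtau/dt for a static observer at radius r outside a non-rotating mass."""
    r_s = 2 * G * mass_kg / c**2  # Schwarzschild radius
    return math.sqrt(1 - r_s / r_m)

m_sun = 1.989e30
print(schwarzschild_time_dilation(10 * m_sun, 1e5))  # 10-solar-mass BH at r = 100 km
```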

posted an update 9 days ago
IntellIte Chat's training script is working!
Training works fine: normal loss, no more gradient explosions or vanishing gradients, etc.
BUT, before I officially flip the switch and turn on training, I want to make sure it's the best possible 100M-parameter model it can be, so I am working a bit more (probably an extra 3-5 days) to add even more innovative AI improvements to IntellIte.
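For reference, the standard mitigation for exploding gradients is global-norm clipping before each optimizer step. Here's a self-contained toy example; this is the generic recipe, not necessarily IntellIte's exact fix.

```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

x, y = torch.randn(8, 10), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap the global grad norm
optimizer.step()
optimizer.zero_grad()
```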

posted an update 11 days ago
Post of the Day: Quantum AI (Your Thoughts + Our Take)
Yesterday we asked: "What will quantum computing do to AI?"
Big thanks to solongeran for this poetic insight:
"Quantum computers are hard to run error-free. But once they're reliable, AI will be there. Safer than the daily sunset. Shure, no more queues ;)"
Our Take: What Quantum Computing Will Do to AI (by 2035)
By the time scalable, fault-tolerant quantum computers arrive, AI won't just run faster; it'll evolve in ways we've never seen:
⸻
🔹 1. Huge Speedups in Optimization & Search
Why: Quantum algorithms like Grover's offer a quadratic speedup for unstructured search, and a few other quantum algorithms promise exponential speedups in special cases.
How: They'll power up tasks like hyperparameter tuning, decision-making in RL, and neural architecture search, crunching what now takes hours into seconds.
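For scale, Grover's advantage on an unstructured search over N items:

$$O(N)\ \text{classical queries}\ \longrightarrow\ O\!\left(\sqrt{N}\right)\ \text{quantum queries}$$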
⸻
🔹 2. Quantum Neural Networks (QNNs)
Why: QNNs can represent complex relationships more efficiently than classical nets.
How: They use entanglement and superposition to model rich feature spaces, especially useful for messy or high-dimensional data; think drug discovery, finance, or even language structure.
⸻
🔹 3. Autonomous Scientific Discovery
Why: Quantum AI could simulate molecular systems that are impossible for classical computers.
How: By combining quantum simulation with AI exploration, we may unlock ultra-fast pathways to new drugs, materials, and technologies, replacing years of lab work with minutes of computation.
⸻
🔹 4. Self-Evolving AI Architectures
Why: Future AI systems will design themselves.
How: Quantum processors will explore massive spaces of model variants in parallel, enabling AI to simulate, compare, and evolve new architectures: fast, efficient, and with little trial and error.
⸻
The Takeaway:
Quantum computing won't just speed up AI. It'll open doors to new types of intelligence, ones that learn, discover, and evolve far beyond today's limits.

posted an update 12 days ago
Quantum Computing + AI = ?
What do you think quantum computing will do to AI?
Will it revolutionize training speed? Unlock whole new algorithms? Or maybe… just complicate things?
Drop your thoughts below; we'll share our take and highlight some of your replies in tomorrow's post!

reacted to merterbak's post with 🔥 12 days ago
Microsoft released their new fine-tuned Phi-4 models with reasoning data yesterday. They outperform or rival much larger models. Check them out if you haven't yet.
Phi-4 mini reasoning (SFT): microsoft/Phi-4-mini-reasoning
Phi-4 reasoning (SFT): microsoft/Phi-4-reasoning
Phi-4 reasoning plus (SFT + RL): microsoft/Phi-4-reasoning-plus
Demo: https://github.com/marketplace/models/azureml/Phi-4-reasoning/playground
Papers: https://arxiv.org/pdf/2504.21318
https://arxiv.org/pdf/2504.21233
Blog: https://azure.microsoft.com/en-us/blog/one-year-of-phi-small-language-models-making-big-leaps-in-ai/

reacted to shanaka95's post with ❤️ 12 days ago
Let's Play the Chrome Dino Game with Reinforcement Learning!
Reinforcement learning has been one of my favorite areas of interest for a while. This is a project I worked on a while ago while learning the fundamentals of reinforcement learning.
I believe the OpenAI Gym library offers an excellent way to standardize environments for RL agents. While there are many ready-to-use Gym environments available for learning and testing, you don't fully understand how they work until you build your own custom Gym environment.
Creating your own environment helps you grasp the core concepts behind RL.
On the other hand, Stable Baselines3 offers PyTorch implementations of popular RL algorithms like PPO and DQN. The best part is that Gym environments are fully compatible with Stable Baselines3, making it easy to benchmark different models and compare their performance.
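As a taste of that pairing, here's a minimal PPO training loop on a built-in environment (recent Stable Baselines3 versions use the Gymnasium fork of Gym); the Dino project swaps in its custom environment instead of CartPole.

```python
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")          # any Gym-compatible env works here
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)    # short demo run
model.save("ppo_cartpole")
```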
I'm open-sourcing this project as a helpful starting point for anyone interested in learning how to:
* Build a custom RL environment using the OpenAI Gym library
* Train RL agents using Stable Baselines3
* Use the Chrome DevTools Protocol for direct communication between a Python script and the Chrome browser. This is especially useful if you're interested in web scraping or browser automation (another one of my all-time favorite topics).
Also, this project uses image preprocessing with Sobel edge detection, a basic feature-extraction technique commonly used in image processing and in deep neural networks.
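For anyone unfamiliar, Sobel filtering is a pair of small gradient convolutions; here's a minimal OpenCV sketch ("frame.png" is a placeholder, and the project's exact parameters may differ).

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder input
gx = cv2.Sobel(frame, cv2.CV_64F, 1, 0, ksize=3)        # horizontal gradients
gy = cv2.Sobel(frame, cv2.CV_64F, 0, 1, ksize=3)        # vertical gradients
edges = np.uint8(np.clip(np.hypot(gx, gy), 0, 255))     # gradient magnitude
```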
I've also included pre-trained model checkpoints saved every 100,000 timesteps, up to 1 million timesteps. If you'd like to test the project without training from scratch, you can simply load and use one of these pre-trained models.
I hope this project helps someone learn something new and exciting!
shanaka95/AIDino

posted an update 13 days ago
IntellIte Chat is almost in training! Technically it works, but it had a gradient-explosion issue, so I will fix it, then retry, and keep you updated.
Check out
ProCreations/Simple-FriendlyMath
for a simple math dataset with friendly English.

posted an update 14 days ago
Post of the Day:
For every new follower I get on my account over the next 2 weeks, that's how big my upcoming AI project will be, in millions of parameters.
(Example: 16 new followers = 16M parameters!)
I'll post full training logs and updates so you can watch it being built live.
If it somehow hits 1 billion parameters (1,000 followers), that's the cap; my poor GPUs need mercy.
I dare you guys to make me suffer.
You control the size.
Let's build something absolutely insane together. ❤️

posted an update 15 days ago
Hey there! I might have to delay the Qwen math fine-tune a tiny bit because testing isn't going so great, but instead of giving you guys a bad model, I'll take the time to fix it soon.
I'd rather be honest and fix it than rush and be dishonest, so yeah.
Also, yesterday I released another dataset, so check it out if you want:
ProCreations/Simple-FriendlyMath
Join the Discord for updates:
https://discord.gg/XGvwmfXAvu
And donate to help fund projects:
https://buymeacoffee.com/procreations

posted an update 17 days ago
Post of the Day
I'm fine-tuning Qwen 2.5-0.5B to be extremely good at math, using high-quality datasets and some smart training strategies.
The logs are looking really promising so far!
Expected release:
Tomorrow morning?
I'll post as soon as it's ready. Stay tuned.
If you want faster updates or just wanna chat about it, come join my Discord:
https://discord.gg/EXsug2Ux29
(Heads up: we might ask a couple quick questions when you join, just to make sure we keep the server safe.)
Also, check out one of the datasets we're using:
ProCreations/SimpleMath
This project is also helping shape the future of IntellIte.
The insights and techniques we're developing here (better dataset curation, fine-tuning tricks, and evaluation methods) will directly contribute to making IntellIte even sharper, faster, and more reliable, especially for math and reasoning tasks.
Big progress ahead. Can't wait to share it with you all!
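For readers curious what such a fine-tune looks like mechanically, here's a minimal supervised fine-tuning sketch with the transformers Trainer; the hyperparameters and the SimpleMath column names ("question"/"answer") are assumptions, not this project's actual config.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")
tok.pad_token = tok.pad_token or tok.eos_token
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

ds = load_dataset("ProCreations/SimpleMath", split="train")

def fmt(ex):
    # "question"/"answer" are assumed column names; check the dataset card.
    return tok(f"Q: {ex['question']}\nA: {ex['answer']}",
               truncation=True, max_length=512)

trainer = Trainer(
    model=model,
    args=TrainingArguments("qwen-math-sft", per_device_train_batch_size=8,
                           num_train_epochs=1, learning_rate=2e-5),
    train_dataset=ds.map(fmt, remove_columns=ds.column_names),
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal LM labels
)
trainer.train()
```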