AI & ML interests

We aim to unify the schema across many different biomedical NLP resources.

Recent Activity

ImranzamanML posted an update 1 day ago
# Runway Aleph: The Future of AI Video Editing

Runway’s new **Aleph** model lets you *transform*, *edit*, and *generate* video from existing footage using just text prompts.
You can remove objects, change environments, restyle shots, alter lighting, and even create entirely new camera angles, all in one tool.

## Key Links

- 🔬 [Introducing Aleph (Runway Research)](https://runwayml.com/research/introducing-runway-aleph)
- 📖 [Aleph Prompting Guide (Runway Help Center)](https://help.runwayml.com/hc/en-us/articles/43277392678803-Aleph-Prompting-Guide)
- 🎬 [How to Transform Videos (Runway Academy)](https://academy.runwayml.com/aleph/how-to-transform-videos)
- 📰 [Gadgets360 Coverage](https://www.gadgets360.com/ai/news/runway-aleph-ai-video-editing-generation-model-post-production-unveiled-8965180)
- 🎥 [YouTube Demo: ALEPH by Runway](https://www.youtube.com/watch?v=PPerCtyIKwA)
- 📰 [Runway Alpha dataset](https://huggingface.co/datasets/Rapidata/text-2-video-human-preferences-runway-alpha)

## Prompt Tips

1. Be clear and specific (e.g., _“Change to snowy night, keep people unchanged”_).
2. Use action verbs like _add, remove, restyle, relight_.
3. Add reference images for style or lighting.


Aleph shifts AI video from *text-to-video* to *video-to-video*, making post-production faster, more creative, and more accessible than ever.
albertvillanova posted an update 2 days ago
Latest smolagents release supports GPT-5: build agents that think, plan, and act.
⚡ Upgrade now and put GPT-5 to work!
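For anyone who wants to try it right away, here is a minimal sketch of pointing a smolagents CodeAgent at GPT-5 through the OpenAI API. The "gpt-5" model identifier and the OPENAI_API_KEY environment variable are assumptions for illustration, not taken from the release notes.

```python
# Hypothetical quick start: a CodeAgent backed by GPT-5 via the OpenAI API.
# "gpt-5" as model_id and OPENAI_API_KEY are assumptions for illustration.
import os
from smolagents import CodeAgent, OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-5",                      # assumed model identifier
    api_key=os.environ["OPENAI_API_KEY"],  # your OpenAI API key
)
agent = CodeAgent(tools=[], model=model)   # the agent plans and writes Python to solve the task

print(agent.run("How many seconds are in a leap year?"))
```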
albertvillanova posted an update 4 days ago
🚀 smolagents v1.21.0 is here!
Now with improved safety in the local Python executor: dunder calls are blocked!
⚠️ Still not fully isolated: for untrusted code, use a remote executor instead (Docker, E2B, Wasm).
✨ Many bug fixes: more reliable code.
👉 https://github.com/huggingface/smolagents/releases/tag/v1.21.0
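A hedged sketch of opting into one of those remote executors, assuming the CodeAgent executor_type parameter accepts "docker" as in recent smolagents versions (a running Docker daemon is required):

```python
# Sketch: run generated code in a Docker container instead of the local process.
# executor_type="docker" is assumed to be supported here; see the release notes above.
from smolagents import CodeAgent, InferenceClientModel

agent = CodeAgent(
    tools=[],
    model=InferenceClientModel(),  # default Hugging Face Inference model
    executor_type="docker",        # isolate untrusted generated code from your machine
)
print(agent.run("Compute the 20th Fibonacci number."))
```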
ImranzamanML posted an update 7 days ago
OpenAI has launched GPT-5, a significant leap forward in AI technology that is now available to all users. The new model unifies all of OpenAI's previous developments into a single, cohesive system that automatically adapts its approach based on the complexity of the user's request. This means it can prioritize speed for simple queries or engage a deeper reasoning model for more complex problems, all without the user having to manually switch settings.

Key Features and Improvements

- **Unified System**: GPT-5 combines various models into one interface, intelligently selecting the best approach for each query.
- **Enhanced Coding**: It's being hailed as the "strongest coding model to date," with the ability to create complex, responsive websites and applications from a single prompt.
- **PhD-level Reasoning**: According to CEO Sam Altman, GPT-5 offers a significant jump in reasoning ability, with a much lower hallucination rate. It also performs better on academic and human-evaluated benchmarks.
- **New Personalities**: Users can now select from four preset personalities (Cynic, Robot, Listener, and Nerd) to customize their chat experience.
- **Advanced Voice Mode**: The voice mode has been improved to sound more natural and adapt its speech based on the context of the conversation.


https://openai.com/index/introducing-gpt-5/
https://openai.com/gpt-5/
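As a rough illustration of the unified behaviour described above, here is a hedged sketch of calling the model from the OpenAI Python SDK. The "gpt-5" model name and the reasoning-effort knob are assumptions based on the announcement, not verified parameters.

```python
# Hypothetical GPT-5 call via the OpenAI Responses API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="gpt-5",                 # assumed model identifier
    reasoning={"effort": "low"},   # assumed knob: spend less effort on a simple query
    input="Explain in two sentences why the sky is blue.",
)
print(response.output_text)
```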
ImranzamanML posted an update 8 days ago
All key links for OpenAI's open-sourced GPT-OSS models (117B and 21B parameters), released under Apache 2.0. Here is a quick guide to explore and build with them:

Intro & vision: https://openai.com/index/introducing-gpt-oss

Model specs & license: https://openai.com/index/gpt-oss-model-card/

Dev overview: https://cookbook.openai.com/topic/gpt-oss

How to run via vLLM (see the sketch after these links): https://cookbook.openai.com/articles/gpt-oss/run-vllm

Harmony I/O format: https://github.com/openai/harmony

Reference PyTorch code: https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation

Community site: https://gpt-oss.com/
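To make the vLLM route above concrete, here is a hedged sketch of querying a locally served gpt-oss-20b (e.g. started with `vllm serve openai/gpt-oss-20b`) through vLLM's OpenAI-compatible endpoint. The port and model id are assumptions, so check the linked guide for the exact flags.

```python
# Sketch: chat with a local vLLM server hosting gpt-oss-20b.
# Assumes the default OpenAI-compatible endpoint at localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # local server, dummy key

chat = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Briefly, what does the Apache 2.0 license allow?"}],
)
print(chat.choices[0].message.content)
```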

Let's dive deep into the OpenAI models now 😊

#OpenSource #AI #GPTOSS #OpenAI #LLM #Python #GenAI
ImranzamanML posted an update 9 days ago
Finally, OpenAI is sharing open-source models again, for the first time since GPT-2 in 2019.
gpt-oss-120b
gpt-oss-20b

openai/gpt-oss-120b

#AI #GPT #LLM #Openai
ImranzamanML posted an update 13 days ago
How Transformer model layers work!

I focused on showing the core steps side by side: tokenization, embedding, and the transformer layers themselves, highlighting the self-attention and feed-forward parts of each without getting lost in too much technical depth.

It shows how these layers work together to understand context and generate meaningful output!

If you are curious about the architecture behind AI language models or want a clean way to explain it, hit me up, I’d love to share!
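If you prefer code to diagrams, here is a toy sketch (PyTorch, my own simplification rather than any production architecture) of the pieces described above: token embedding, a self-attention layer, and a feed-forward layer stacked into one transformer block.

```python
# Toy transformer block: embedding -> self-attention -> feed-forward, with residuals.
import torch
import torch.nn as nn

class TinyTransformerBlock(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)  # token ids -> vectors
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(                        # position-wise feed-forward
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, token_ids):
        x = self.embed(token_ids)                # embedding
        attn_out, _ = self.attn(x, x, x)         # self-attention: each token attends to all others
        x = self.norm1(x + attn_out)             # residual connection + layer norm
        x = self.norm2(x + self.ff(x))           # feed-forward + residual + layer norm
        return x

tokens = torch.randint(0, 1000, (1, 8))          # a fake "sentence" of 8 token ids
print(TinyTransformerBlock()(tokens).shape)      # torch.Size([1, 8, 64])
```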



#AI #MachineLearning #NLP #Transformers #DeepLearning #DataScience #LLM #AIAgents
ImranzamanML posted an update 17 days ago
Hugging Face just made life easier with the new hf CLI!
huggingface-cli is now hf

Along with the rename, new features were added, like hf jobs. We can now run any script or Docker image on dedicated Hugging Face infrastructure with a simple command. It's a good addition for running experiments and jobs on the fly.

To get started, just run:
pip install -U huggingface_hub

List of hf CLI Commands

Main Commands
hf auth: Manage authentication (login, logout, etc.).
hf cache: Manage the local cache directory.
hf download: Download files from the Hub.
hf jobs: Run and manage Jobs on the Hub.
hf repo: Manage repos on the Hub.
hf upload: Upload a file or a folder to the Hub.
hf version: Print information about the hf version.
hf env: Print information about the environment.

Authentication Subcommands (hf auth)
login: Log in using a Hugging Face token.
logout: Log out of your account.
whoami: See which account you are logged in as.
switch: Switch between different stored access tokens/profiles.
list: List all stored access tokens.

Jobs Subcommands (hf jobs)
run: Run a Job on Hugging Face infrastructure.
inspect: Display detailed information on one or more Jobs.
logs: Fetch the logs of a Job.
ps: List running Jobs.
cancel: Cancel a Job.

#HuggingFace #MachineLearning #AI #DeepLearning #MLTools #MLOps #OpenSource #Python #DataScience #DevTools #LLM #hfCLI #GenerativeAI
mkurman posted an update 22 days ago
🚀 Big news! NeuroBLAST, the outstanding new architecture, has officially arrived on HF! After three intense months of training my 1.9-billion-parameter SLM on my trusty RTX 3090 Ti, I’m happy to announce the results. While it’s not perfect just yet, I’ve dedicated countless hours to optimizing costs while crafting clever layer connections that mimic the brain's centers. Plus, I’ve introduced a new memory-like layer that’s sure to turn heads! I can’t wait to dive deep into this journey in my upcoming blog post. Stay tuned for the full scoop! 🔥

meditsolutions/NeuroBLAST-1.9B-Instruct-Early-Preview
AtAndDev posted an update 23 days ago
Qwen 3 Coder is a personal attack on K2, and I love it.
It achieves near-SOTA on LCB without having reasoning.
Finally people are understanding that reasoning isn't necessary for high benchmark scores...

Qwen ftw!

DECENTRALIZE DECENTRALIZE DECENTRALIZE
Tonic posted an update 25 days ago
👋 Hey there folks,

just submitted my plugin idea to the G-Assist Plugin Hackathon by @nvidia. Check it out, it's a great way to use a local SLM on a Windows machine to easily and locally get things done! https://github.com/NVIDIA/G-Assist
Tonic posted an update 26 days ago
🙋🏻‍♂️ Hey there folks,

Yesterday, Nvidia released a reasoning model that beats o3 on science, math, and coding!

Today you can try it out here: Tonic/Nvidia-OpenReasoning

Hope you like it!