DiffuCoder: masked diffusion models for code generation (a minimal loading sketch follows the list).
- apple/DiffuCoder-7B-cpGRPO (8B • Updated • 2.72k • 302)
- apple/DiffuCoder-7B-Instruct (8B • Updated • 7.17k • 44)
- Paper: DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation (arXiv 2506.20639 • Published • 29)
- apple/DiffuCoder-7B-Base (8B • Updated • 652 • 21)
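The sketch below shows one way to load a DiffuCoder checkpoint with the transformers AutoClasses. It is a sketch under assumptions: the repositories are assumed to ship custom modeling code (hence trust_remote_code=True), and text generation uses a model-specific masked-diffusion decoding routine documented on the model cards, so only loading and tokenization are shown.

```python
# Minimal sketch: load a DiffuCoder checkpoint. Assumes the repo's custom
# modeling code is trusted; the generation API is model-specific and
# documented on the model card, so it is not reproduced here.
import torch
from transformers import AutoModel, AutoTokenizer

repo_id = "apple/DiffuCoder-7B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).eval()

# Tokenize a prompt; decoding proceeds via the model's masked-diffusion
# sampler (see the model card), not the usual left-to-right generate().
inputs = tokenizer("Write a Python function that reverses a string.",
                   return_tensors="pt")
print(inputs.input_ids.shape)
```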
Core ML vision models for depth estimation, segmentation, and classification (a download-and-inspect sketch follows the list).
- apple/coreml-depth-anything-v2-small (Depth Estimation • Updated • 233 • 67)
- apple/coreml-depth-anything-small (Depth Estimation • Updated • 69 • 36)
- apple/coreml-detr-semantic-segmentation (Image Segmentation • Updated • 186 • 29)
- apple/coreml-FastViT-T8 (Image Classification • Updated • 147 • 12)
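These repositories ship Core ML packages rather than PyTorch weights. The sketch below is a rough example, not an official workflow: it downloads one repo with huggingface_hub, locates an .mlpackage, and prints its input/output descriptions with coremltools. Running predictions additionally requires macOS with Core ML available.

```python
# Sketch: fetch a Core ML package from the Hub and inspect its interface.
# Assumes the repo contains at least one *.mlpackage; skip_model_load=True
# lets the spec be read without a macOS Core ML runtime.
from pathlib import Path

import coremltools as ct
from huggingface_hub import snapshot_download

local_dir = snapshot_download("apple/coreml-depth-anything-v2-small")
package_path = next(Path(local_dir).rglob("*.mlpackage"))

mlmodel = ct.models.MLModel(str(package_path), skip_model_load=True)
spec = mlmodel.get_spec()
print("inputs: ", [i.name for i in spec.description.input])
print("outputs:", [o.name for o in spec.description.output])
```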
A benchmark for the design of efficient continual learning of image-text models over multiple years.
Core ML Stable Diffusion packages (see the download sketch after the list).
- apple/coreml-stable-diffusion-mixed-bit-palettization (Updated • 21 • 25)
- apple/coreml-stable-diffusion-xl-base (Text-to-Image • Updated • 35 • 67)
- apple/coreml-stable-diffusion-2-1-base (Text-to-Image • Updated • 157 • 51)
- pcuenq/coreml-stable-diffusion-2-1-base (Text-to-Image • Updated • 40 • 3)
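As with the other Core ML repos, these contain Core ML model packages. The snippet below is only a download sketch: it pulls one repository and lists the bundled variants. Actual image generation is driven by Apple's ml-stable-diffusion tooling (Swift or Python) on Apple hardware; its CLI flags are not reproduced here.

```python
# Sketch: download a Core ML Stable Diffusion repo and list its contents.
# Image generation itself relies on Apple's ml-stable-diffusion project
# (github.com/apple/ml-stable-diffusion) on macOS/iOS; see that repo's docs.
from pathlib import Path

from huggingface_hub import snapshot_download

local_dir = snapshot_download("apple/coreml-stable-diffusion-2-1-base")
for entry in sorted(Path(local_dir).iterdir()):
    print(entry.name)
```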
AIM: Autoregressive Image Models
A collection of AIMv2 vision encoders supporting multiple input resolutions, native-resolution variants, and a distilled checkpoint (a feature-extraction sketch follows the list).
- apple/aimv2-large-patch14-224 (Image Feature Extraction • 0.3B • Updated • 1.17k • 56)
- apple/aimv2-huge-patch14-224 (Image Feature Extraction • 0.7B • Updated • 67 • 10)
- apple/aimv2-1B-patch14-224 (Image Feature Extraction • 1B • Updated • 53 • 7)
- apple/aimv2-3B-patch14-224 (Image Feature Extraction • 3B • Updated • 19 • 3)
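A rough sketch of pulling image features from one of the AIMv2 encoders with the transformers AutoClasses. It assumes the checkpoint is loadable via AutoModel (recent transformers releases, with trust_remote_code=True covering the case where the repo ships custom code) and that the encoder output exposes last_hidden_state.

```python
# Sketch: extract patch features with an AIMv2 encoder.
# trust_remote_code=True is an assumption for repos bundling custom code.
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

repo_id = "apple/aimv2-large-patch14-224"
processor = AutoImageProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True).eval()

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch, num_patches, hidden_dim)
```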
OpenELM instruction-tuned language models (a text-generation sketch follows the list).
- apple/OpenELM-270M-Instruct (Text Generation • 0.3B • Updated • 3.39k • 138)
- apple/OpenELM-450M-Instruct (Text Generation • 0.5B • Updated • 929 • 47)
- apple/OpenELM-1_1B-Instruct (Text Generation • 1B • Updated • 1.33M • 66)
- apple/OpenELM-3B-Instruct (Text Generation • 3B • Updated • 2.39k • 338)
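A minimal generation sketch for the smallest OpenELM checkpoint. Assumptions worth flagging: the repo ships custom modeling code (hence trust_remote_code=True), and the OpenELM checkpoints do not bundle a tokenizer, so the model card's approach of reusing the Llama 2 tokenizer (a gated repo) is followed here.

```python
# Sketch: greedy generation with OpenELM-270M-Instruct.
# Assumes access to the gated meta-llama/Llama-2-7b-hf tokenizer, which the
# OpenELM model card uses because the checkpoints ship no tokenizer files.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M-Instruct", trust_remote_code=True
).eval()
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

prompt = "Once upon a time there was"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```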
MobileCLIP: Mobile-friendly image-text models with SOTA zero-shot capabilities.
DataCompDR: Improved datasets for training SOTA image-text models.
A zero-shot classification sketch follows the list.
- Paper: MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training (arXiv 2311.17049 • Published • 2)
- apple/mobileclip_s0_timm (Image Classification • Updated • 316 • 10)
- apple/mobileclip_s1_timm (Image Classification • Updated • 219 • 2)
- apple/mobileclip_s2_timm (Image Classification • Updated • 160 • 5)
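A hedged sketch of zero-shot classification with a MobileCLIP checkpoint through the open_clip library. The hf-hub loading path is a real open_clip feature, but whether this particular _timm repo resolves through it is an assumption; if it does not, the MobileCLIP model cards describe the intended loading route.

```python
# Sketch: zero-shot classification with a MobileCLIP checkpoint via
# open_clip's Hub loading. The compatibility of this specific repo id with
# the hf-hub: path is an assumption; consult the model card if it fails.
import requests
import torch
from PIL import Image
import open_clip

model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:apple/mobileclip_s0_timm"
)
tokenizer = open_clip.get_tokenizer("hf-hub:apple/mobileclip_s0_timm")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = preprocess(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0)
texts = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(texts)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```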
Depth Pro: Sharp Monocular Metric Depth in Less Than a Second (a depth-estimation pipeline sketch follows the list).
- apple/DepthPro-hf (Depth Estimation • 1.0B • Updated • 12.4k • 61)
- apple/DepthPro (Depth Estimation • Updated • 6.16k • 459)
- apple/DepthPro-mixin (Depth Estimation • 1.0B • Updated • 4 • 7)
- openai/clip-vit-large-patch14 (Zero-Shot Image Classification • 0.4B • Updated • 10.4M • 1.83k)
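A short sketch of monocular depth estimation with the transformers-native DepthPro checkpoint. It assumes a transformers release recent enough to include Depth Pro support; the depth-estimation pipeline API itself is standard.

```python
# Sketch: monocular depth estimation with apple/DepthPro-hf via the
# standard depth-estimation pipeline (requires a transformers version
# that includes Depth Pro support).
import requests
from PIL import Image
from transformers import pipeline

depth_estimator = pipeline("depth-estimation", model="apple/DepthPro-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

result = depth_estimator(image)
print(result["predicted_depth"].shape)  # dense metric depth as a tensor
result["depth"].save("depth.png")       # normalized depth as a PIL image
```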
Other collections:
- CLIP Models trained using the DFN-2B/DFN-5B datasets
- DCLM Models + Datasets