
CodeReality-1T Dataset Card

Dataset Summary

CodeReality-1T is a large-scale, deliberately noisy code repository dataset designed for robust AI research. The dataset contains 397,475 repositories across 21 programming languages in 3.05 TB of uncompressed data, specifically curated to test robustness, data curation methods, and real-world code understanding.

  • Total Size: 3.05 TB (uncompressed)
  • Repositories: 397,475
  • Files: 52,692 JSONL archives
  • Languages: 21 detected languages
  • Status: deliberately_noisy: true (research-only)
  • Version: 1.0.0

Dataset Structure

Data Format

  • Format: JSONL (JSON Lines) archives
  • Repository Structure: Each line contains complete repository metadata including:
    • Source code files with full paths
    • Git commit history and messages
    • Issue tracking data
    • Repository metadata (stars, forks, topics)
    • License information (when available)
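Because each line is a self-contained JSON document, archives can be streamed one repository at a time rather than loaded whole. The reader below is a minimal sketch; the record fields (`name`, `files`) and the archive filename are illustrative assumptions, not the published schema.

```python
import json

def iter_repositories(jsonl_path):
    """Yield one repository record per line of a CodeReality-1T JSONL archive."""
    with open(jsonl_path, "r", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)

# Hypothetical field names; check the actual schema before relying on them.
for repo in iter_repositories("data/example_archive.jsonl"):
    print(repo.get("name"), "files:", len(repo.get("files", [])))
    break  # inspect only the first record
```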

Language Distribution

Based on complete analysis of all 397,475 repositories:

| Language | Repositories | Percentage |
|------------|-------------:|-----------:|
| Unknown | 389,941 | 98.1% |
| Python | 4,738 | 1.2% |
| Shell | 4,505 | 1.1% |
| C | 3,969 | 1.0% |
| C++ | 3,339 | 0.8% |
| HTML | 2,487 | 0.6% |
| JavaScript | 2,394 | 0.6% |
| Go | 2,110 | 0.5% |
| Java | 2,026 | 0.5% |
| Others | 1,966 | 0.5% |

Domain Distribution

Cross-domain analysis reveals:

| Domain | Repositories |
|------------|-------------:|
| General | 389,941 |
| Database | 7,534 |
| AI/ML | 7,534 |
| Systems | 7,534 |
| Security | 7,534 |
| Web | 7,429 |
| Enterprise | 7,072 |
| Gaming | 6,538 |
| Mobile | 5,705 |
| Scientific | 5,386 |
| DevOps | 4,600 |

Cross-domain repositories: 59,332 (14.9%)

Motivation

Real-world code repositories are inherently messy, containing:

  • Duplicate code and forked repositories
  • Incomplete or experimental code snippets
  • Mixed licensing conditions
  • Buggy commits and partial implementations
  • DevOps configurations and non-code artifacts

CodeReality-1T embraces this complexity as a research laboratory for:

  1. Robustness Testing: How do code LLMs perform on noisy, real-world data?
  2. Data Curation Methods: Developing better filtering and cleaning techniques
  3. License Compliance: Research into automated license detection and filtering
  4. Bug-Fix Alignment: Studying commit patterns for before/after code analysis
  5. NL↔Code Tasks: Natural language to code alignment through issues, commits, and documentation

Collection Process

Sources

  • Public GitHub repositories
  • GitLab public projects
  • Open source package registries
  • Developer forum code dumps

Acquisition Pipeline

  1. Repository Harvesting: Systematic collection from public sources
  2. Metadata Extraction: Complete git history, issues, documentation
  3. Format Standardization: Conversion to JSONL with consistent schema
  4. Indexing: SHA256 checksums and comprehensive cataloging

Filtering Strategy

Deliberately Minimal Filtering to preserve research value:

  • Kept: Forks, duplicates, incomplete code, experimental projects
  • Kept: Repositories with unknown or missing licenses
  • Kept: Multi-language and cross-domain projects
  • Excluded: Only explicitly "all rights reserved" repositories

Quality Assurance

  • 100% Coverage: Complete analysis without sampling
  • Integrity Verification: SHA256 checksums for all files
  • Comprehensive Indexing: Full metadata extraction and validation
  • Reproducible Pipeline: Open source tools only (enry, scancode-toolkit, PyDriller)
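As an illustration of the open-source tooling involved, PyDriller can extract the kind of commit-history fields that each record carries. This is a minimal sketch with a placeholder repository path, not the actual pipeline code.

```python
from pydriller import Repository  # pip install pydriller

# Walk a local clone and pull out commit metadata of the kind stored per record.
# "/tmp/example-repo" is a placeholder path.
for commit in Repository("/tmp/example-repo").traverse_commits():
    print({
        "hash": commit.hash,
        "author": commit.author.name,
        "message": commit.msg,
        "files_changed": [m.new_path for m in commit.modified_files],
    })
```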

Technical Characteristics

File Type Distribution (Top 15)

| Extension | Files | Description |
|-----------|-----------:|------------------|
| .h | 34,195,463 | C/C++ headers |
| .go | 18,691,961 | Go source |
| .java | 18,109,114 | Java source |
| .c | 16,700,728 | C source |
| .py | 15,650,558 | Python source |
| .ts | 10,271,948 | TypeScript |
| .cpp | 9,768,211 | C++ source |
| .md | 7,815,310 | Markdown docs |
| .rs | 7,280,129 | Rust source |
| .rb | 6,309,814 | Ruby source |
| .json | 5,888,235 | JSON data |
| .txt | 4,627,011 | Text files |
| .rst | 4,250,204 | reStructuredText |
| .js | 4,125,928 | JavaScript |
| .scala | 3,619,096 | Scala source |

Build Systems Detected

| Build System | Occurrences | Ecosystem |
|------------------|------------:|-----------------|
| Makefile | 619,857 | C/C++/Universal |
| package.json | 510,769 | Node.js/npm |
| build.gradle | 430,334 | Java/Android |
| pom.xml | 136,386 | Java/Maven |
| requirements.txt | 57,793 | Python/pip |

Development Patterns Analysis

Based on analysis of 49,140 commit messages:

| Pattern | Count | Percentage |
|---------------|-------:|-----------:|
| Bug fixes | 21,570 | 43.9% |
| New features | 11,580 | 23.6% |
| Testing | 6,483 | 13.2% |
| Documentation | 4,695 | 9.6% |
| Improvements | 4,477 | 9.1% |
| Refactoring | 335 | 0.7% |
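Categories like these are commonly derived by keyword matching over commit messages. The heuristics below are an illustrative sketch; the exact rules used for this analysis are not reproduced here.

```python
import re

# Illustrative keyword heuristics; not the analysis's actual rules.
PATTERNS = {
    "Bug fixes": re.compile(r"\b(fix(e[sd])?|bug|patch|hotfix)\b", re.I),
    "New features": re.compile(r"\b(add(s|ed)?|feature|implement|introduce)\b", re.I),
    "Testing": re.compile(r"\b(test(s|ing)?|ci)\b", re.I),
    "Documentation": re.compile(r"\b(docs?|readme|changelog)\b", re.I),
    "Refactoring": re.compile(r"\b(refactor|cleanup|restructure)\b", re.I),
}

def classify(message: str) -> str:
    """Return the first matching category, with a fallback bucket."""
    for label, pattern in PATTERNS.items():
        if pattern.search(message):
            return label
    return "Improvements"
```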

Uses

Primary Research Applications

  1. Code LLM Robustness: Testing model performance on noisy, real-world data
  2. Data Curation Research: Developing automated filtering and cleaning methods
  3. License Detection: Training and evaluating license classification systems
  4. Bug-Fix Studies: Before/after commit analysis for automated debugging
  5. Cross-Language Analysis: Multi-language repository understanding
  6. DevOps Research: Configuration file analysis and validation

Specific Task Examples

  • Deduplication: Identify and remove duplicate code across repositories (see the sketch after this list)
  • License Classification: Automated SPDX license detection and compliance
  • Issue→Code Retrieval: Generate code solutions from natural language descriptions
  • Commit Message Generation: Automatic commit message creation from code diffs
  • Build System Analysis: Configuration file validation and optimization
  • Security Scanning: Identifying potential vulnerabilities and secrets
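For the deduplication task, a natural first baseline is exact file-level matching by SHA256, mirroring the integrity hashing the dataset already uses. The sketch below assumes the illustrative `files`/`content` record fields from the reader example above.

```python
import hashlib
from collections import defaultdict

def find_exact_duplicates(repositories):
    """Group identical file contents across repositories by SHA256.
    Assumes records expose files as {'path': ..., 'content': ...} dicts
    (an illustrative schema, not the published one)."""
    by_hash = defaultdict(list)
    for repo in repositories:
        for f in repo.get("files", []):
            digest = hashlib.sha256(f["content"].encode("utf-8")).hexdigest()
            by_hash[digest].append((repo.get("name"), f["path"]))
    # Keep only digests that occur in more than one location
    return {h: locs for h, locs in by_hash.items() if len(locs) > 1}
```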

Limitations

License Coverage

  • 0% License Detection Rate: All repositories marked as "Unknown" in current release
  • Manual Review Required: Commercial use requires individual license verification
  • Research Use Recommended: Dataset optimized for academic and research applications

Data Quality Issues

  • 98.1% Unknown Language: Large portion of repositories with undetected language
  • Deliberately Noisy: Intentionally includes incomplete, experimental, and duplicate code
  • Exact Duplicates: 0% exact SHA256 duplicates detected at the file level
  • Semantic Duplicates: ~18% estimated semantic duplicates and forks, preserved by design (repository forks, copy-pasted code, and similar implementations)
  • Intentional Design: Duplicates are retained to study real-world code distribution and to test deduplication algorithms
  • Security Concerns: Contains potential API keys, passwords, and tokens (see Security Analysis)

Representation Bias

  • Language Skew: Heavy bias toward C/C++, Python, JavaScript ecosystems
  • Geographic Bias: Primarily English-language repositories and comments
  • Temporal Bias: Snapshot from specific time period, may not reflect current practices

Scale Limitations

  • Processing Requirements: 3.05 TB requires significant storage and computational resources
  • Filtering Needed: Most use cases will require substantial preprocessing
  • Network Intensive: Large download size may limit accessibility

Security Analysis

Detected Security Patterns

Comprehensive security scan revealed:

| Pattern Type | Occurrences | Risk Level |
|-------------------|------------:|------------|
| Password patterns | 1,231,942 | High |
| Token patterns | 353,266 | High |
| Secret patterns | 71,778 | Medium |
| API key patterns | 4,899 | Critical |
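Counts of this kind can be approximated with regular-expression scanning. The patterns below are illustrative only; purpose-built scanners such as gitleaks or detect-secrets apply far more precise rules and entropy checks.

```python
import re

# Illustrative secret patterns; not the actual rules behind the table above.
SECRET_PATTERNS = {
    "api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "token": re.compile(r"(?i)token\s*[:=]\s*['\"][A-Za-z0-9_\-.]{16,}['\"]"),
    "password": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]{6,}['\"]"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of all secret patterns that match the given text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]
```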

Security Recommendations

⚠️ WARNING: This dataset contains potential secrets and should be used for research only

  • No Production Use: Never deploy code from this dataset without thorough security review
  • Credential Scanning: Always scan extracted code for hardcoded credentials
  • Isolation Required: Use in sandboxed environments only
  • Legal Compliance: Verify licensing before any commercial application

Ethical Considerations

Privacy & Consent

  • Public Data Only: All repositories were publicly available at collection time
  • No Private Information: No deliberately collected private repositories or data
  • Takedown Policy: DMCA and removal requests will be honored promptly

Bias & Fairness

  • Representation Issues: Dataset reflects existing biases in open source development
  • Language Barriers: Primarily English-language codebases and documentation
  • Economic Bias: Overrepresents well-resourced development environments

Legal Compliance

  • License Uncertainty: Many repositories lack clear licensing information
  • Commercial Risk: Use in commercial products requires individual license verification
  • Attribution: Original repository attribution preserved in metadata

Evaluation Framework

Evaluation Subset (Available)

A curated evaluation subset is now available:

  • Size: 19.0 GB (323 files, 2,049 repositories)
  • Selection Criteria:
    • Research value scoring with diversity sampling
    • Repositories with enhanced metadata and commit history
    • Cross-language implementations and multi-repo files
    • Complete build system configurations
  • Location: /eval/subset/ with comprehensive metadata

Baseline Tasks & Results

  1. Code Completion: Pass@k evaluation → Results: 14.2% Pass@1 (see the estimator sketch after this list)
  2. License Classification: Automated detection → Results: 9.8% accuracy
  3. Bug Detection: Commit history analysis → Framework available
  4. Cross-Language Translation: Code equivalence → Framework available
  5. Complete Analysis: Summary CSV for research comparison
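Pass@k is typically computed with the unbiased estimator from Chen et al. (2021): given n samples per problem of which c pass, pass@k = 1 − C(n−c, k) / C(n, k). A minimal implementation:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n generated samples per problem, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 1 correct completion out of 10 samples, evaluated at k=1
print(pass_at_k(n=10, c=1, k=1))  # 0.1
```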

Metrics

  • Functional Correctness: Pass@k, CodeBLEU, execution success rate
  • Information Retrieval: MRR, MAP, BLEU scores for search and generation
  • Classification Accuracy: Precision, recall, F1 for license and bug detection

Distribution

Access Information

📦 Full Dataset (3.05 TB):

  • Status: Hosting in progress on Hugging Face Hub
  • Content: Complete 397,475 repositories, 52,692 JSONL files
  • Distribution: codereality/codereality-1t (pending)
  • Alternatives: Torrent and S3 bucket options planned

📋 Evaluation Subset (19.0 GB):

  • Status: Available now
  • Content: 2,049 curated repositories, 323 JSONL files
  • Location: /eval/subset/ directory
  • Purpose: Research benchmarks and evaluation tasks
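Because the subset is plain JSONL, it loads with standard tooling. The sketch below uses the Hugging Face datasets library against a local checkout; the glob pattern assumes the archives sit directly under eval/subset/.

```python
from datasets import load_dataset  # pip install datasets

# Load every JSONL archive of the evaluation subset from a local checkout.
subset = load_dataset("json", data_files="eval/subset/*.jsonl", split="train")
print(subset)            # dataset size and column names
print(subset[0].keys())  # fields of the first repository record
```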

📚 Documentation & Tools:

  • GitHub Repository: Complete analysis scripts and benchmarks
  • Benchmark Results: Sample baselines and comparison data

File Organization

codereality-1t/
├── data/
│   ├── *.jsonl              # Repository archives (52,692 files)
│   └── manifest.json        # File checksums and metadata
├── analysis/
│   ├── dataset_index.json   # Complete file index
│   ├── metrics.json         # Analysis results
│   └── language_stats.json  # Language distribution
├── docs/
│   ├── DATASET_CARD.md      # This document
│   ├── LICENSE.md           # Dataset license
│   └── USAGE_EXAMPLES.md    # Code examples
└── eval/
    ├── subset/              # Evaluation subset (19.0 GB, available)
    └── benchmarks/          # Evaluation scripts

Checksums & Integrity

  • Hash Algorithm: SHA256
  • Manifest File: Complete checksums for all 52,692 JSONL files
  • Verification: sha256sum -c manifest.json
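If manifest.json maps file names to SHA256 digests (the exact schema is not reproduced here), verification can also be scripted directly:

```python
import hashlib
import json

def verify_manifest(manifest_path="data/manifest.json"):
    """Recompute SHA256 digests and compare them to the manifest.
    Assumes a flat {filename: sha256_hex} mapping; adjust to the real schema."""
    with open(manifest_path, "r", encoding="utf-8") as fh:
        manifest = json.load(fh)
    for filename, expected in manifest.items():
        digest = hashlib.sha256()
        with open(filename, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                digest.update(chunk)
        print(filename, "OK" if digest.hexdigest() == expected else "MISMATCH")
```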

Maintenance & Support

Contact Information

Update Policy

  • Version 1.0.0: Initial deliberately noisy release
  • Future Versions: May include cleaned/curated variants
  • Community Contributions: Cleaning scripts, evaluation tasks, and analysis tools welcome

Contribution Guidelines

  1. Bug Reports: Use GitHub issues for data quality problems
  2. Enhancement Requests: Suggest improvements via pull requests
  3. Research Papers: Share research using this dataset for community benefit
  4. Derived Datasets: Coordinate to avoid duplication and ensure proper attribution

Version History

v1.0.0 (Current)

  • Release Date: September 2025
  • Content: Complete 3.05 TB deliberately noisy dataset
  • Analysis: Full BigCode-compliant metrics on all 397,475 repositories
  • Status: Research-ready with comprehensive documentation

Community-Driven Roadmap

CodeReality-1T is a living dataset that evolves with community contributions:

  • v1.1.0 (Q1 2026): Enhanced evaluation subset with community feedback, improved benchmarks, and additional task frameworks
  • v1.2.0 (Q2 2026): License detection improvements, deduplication analysis tools, semantic duplicate estimation, and community filtering scripts
  • v2.0.0 (Q3 2026): Community-curated clean variant with quality filters, improved metadata, and production-ready subset

Community contributions actively encouraged: cleaning scripts, new benchmarks, evaluation tasks, data curation improvements, and quality assessment tools.

Citation

@misc{codereality2025,
  title={CodeReality-1T: A Large-Scale Deliberately Noisy Dataset for Robust Code Understanding},
  author={Vincenzo Gallo},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/vinsblack}},
  note={Version 1.0.0}
}

License

This dataset is released under [License Terms] with the following considerations:

  • Research Use: Freely available for academic and research purposes
  • Commercial Use: Requires individual license verification for each repository
  • Attribution: Please cite this dataset card and preserve original repository attribution
  • Liability: Provided as-is with no warranties regarding licensing or content accuracy

Dataset Card generated automatically from comprehensive analysis of all 397,475 repositories using BigCode-compliant methodology. Analysis completed in 63.7 hours with 100% coverage and no sampling.