---
license: mit
language:
- en
tags:
- code
---
# Filtered StarCoder Dataset Mini

## Dataset Description
This dataset contains filtered and processed code samples from 10 popular programming languages: C, C++, C#, Go, Java, JavaScript, Python, Ruby, Scala, and TypeScript. The dataset was created by filtering source code based on quality metrics, removing outliers, and standardizing the format for machine learning and code analysis applications.
## Key Features
- Cleaned and Filtered Code: Samples have been processed to remove outliers in terms of line length and code size
- Quality Metrics: Each sample includes metadata about average line length and line count
- Multi-language Support: 10 programming languages represented in separate subsets
- Consistent Format: All samples follow the same Parquet structure for easy processing
## Dataset Size
The complete dataset is approximately 12GB in size. Individual language files vary in size, with the largest being C++ (2GB) and the smallest being Scala (665MB).
## Dataset Statistics
| Language | Sample Count | Avg. Line Length | Avg. Line Count |
|---|---|---|---|
| C | 1,752,078 | 28.07 | 61.85 |
| C++ | 1,769,333 | 28.16 | 87.99 |
| C# | 1,763,508 | 29.53 | 44.29 |
| Go | 1,751,120 | 25.18 | 68.26 |
| Java | 1,779,659 | 30.84 | 54.35 |
| JavaScript | 1,718,133 | 27.68 | 44.07 |
| Python | 1,764,099 | 32.68 | 54.66 |
| Ruby | 1,756,771 | 27.35 | 27.34 |
| Scala | 952,890 | 35.30 | 44.38 |
| TypeScript | 1,738,885 | 29.17 | 36.84 |
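
As a rough sanity check, statistics like these can be recomputed from any single language file. A minimal sketch using pandas, assuming `python.parquet` has already been downloaded locally (see the "Manual Download" section below):

```python
import pandas as pd

# Assumes a local copy of one language file, e.g. python.parquet.
df = pd.read_parquet("python.parquet")

print("Sample count:    ", len(df))
print("Avg. line length:", round(df["avg_line_length"].mean(), 2))
print("Avg. line count: ", round(df["line_count"].mean(), 2))
```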
## Dataset Structure
The dataset is organized with separate Parquet files for each programming language:
- `c.parquet` - C language samples
- `cpp.parquet` - C++ language samples
- `c-sharp.parquet` - C# language samples
- `go.parquet` - Go language samples
- `java.parquet` - Java language samples
- `javascript.parquet` - JavaScript language samples
- `python.parquet` - Python language samples
- `ruby.parquet` - Ruby language samples
- `scala.parquet` - Scala language samples
- `typescript.parquet` - TypeScript language samples
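
The available files can also be listed programmatically with the `huggingface_hub` client (installable via `pip install huggingface_hub`); a small sketch:

```python
from huggingface_hub import list_repo_files

# List all files in the dataset repository and keep only the Parquet shards.
files = list_repo_files(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    repo_type="dataset",
)
print([f for f in files if f.endswith(".parquet")])
```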
Within each file, data is stored with the following schema:
- `language`: string (the programming language of the code sample)
- `code`: string (the complete code content)
- `avg_line_length`: float (average character count per line)
- `line_count`: integer (total number of lines in the code)
Each sample is stored as a row in the Parquet file with these four columns.
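
For illustration, the two metadata columns can be derived directly from the `code` field. The exact counting rules of the original pipeline (e.g., handling of trailing newlines or empty lines) are not documented here, so the sketch below is only an approximation:

```python
def compute_metadata(code: str) -> dict:
    """Return line-based metadata for one code sample (illustrative only)."""
    lines = code.splitlines() or [""]
    return {
        "avg_line_length": sum(len(line) for line in lines) / len(lines),
        "line_count": len(lines),
    }

sample = "def add(a, b):\n    return a + b\n"
print(compute_metadata(sample))  # {'avg_line_length': 15.0, 'line_count': 2}
```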
## How to Access the Dataset
### Using the Hugging Face `datasets` Library

This dataset is hosted on the Hugging Face Hub and can be easily accessed using the `datasets` library.
#### Install the Required Library

```bash
pip install datasets
```
#### Import Library

```python
from datasets import load_dataset
```
#### Load the Entire Dataset

```python
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini"
)
```
#### Load a Specific Language

```python
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    data_files="scala.parquet"
)
```
#### Stream Data

```python
dataset = load_dataset(
    "jugalgajjar/Filtered-StarCoder-Dataset-Mini",
    data_files="scala.parquet",
    streaming=True
)
```
#### Access Data Content (After Downloading)

```python
try:
    for example in dataset["train"].take(5):
        print(example)
        print("-" * 25)
except Exception as e:
    print(f"An error occurred: {e}")
```
### Manual Download
You can also manually download specific language files from the Hugging Face repository page:
- Visit https://huggingface.co/datasets/jugalgajjar/Filtered-StarCoder-Dataset-Mini
- Navigate to the "Files" tab
- Click on the language file you want to download (e.g., `python.parquet`)
- Use the download button to save the file locally
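
Once downloaded, a file can be loaded directly from disk, either through the `datasets` library or with pandas. A minimal sketch, assuming the file was saved as `python.parquet` in the working directory:

```python
from datasets import load_dataset
import pandas as pd

# Option 1: load the local Parquet file through the datasets library.
local_ds = load_dataset("parquet", data_files="python.parquet")

# Option 2: read it straight into a pandas DataFrame.
df = pd.read_parquet("python.parquet")
print(df.columns.tolist())  # ['language', 'code', 'avg_line_length', 'line_count']
```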
## Dataset Creation
This dataset was created through the following process:
- Original code samples were collected from the StarCoder dataset (URL)
- Statistical analysis was performed to identify quality metrics
- Outliers were removed using the IQR (interquartile range) method
- Samples were filtered to remove excessively long or short code examples
- Data was normalized and standardized across languages
- Metadata (average line length and line count) was calculated for each sample
- Final data was serialized in the efficient Parquet format for optimal storage and access speed
The processing pipeline included steps to (a rough filtering sketch follows this list):
- Remove code samples with abnormal line lengths (potential formatting issues)
- Filter out extremely long files (exceeding the 90th percentile)
- Ensure consistent formatting and structure
- Generate useful metadata for each example
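
The exact thresholds and columns used in the original filtering are not documented in this card, but an IQR-based pass combined with a 90th-percentile length cutoff might look roughly like the following sketch (assumption: filtering is applied per language file on the `avg_line_length` and `line_count` columns):

```python
import pandas as pd

def iqr_filter(df: pd.DataFrame, column: str, k: float = 1.5) -> pd.DataFrame:
    """Drop rows whose value in `column` falls outside the k * IQR whiskers."""
    q1, q3 = df[column].quantile([0.25, 0.75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return df[(df[column] >= lower) & (df[column] <= upper)]

df = pd.read_parquet("python.parquet")                        # one language file
df = iqr_filter(df, "avg_line_length")                        # drop abnormal line lengths
df = df[df["line_count"] <= df["line_count"].quantile(0.90)]  # drop very long files
```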
## Citation
If you use this dataset in your research or project, please cite it as follows:
```bibtex
@misc{fscdmini2025,
  author = {Jugal Gajjar and Kamalasankari Subramaniakuppusamy and Kaustik Ranaware},
  title = {Filtered StarCoder Dataset Mini},
  year = {2025},
  publisher = {HuggingFace},
  howpublished = {\url{https://huggingface.co/datasets/jugalgajjar/Filtered-StarCoder-Dataset-Mini}}
}
```
## License
This dataset is released under the MIT License. See the LICENSE file for more details.