
Bittensor Subnet 13 Reddit Dataset

Data-universe: The finest collection of social media data the web has to offer

Miner Data Compliance Agreement

In uploading this dataset, I am agreeing to the Macrocosmos Miner Data Compliance Policy.

Dataset Summary

This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.

Supported Tasks

The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:

  • Sentiment Analysis
  • Topic Modeling
  • Community Analysis
  • Content Categorization
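
As a minimal illustration of the community-analysis use case, the sketch below counts activity per subreddit over a few hypothetical records (the field names follow the "Data Fields" section below; the values are invented):

```python
from collections import Counter

# Hypothetical sample records; field names match the dataset card,
# values are invented for illustration only.
records = [
    {"dataType": "comment", "communityName": "r/formula1", "text": "Great race!"},
    {"dataType": "post", "communityName": "r/AmItheAsshole", "text": "AITA for declining?"},
    {"dataType": "comment", "communityName": "r/formula1", "text": "Odd strategy call."},
]

# Community analysis in its simplest form: activity per subreddit.
by_subreddit = Counter(r["communityName"] for r in records)
print(by_subreddit.most_common(1))  # [('r/formula1', 2)]
```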

Languages

Primary language: English. Because the data is contributed by a decentralized network of miners, some content may be in other languages.

Dataset Structure

Data Instances

Each instance represents a single Reddit post or comment with the following fields:

Data Fields

  • text (string): The main content of the Reddit post or comment.
  • label (string): Sentiment or topic category of the content.
  • dataType (string): Indicates whether the entry is a post or a comment.
  • communityName (string): The name of the subreddit where the content was posted.
  • datetime (string): The date and time when the content was posted.
  • username_encoded (string): An encoded version of the username to maintain user privacy.
  • url_encoded (string): An encoded version of any URLs included in the content.
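
The field list above can be modeled as a simple record type. This is an illustrative sketch only (the example values are invented, not taken from the dataset):

```python
from dataclasses import dataclass

# Record shape matching the fields listed in this card.
@dataclass
class RedditRecord:
    text: str               # main content of the post or comment
    label: str              # sentiment or topic category
    dataType: str           # "post" or "comment"
    communityName: str      # subreddit, e.g. "r/formula1"
    datetime: str           # timestamp string, e.g. "2025-05-25T12:00:00Z"
    username_encoded: str   # privacy-preserving encoded username
    url_encoded: str        # encoded URLs from the content

# Hypothetical example instance.
example = RedditRecord(
    text="Race thread for today's Grand Prix.",
    label="formula1",
    dataType="post",
    communityName="r/formula1",
    datetime="2025-05-25T12:00:00Z",
    username_encoded="a1b2c3d4",
    url_encoded="",
)
print(example.dataType, example.communityName)
```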

Data Splits

This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
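
One way to build such splits is to partition records on the `datetime` field. A minimal sketch, using invented records:

```python
from datetime import datetime, timezone

# Hypothetical records carrying the card's `datetime` field (ISO-8601 strings).
records = [
    {"text": "older comment", "datetime": "2025-05-24T10:00:00Z"},
    {"text": "newer comment", "datetime": "2025-05-29T10:00:00Z"},
]

def parse_ts(s: str) -> datetime:
    # fromisoformat only accepts a trailing "Z" from Python 3.11;
    # replacing it keeps this compatible with older versions.
    return datetime.fromisoformat(s.replace("Z", "+00:00"))

# Time-based split: everything before the cutoff is train, the rest is test.
cutoff = datetime(2025, 5, 28, tzinfo=timezone.utc)
train = [r for r in records if parse_ts(r["datetime"]) < cutoff]
test = [r for r in records if parse_ts(r["datetime"]) >= cutoff]
print(len(train), len(test))  # 1 1
```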

Dataset Creation

Source Data

Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.

Personal and Sensitive Information

All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
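
The card does not specify the encoding scheme. For intuition only, a common approach to this kind of pseudonymization is a salted hash, sketched below (hypothetical, not the actual method used by the miners):

```python
import hashlib

# Hypothetical illustration of salted-hash pseudonymization.
# NOT the actual encoding used for this dataset.
def encode_username(username: str, salt: str = "example-salt") -> str:
    digest = hashlib.sha256((salt + username).encode("utf-8")).hexdigest()
    return digest[:16]  # shortened, deterministic, not reversible

token = encode_username("some_redditor")
print(token)
```

The same input always maps to the same token, so per-user analysis (e.g. counting comments per author) stays possible without exposing real usernames.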

Considerations for Using the Data

Social Impact and Biases

Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.

Limitations

  • Data quality may vary due to the nature of social media sources.
  • The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
  • Temporal biases may exist due to real-time collection methods.
  • The dataset is limited to public subreddits and does not include private or restricted communities.

Additional Information

Licensing Information

The dataset is released under the MIT license. The use of this dataset is also subject to Reddit Terms of Use.

Citation Information

If you use this dataset in your research, please cite it as follows:

@misc{Axioris2025datauniversereddit_dataset_14,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={Axioris},
  year={2025},
  url={https://huggingface.co/datasets/Axioris/reddit_dataset_14}
}

Contributions

To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.

Dataset Statistics

[This section is automatically updated]

  • Total Instances: 7347798
  • Date Range: 2025-05-24T00:00:00Z to 2025-05-30T00:00:00Z
  • Last Updated: 2025-06-03T07:44:54Z

Data Distribution

  • Posts: 2.94%
  • Comments: 97.06%

Top 10 Subreddits

For full statistics, please refer to the stats.json file in the repository.

| Rank | Subreddit | Total Count | Percentage |
|------|-----------|-------------|------------|
| 1 | r/AmItheAsshole | 48375 | 0.66% |
| 2 | r/GOONED | 35873 | 0.49% |
| 3 | r/KinkTown | 35000 | 0.48% |
| 4 | r/AmIOverreacting | 32819 | 0.45% |
| 5 | r/formula1 | 30975 | 0.42% |
| 6 | r/MapPorn | 30857 | 0.42% |
| 7 | r/SquaredCircle | 30465 | 0.41% |
| 8 | r/Millennials | 29650 | 0.40% |
| 9 | r/popculturechat | 29627 | 0.40% |
| 10 | r/teenagers | 28821 | 0.39% |
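
The percentage column can be reproduced from the per-subreddit counts and the total instance figure given under Dataset Statistics:

```python
# Figures taken from the statistics above.
total = 7_347_798   # Total Instances
count = 48_375      # r/AmItheAsshole

pct = round(100 * count / total, 2)
print(f"{pct}%")  # → 0.66%
```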

Update History

| Date | New Instances | Total Instances |
|------|---------------|-----------------|
| 2025-05-25T08:52:00Z | 1793556 | 1793556 |
| 2025-05-26T02:55:18Z | 1822369 | 3615925 |
| 2025-05-26T20:58:47Z | 1558898 | 5174823 |
| 2025-05-27T15:00:08Z | 585570 | 5760393 |
| 2025-05-28T09:01:42Z | 635901 | 6396294 |
| 2025-05-29T03:03:53Z | 307754 | 6704048 |
| 2025-05-29T20:24:33Z | 537610 | 7241658 |
| 2025-05-30T14:25:10Z | 104026 | 7345684 |
| 2025-05-31T07:43:05Z | 2110 | 7347794 |
| 2025-06-01T01:43:34Z | 1 | 7347795 |
| 2025-06-01T19:44:01Z | 1 | 7347796 |
| 2025-06-02T13:44:27Z | 1 | 7347797 |
| 2025-06-03T07:44:54Z | 1 | 7347798 |