---
license: cc-by-nc-sa-4.0
task_categories:
  - text-classification
language:
  - en
tags:
  - legal
  - bias-detection
  - robustness
size_categories:
  - 10K<n<100K
pretty_name: RobustBiasBench
dataset_preview: data/robustbiasbench_dataset.csv
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: date
      dtype: string
    - name: bias_type
      dtype: string
    - name: normative_framing
      dtype: string
    - name: source
      dtype: string
    - name: policy
      dtype: string
    - name: bias_type_merged
      dtype: string
  download_size: 5300000
  dataset_size: 5300000
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/robustbiasbench_dataset.csv
---

# RobustBiasBench Dataset Description

This document provides an overview of the features and labels included in the RobustBiasBench dataset, which consists of over 18,000 policy excerpts annotated for bias type and normative framing.


## 📄 Dataset Format

The dataset is stored in CSV format and contains the following columns:

| Column Name | Description |
|---|---|
| `id` | Unique numeric ID for each policy excerpt |
| `date` | Year of the policy (extracted from the source document) |
| `bias_type` | Fine-grained original bias label assigned by annotators (e.g., age, gender, citizenship) |
| `normative_framing` | Whether the bias is presented implicitly or explicitly (`implicit`, `explicit`, or `no_bias`) |
| `source` | URL or citation source for the original policy document |
| `policy` | Text excerpt of the policy (typically 1–3 sentences) |
| `bias_type_merged` | Mapped bias class used for evaluation: one of `group_1`, `group_2`, or `no_bias` |
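
As a quick sanity check of this schema, the train split can be loaded with the `datasets` library. This is a minimal sketch: the repository ID below is an assumption and should be replaced with this dataset's actual Hub ID; the file path comes from the `configs` section above.

```python
# Minimal loading sketch. The repository ID is an assumption; replace it
# with this dataset's actual Hugging Face Hub ID if it differs.
from datasets import load_dataset

ds = load_dataset("bsakash/RobustBiasBench", split="train")

# The default config reads data/robustbiasbench_dataset.csv as a single train split.
print(ds.column_names)
# Expected: ['id', 'date', 'bias_type', 'normative_framing', 'source',
#            'policy', 'bias_type_merged']
print(ds[0]["policy"])
```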

## 🏷️ Label Description

### `bias_type`

Original annotation capturing specific bias domains:

- age, gender, race/culture, religion, disability → Group 2 (Demographic Bias)
- economic, education, political, citizenship, criminal_justice → Group 1 (Systemic Bias)
- no_bias → Procedural or neutral statements

### `bias_type_merged`

Mapped classes used for modeling (the full fine-grained-to-merged mapping is sketched after this list):

- `group_1`: Systemic/institutional bias
- `group_2`: Demographic/identity-based bias
- `no_bias`: Factual or operational content
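
The fine-grained-to-merged mapping described above can be written down as a small lookup table. This is a sketch: the exact spelling of each `bias_type` value (for example, whether the race/culture label is stored as `race/culture` or some other variant) is an assumption and should be verified against the column.

```python
# Sketch of the bias_type -> bias_type_merged mapping described above.
# Label spellings are assumptions; verify them against the actual column values.
BIAS_TYPE_TO_MERGED = {
    # Group 1: systemic / institutional bias
    "economic": "group_1",
    "education": "group_1",
    "political": "group_1",
    "citizenship": "group_1",
    "criminal_justice": "group_1",
    # Group 2: demographic / identity-based bias
    "age": "group_2",
    "gender": "group_2",
    "race/culture": "group_2",
    "religion": "group_2",
    "disability": "group_2",
    # Procedural or neutral statements
    "no_bias": "no_bias",
}

def merge_bias_type(bias_type: str) -> str:
    """Map a fine-grained bias_type label to its merged evaluation class."""
    return BIAS_TYPE_TO_MERGED[bias_type]
```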

### `normative_framing`

Captures how bias is framed:

- `explicit`: Bias is directly stated (e.g., "women are not eligible")
- `implicit`: Bias is implied through structure or condition (e.g., "must be a citizen to apply")
- `no_bias`: Used only when `bias_type = no_bias`
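
The last rule implies a consistency constraint between the two label columns: rows framed as `no_bias` should also carry `bias_type = no_bias`. A minimal check of that invariant, assuming the CSV path from the `configs` section:

```python
import pandas as pd

# Path as declared in the configs section; adjust if the file lives elsewhere.
df = pd.read_csv("data/robustbiasbench_dataset.csv")

# normative_framing == "no_bias" should only occur when bias_type == "no_bias".
violations = df[(df["normative_framing"] == "no_bias") & (df["bias_type"] != "no_bias")]
print(f"Rows violating the no_bias constraint: {len(violations)}")
```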

## 📊 Class Distribution

The dataset includes the following numbers of annotated examples per class:

- No Bias (`no_bias`): 6,017 examples
- Systemic Bias (`group_1`): 6,246 examples
- Demographic Bias (`group_2`): 6,141 examples

This near-even balance helps prevent models trained on the dataset from overfitting to any single class and supports learning to differentiate nuanced cases of policy bias.
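
The reported counts can be reproduced from the `bias_type_merged` column, for example with pandas (same path assumption as above):

```python
import pandas as pd

df = pd.read_csv("data/robustbiasbench_dataset.csv")

# Expected distribution (approximately balanced):
#   group_1  6,246   group_2  6,141   no_bias  6,017
print(df["bias_type_merged"].value_counts())
```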


## 🔒 License

This dataset is released under the CC BY-NC-SA 4.0 License and is intended for academic use in fairness and robustness research.