---
license: cc-by-4.0
task_categories:
  - text-generation
---

# 🧠 CC-HARD: A Challenging Dataset for Design-to-Code Generation

📄 Paper on arXiv · 📄 [Paper on ACM](https://doi.org/10.1145/3711896.3737016)

CC-HARD is a challenging benchmark dataset introduced in the KDD 2025 paper *LaTCoder: Converting Webpage Design to Code with Layout-as-Thought*. It was specifically designed to evaluate layout fidelity in webpage design-to-code generation.

The dataset consists of 128 webpage screenshots and their corresponding HTML/CSS code, manually curated from the Common Crawl corpus. Unlike prior datasets, CC-HARD emphasizes:

- Deep DOM hierarchies
- Visually complex and diverse layouts
- High tag density and structural variability

LaTCoder uses CC-HARD to demonstrate that existing MLLM-based design-to-code approaches struggle with layout preservation when faced with real-world webpages. This dataset provides a strong testbed for evaluating both general-purpose MLLMs and layout-aware code generation systems.


## 📦 Dataset Details

Each example in CC-HARD consists of:

- `image`: A high-resolution PNG screenshot of a real-world webpage design
- `text`: The corresponding HTML/CSS code used to render that design

## Dataset Comparison: CC-HARD vs. Design2Code-HARD

We compare CC-HARD with the Design2Code-HARD dataset across multiple structural and content dimensions. The following table summarizes the key statistics, as reported in the LaTCoder paper (Table 2):

| Metric | Design2Code-HARD | CC-HARD |
|---|---|---|
| Number of Samples | 80 | 128 |
| Avg. Total Length (tokens) | 8,900 ± 2,399 | 8,416 ± 2,190 |
| Avg. Text Length (tokens) | 3,554 ± 2,820 | 969 ± 762 |
| Avg. HTML Tags | 251 ± 232 | 274 ± 66 |
| Avg. DOM Depth | 10 ± 4 | 16 ± 3 |
| Avg. Unique Tags | 23 ± 5 | 27 ± 5 |
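
The structural statistics above (tag count, unique tags, DOM depth) can be approximated for any sample's `text` field with Python's standard-library HTML parser. This is an illustrative sketch of how such metrics are typically computed, not the paper's exact measurement code; the `VOID_TAGS` set and the sample HTML are assumptions for the example.

```python
from html.parser import HTMLParser

# Void elements never receive a closing tag, so they must not affect depth.
VOID_TAGS = {"br", "img", "hr", "meta", "link", "input"}

class DomStats(HTMLParser):
    """Collect the metrics from the table above: total HTML tags,
    unique tags, and maximum DOM depth."""

    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_depth = 0
        self.tag_count = 0
        self.tags = set()

    def handle_starttag(self, tag, attrs):
        self.tag_count += 1
        self.tags.add(tag)
        if tag not in VOID_TAGS:
            self.depth += 1
            self.max_depth = max(self.max_depth, self.depth)

    def handle_endtag(self, tag):
        if tag not in VOID_TAGS:
            self.depth -= 1

# A toy nested document: html > body > div > div > p.
sample = "<html><body><div><div><p>nested</p></div></div></body></html>"
stats = DomStats()
stats.feed(sample)
print(stats.tag_count, len(stats.tags), stats.max_depth)  # 5 4 5
```

Run over a full CC-HARD sample, counters like these would land near the table's averages (≈274 tags, ≈27 unique tags, depth ≈16).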

### Key Differences and Analysis

As discussed in the paper:

- While overall token lengths of samples are similar across datasets, text content in Design2Code-HARD is much longer, meaning it emphasizes textual richness.
- In contrast, CC-HARD's token budget is dominated by HTML structure, not content, making it harder for models to learn from context or infer layout from semantic cues.
- CC-HARD samples are structurally deeper (DOM depth 16 vs. 10), with more HTML tags and greater tag diversity, increasing layout-reasoning complexity.
- CC-HARD features a larger number of nested layout blocks, which amplifies the difficulty for MLLMs in preserving spatial relationships and hierarchy during code generation.

As a result, models that perform well on Design2Code-HARD often struggle on CC-HARD — a trend clearly shown in the benchmark results. This highlights the increased layout sensitivity and real-world difficulty embedded in CC-HARD, making it a more suitable testbed for evaluating layout-aware design-to-code systems.


## 🧾 Citation

```bibtex
@inproceedings{gui2025latcoder,
  author    = {Gui, Yi and Li, Zhen and Zhang, Zhongyi and Wang, Guohao and Lv, Tianpeng and Jiang, Gaoyang and Liu, Yi and Chen, Dongping and Wan, Yao and Zhang, Hongyu and Jiang, Wenbin and Shi, Xuanhua and Jin, Hai},
  title     = {LaTCoder: Converting Webpage Design to Code with Layout-as-Thought},
  year      = {2025},
  isbn      = {9798400714542},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3711896.3737016},
  doi       = {10.1145/3711896.3737016},
  booktitle = {Proceedings of the 31st ACM SIGKDD Conference on Knowledge Discovery and Data Mining V.2},
  pages     = {721--732},
  numpages  = {12},
  keywords  = {code generation, design to code, ui automation},
  location  = {Toronto ON, Canada},
  series    = {KDD '25}
}
```