Datasets:
license: apache-2.0
task_categories:
- text-generation
language:
- en
- fr
- bg
- hr
- cs
- da
- nl
- et
- fi
- de
- el
- hu
- ga
- it
- lt
- mt
- pl
- pt
- ro
- sk
- sl
- es
- sv
tags:
- legal
- finance
size_categories:
- 100B<n<1T
Open Government Dataset
Open Government is the largest aggregation of government text and data made available as part of open data programs.
In total, the dataset contains approximately 380B tokens. While Open Government aims to become a global resource, in its current state it mostly features open datasets from the US, France, European and international organizations.
The dataset comprises 16 collections curated through two different initiatives: Finance Commons and Legal Commons.
Use Open Government
```python
from datasets import load_dataset

data = load_dataset('PleIAs/open_government')
```
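Given the size of the corpus (roughly 380B tokens), streaming is usually preferable to a full download. A minimal sketch, assuming a single `train` split (check the dataset viewer for the actual configuration):

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it in full.
# The 'train' split name is an assumption.
data = load_dataset('PleIAs/open_government', split='train', streaming=True)

# Peek at the first record and its metadata fields
first = next(iter(data))
print(sorted(first.keys()))
```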
Dataset Structure
The data structure is aligned with the upcoming version of Common Corpus. While this means some fields are redundant, it ensures full compatibility between Open Government and the largest open pretraining corpus.
Data Fields
- identifier: unique text identifier
- collection: name of one of the 16 collections
- open type: generic type of open content (all "open_government" here)
- license: sharing rights for the content, either (US federal) public domain, Creative Commons, the French Licence ouverte or various other open data licenses.
- date: date of creation of the resource where known. Open data is largely retroactive, and a significant part of the open government data predates the 21st century.
- title: title of the resource when known or alternatively the filename.
- creator: institution publishing/collecting/curating the resource.
- language: automatically identified language.
- text: full text, without formatting.
- word_count: number of space delimited words.
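The fields above can be inspected directly on a streamed record. A short sketch (field names follow the list above; the exact spelling, e.g. "open type" vs. "open_type", should be checked against the actual schema):

```python
from datasets import load_dataset

# Inspect the metadata fields of a single record
data = load_dataset('PleIAs/open_government', split='train', streaming=True)
record = next(iter(data))

for field in ('identifier', 'collection', 'license', 'date', 'title',
              'creator', 'language', 'word_count'):
    print(f'{field}: {record.get(field)}')
print(record['text'][:300])  # first 300 characters of the raw text
```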
Provenance
Finance Commons
Finance Commons was released independently as a collection. It is a multimodal dataset containing both text and PDF data; the Finance Commons subset included in this release contains only the text data.
The dataset comprises several subsets:
- Securities and Exchange Commission (SEC) This dataset comprises the SEC annual reports (Form 10-K) for the years 1993 to 2024. Entries up to 2020 were compiled by Loukas et al. (2021). We added the reports from 2021-2024, which come from the EDGAR database, compiled using the EDGAR-Crawler toolkit. The documents are primarily in English. This dataset is available individually: https://huggingface.co/datasets/PleIAs/SEC.
- World Trade Organization (WTO) This dataset comprises documents from WTO’s official Documents Online platform. The documents cover the years 1995 to 2024. Documents are available in three official languages: English, French, and Spanish. Some documents are available in other languages, e.g. Chinese, Korean, Arabic, German, and Portuguese. This dataset is available individually: https://huggingface.co/datasets/PleIAs/WTO-Text.
- French Financial Markets Authority (AMF) This is a dataset of documents from the Autorité des marchés financiers (AMF), an independent public authority that regulates the French financial markets. The documents are primarily in French. This dataset is available individually: https://huggingface.co/datasets/PleIAs/AMF-Text.
- Tenders Electronic Daily (TED) EU Tenders This dataset is a collection of procurement notices published by the EU. The documents are published in the online version of the 'Supplement to the Official Journal' of the EU, dedicated to European public procurement. The documents are mostly in German, with French, Polish, and Spanish making up relatively large portions of the remaining documents. This dataset is available individually: https://huggingface.co/datasets/PleIAs/TEDEUTenders.
- General Agreement on Tariffs and Trade (GATT) Library This dataset comprises documents from GATT, which was an organization that promoted international commerce and the reduction of trade barriers among member states. Public documents were made available by the General Council of the WTO in 2006. The documents span from January 1, 1946, to September 6, 1996. Most of the documents are in English, but there are also documents in French, Spanish, and other languages. This dataset is available individually: https://huggingface.co/datasets/PleIAs/GATT_library.
Total tokens by subset:
| Dataset | Tokens |
|---|---|
| SEC | 9,648,522,224 |
| WTO | 2,783,387,015 |
| AMF | 4,912,438,575 |
| TEDEUTenders | 649,323,694 |
| GATT Library | 215,338,931 |
| Total Tokens | 18,209,010,439 |
Legal Commons
Legal Commons is a collection of legal and administrative datasets. The datasets come mostly from the EU and the US and cover a wide range of languages. These datasets are useful for developing language models with legal knowledge, as well as models suited to document processing in official administrative applications.
- Caselaw This dataset consists of 6,930,777 legal cases, digitized from Harvard Law School Library's physical collection of American case law. The dataset spans the years 1658 to 2020. The dataset is primarily in English.
- CourtListener This is a dataset of opinions, oral arguments, judges, judicial financial records, and federal filings put together by the Free Law Project.
- EUR-lex This is a dataset of 57,000 legislative documents from the EU. It is based on the dataset by Loza Mencía & Fürnkranz (2010), further developed by Chalkidis et al. (2019). The documents have also been annotated by the Publications Office of the EU with concepts from EuroVoc. The dataset covers all 24 EU languages.
- Eurovoc Eurovoc is a dataset containing 3,700,000 documents in 39 languages with associated EuroVoc labels. The documents come from Cellar, the data repository of the Publications Office of the European Union. This dataset was originally compiled by Sébastien Campion and the original version is available at https://huggingface.co/datasets/EuropeanParliament/Eurovoc.
- French Open Data This dataset comprises two different sub-collections, overwhelmingly in French:
- DILA: various open data programs coordinated by the French Directorate of Legal and Administrative Information (Direction de l'information légale et administrative; DILA), a French public administrative entity that disseminates information about laws and their application to the public.
- French Admin Sites: a comprehensive crawl of about 200 French public-sector websites, managed by Pleias for DINUM. It includes documents in both web and PDF formats.
- US PTO This dataset comprises documents from the United States Patent and Trademark Office (USPTO), the federal agency that grants patents and registers trademarks. It consists of actions issued by the agency from 2019 to 2022. It was originally published as part of the Pile of Law (Henderson et al., 2022).
- UN Digital Library This dataset comes from the UN Digital Library.
- PleIAs European Archive Dataset This comprises older archives with a historical/cultural heritage value from EU websites, e.g. the Archives of the EU Institute and the Council of the EU.
- OECD This data comes from the Organisation for Economic Co-operation and Development (OECD).
Tokens by subset:
| Dataset | Tokens |
|---|---|
| Caselaw | 13,823,526,194 |
| CourtListener | 22,463,960,458 |
| EUR-lex | 64,896,588,374 |
| Eurovoc | 31,613,548,606 |
| French Open Data | 24,480,289,170 |
| USPTO | 200,115,310,846 |
| UN Digital Library | 1,764,113,826 |
| European Archives | 3,627,192,797 |
| OECD | 575,213,706 |
| Total Tokens | 363,359,743,977 |
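As a rough sanity check, the per-collection sizes can be approximated from the metadata by summing the word_count field per collection. Note that the tables above report tokenizer tokens, not space-delimited words, so the figures will not match exactly; streaming the full corpus this way also takes a long time.

```python
from collections import Counter
from datasets import load_dataset

# Sum space-delimited word counts per collection (approximate, not tokens)
data = load_dataset('PleIAs/open_government', split='train', streaming=True)

words_per_collection = Counter()
for record in data:
    words_per_collection[record['collection']] += record['word_count']

for collection, words in words_per_collection.most_common():
    print(f'{collection}: {words:,} words')
```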
How to Use
Considerations for Using the Data
All data in Open Government are permissively licensed and may be used for both commercial and non-commercial purposes.
The dataset is multilingual. The language of each text is included in the metadata, so the data can be filtered by language. Additionally, some of the text data are historical. The year each text was written is included in the metadata, so it is possible to construct a dataset with a custom date cutoff if desired.
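A minimal sketch of such a filter, assuming the date field is a string beginning with a four-digit year (records without a parsable date are dropped):

```python
from datasets import load_dataset

data = load_dataset('PleIAs/open_government', split='train', streaming=True)

# Keep only French documents with a known creation date before 2000
def french_pre_2000(record):
    date = str(record.get('date') or '')
    return record['language'] == 'fr' and date[:4].isdigit() and int(date[:4]) < 2000

subset = data.filter(french_pre_2000)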
Personal and Sensitive Information
We have attempted to remove non-public personally identifiable information (PII). We primarily use Microsoft Presidio, but make additional modifications to account for language- and country-specific considerations, such as European phone number formats.
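For illustration, a minimal sketch of Presidio-based PII redaction (it requires a spaCy model for the target language; the actual pipeline adds language- and country-specific recognizers, e.g. for European phone number formats, which are not reproduced here):

```python
from presidio_analyzer import AnalyzerEngine
from presidio_anonymizer import AnonymizerEngine

# Detect PII entities, then replace them with placeholders
analyzer = AnalyzerEngine()
anonymizer = AnonymizerEngine()

text = "Contact Jean Dupont at +33 6 12 34 56 78."
results = analyzer.analyze(text=text, language="en")
print(anonymizer.anonymize(text=text, analyzer_results=results).text)
```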
Acknowledgements
The corpus was stored and processed with the generous support of the AI Alliance, Jean Zay (Eviden, Idris), Nvidia Inception program, Nebius AI, Tracto AI, Mozilla. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC). This dataset was also made in partnership with Wikimedia Enterprise for the Wikipedia part. The collection of the corpus has been largely facilitated thanks to the open science LLM community insights, cooperation and support (Eleuther AI, Allen AI, HuggingFace…).
