---
language:
- fr
tags:
- france
- service-public
- demarches
- embeddings
- administration
- open-data
- government
pretty_name: Service-Public.fr practical sheets dataset
size_categories:
- 10K<n<100K
license: etalab-2.0
---
# 🇫🇷 Service-Public.fr practical sheets dataset (Administrative Procedures)
This dataset is derived from the official Service-Public.fr platform and contains practical information sheets and resources targeting both individuals (Particuliers) and entrepreneurs (Entreprendre). The purpose of these sheets is to provide information on administrative procedures relating to a number of themes. The data is publicly available on data.gouv.fr and has been processed and chunked for optimized semantic retrieval and large-scale embedding use.
The dataset provides semantic-ready, structured, and chunked data of official content related to employment, labor law, and administrative procedures. These chunks have been vectorized with the `BAAI/bge-m3` embedding model to enable semantic search and retrieval tasks.
Each record represents a semantically coherent text fragment (chunk) from an original sheet, enriched with metadata and a precomputed embedding vector suitable for search and retrieval applications (e.g., RAG pipelines).
## 🗂️ Dataset Contents
The dataset is provided in Parquet format and includes the following columns:
| Column Name | Type | Description |
|---|---|---|
| `chunk_id` | `str` | Unique generated and encoded hash of each chunk. |
| `sid` | `str` | Article identifier from the source site. |
| `chunk_index` | `int` | Index of the chunk within its original article. |
| `audience` | `str` | Target audience: `Particuliers` and/or `Professionnels`. |
| `theme` | `str` | Thematic categories (e.g., `Famille - Scolarité`, `Travail - Formation`). |
| `title` | `str` | Title of the article. |
| `surtitre` | `str` | Higher-level theme in the article structure. |
| `source` | `str` | Dataset source label (always `"service-public"` in this dataset). |
| `introduction` | `str` | Introductory paragraph of the article. |
| `url` | `str` | URL of the original article. |
| `related_questions` | `list[dict]` | List of related questions, including their `sid` and URLs. |
| `web_services` | `list[dict]` | Associated web services (if any). |
| `context` | `list[str]` | Section names related to the chunk. |
| `text` | `str` | Textual content extracted and chunked from a section of the article. |
| `chunk_text` | `str` | Formatted text combining the `title`, `context`, `introduction`, and `text` values; used for embedding. |
| `embeddings_bge-m3` | `str` | Embedding vector of `chunk_text` computed with `BAAI/bge-m3` (1024 dimensions), stored as a JSON array string. |
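A quick way to inspect this structure, assuming the Parquet file has been downloaded locally (the filename matches the example later in this card):

```python
import pandas as pd

# Load the dataset and take a quick look at its structure.
df = pd.read_parquet("service-public-latest.parquet")

print(df.shape)             # (number of chunks, number of columns)
print(df.columns.tolist())  # the columns described in the table above
print(df.iloc[0][["sid", "chunk_index", "title", "url"]])
```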
## 🛠️ Data Processing Methodology
### 1. 📥 Field Extraction
The following fields were extracted and/or transformed from the original XML files:
- **Basic fields:** `sid`, `theme`, `title`, `surtitre`, `introduction`, `url`, `related_questions`, and `web_services` are extracted directly from the XML files, with some processing where needed.
- **Generated fields:**
  - `chunk_id`: a unique generated and encoded hash for each chunk.
  - `chunk_index`: the index of the chunk within its article. Each article has a unique `sid`.
  - `source`: always `"service-public"` in this dataset.
- **Textual fields:**
  - `context`: optional contextual hierarchy (e.g., nested section names).
  - `text`: a semantically coherent fragment of textual content, extracted from the XML document structure for a given `sid`.
The `source` column is a constant here because this dataset was built at the same time as the Travail Emploi dataset. The two datasets were intended to be grouped into a single vector collection, so they carry different `source` values.
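The exact recipe behind `chunk_id` is not spelled out in this card; the sketch below shows one plausible way such a hash could be generated from fields that identify a chunk (the choice of inputs and of SHA-256 are assumptions):

```python
import hashlib

def make_chunk_id(sid: str, chunk_index: int, text: str) -> str:
    # Hypothetical recipe: combine identifying fields and hash them.
    # The actual inputs and digest used to build the dataset may differ.
    payload = f"{sid}:{chunk_index}:{text}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

print(make_chunk_id("F10029", 0, "Première partie du texte..."))  # example values are hypothetical
```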
### 2. ✂️ Generation of `chunk_text`
The `chunk_text` value combines the `title` and `introduction` of the article, the `context` values of the chunk, and the chunk's textual content (`text`). This strategy is designed to improve semantic search for document retrieval use cases on administrative procedures.
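The exact template and separators used to assemble `chunk_text` are not specified here; a minimal sketch of the idea, with an assumed newline-based layout:

```python
def build_chunk_text(title: str, context: list[str], introduction: str, text: str) -> str:
    # Assumed layout: title, section hierarchy, introduction, then the chunk body.
    # The separators used to build the actual dataset may differ.
    parts = [title, " > ".join(context), introduction, text]
    return "\n\n".join(part for part in parts if part)
```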
LangChain's `RecursiveCharacterTextSplitter` was used to produce the chunks (the `text` values), as sketched below, with the following parameters:

- `chunk_size` = 1500 (to maximize compatibility with most LLM context windows)
- `chunk_overlap` = 200
- `length_function` = `len`
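These parameters translate directly into code (the import path is the classic LangChain one; `article_text` is a placeholder):

```python
from langchain.text_splitter import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,      # characters per chunk, sized for common LLM context windows
    chunk_overlap=200,    # overlap preserves continuity across chunk boundaries
    length_function=len,  # chunk length measured in characters
)

article_text = "Full text of one article..."  # placeholder content
chunks = splitter.split_text(article_text)    # yields the `text` values
```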
### 3. 🧠 Embeddings Generation
Each `chunk_text` was embedded with the `BAAI/bge-m3` model. The resulting embedding vector is stored in the `embeddings_bge-m3` column as a string, but it can easily be parsed back into a `list[float]` or a NumPy array.
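One way to reproduce such vectors is the FlagEmbedding library, which distributes BGE-M3 (a sketch; the `use_fp16` setting and the batch of texts are assumptions):

```python
from FlagEmbedding import BGEM3FlagModel

# Load BGE-M3; use_fp16 trades a little precision for faster inference.
model = BGEM3FlagModel("BAAI/bge-m3", use_fp16=True)

chunk_texts = ["Titre\n\nIntroduction\n\nContenu du chunk..."]  # placeholder `chunk_text` values
# The dense vectors have 1024 dimensions, matching the `embeddings_bge-m3` column.
dense_vecs = model.encode(chunk_texts)["dense_vecs"]
```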
## 📌 Embedding Use Notice
⚠️ The `embeddings_bge-m3` column is stored as a stringified list of floats (e.g., `"[-0.03062629,-0.017049594,...]"`).

To use it as a vector, parse it back into a list of floats or a NumPy array. For example, to load the dataset into a DataFrame:
```python
import pandas as pd
import json

df = pd.read_parquet("service-public-latest.parquet")
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)
```
π Source & License
🔗 Sources:

- Service-Public.fr official website
- Data.Gouv.fr: Fiches pratiques et ressources de Service-Public.fr Particuliers
- Data.Gouv.fr: Fiches pratiques et ressources Entreprendre - Service-Public.fr
📜 License:

Open License (Etalab): this dataset is publicly available and may be reused under the terms of the Etalab open license.