---
dataset_info:
  features:
  - name: etextno
    dtype: int64
  - name: book_title
    dtype: string
  - name: author
    dtype: string
  - name: issued
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: train
    num_bytes: 21144011332
    num_examples: 58653
  download_size: 12884319326
  dataset_size: 21144011332
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Gutenberg-BookCorpus-Cleaned-Data-English
This dataset has been cleaned and preprocessed with the `Gutenberg_English_Preprocessor` class (given below), starting from the Kaggle dataset *75,000+ Gutenberg Books and Metadata 2025*. It is restricted to English-language books whose rights are marked "Public domain in the USA", so it can be used freely anywhere. The reference Gutenberg metadata is also available and can be downloaded with the following CLI commands:
```bash
pip install kaggle
kaggle kernels output lokeshparab/gutenberg-metadata-downloader -p /path/to/dest
```
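The cleaned corpus itself can be read directly with the 🤗 `datasets` library, following the `configs` entry in the card header. A minimal sketch, assuming a hypothetical repo id `<user>/Gutenberg-BookCorpus-Cleaned-Data-English` (replace with the actual Hub path); streaming avoids pulling the full ~13 GB download (~21 GB on disk) at once:

```python
from datasets import load_dataset

# Hypothetical repo id -- substitute the actual Hugging Face Hub path.
ds = load_dataset(
    "<user>/Gutenberg-BookCorpus-Cleaned-Data-English",
    split="train",
    streaming=True,  # the full dataset is ~21 GB decompressed
)

# Peek at the first record: etextno, book_title, author, issued, context.
first = next(iter(ds))
print(first["book_title"], "-", first["author"])
```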
## About Project Gutenberg
Project Gutenberg is a digital library that hosts over 75,000 free eBooks. Users can choose among EPUB, Kindle, and plain text formats, download them, or read them online. The library primarily focuses on older literary works whose U.S. copyright has expired. Thousands of volunteers have digitized and meticulously proofread the eBooks for readers to enjoy.
## Dataset Details
| Column | Description |
|---|---|
| etextno | Unique identifier for each book. |
| book_title | Title of the book. |
| author | Author(s) of the respective book. |
| issued | Date when the book was published or added to the collection. |
| context | Cleaned and preprocessed plain-text content of the book, in UTF-8 encoding. |
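For orientation, a single record is a flat dictionary with these five fields. The values below are illustrative placeholders, not a real row from the corpus:

```python
# Hypothetical record, for illustration only -- not an actual row.
example = {
    "etextno": 11,
    "book_title": "Alice's Adventures in Wonderland",
    "author": "Lewis Carroll",
    "issued": "2008-06-27",
    "context": "Alice was beginning to get very tired of sitting by her sister...",
}
```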
## Gutenberg English Preprocessor (Methodology)
The Gutenberg English Preprocessor is designed to clean and preprocess text data from Project Gutenberg files by removing unwanted patterns such as special markers, Gutenberg-specific sentences, and decorative text blocks.
Notebook reference: Click here

Features:
- Removes Blocks Enclosed in `=` Symbols: eliminates text sections framed by lines of `=` symbols, often found in decorative headers or footers.
- Removes Gutenberg-Specific Sentences: filters out sentences containing the term "Gutenberg" in any case (uppercase, lowercase, or mixed).
- Removes Small Print Notices: identifies and removes text segments marked as "Small Print" content.
- Trims Text Between Project Gutenberg Start/End Markers: extracts only the content enclosed between `*** START OF THE PROJECT GUTENBERG...` and `*** END OF THE PROJECT GUTENBERG...`.
- Removes Inline and Block Patterns Marked with `*`, `**`, `***`, etc.: cleans unwanted text patterns that are enclosed in stars.
Class definition:
```python
import re


class Gutenberg_English_Preprocessor:
    """
    A text preprocessor designed to clean Project Gutenberg text data.

    This class removes unwanted patterns like:
    - Blocks enclosed in '=' lines
    - Sentences containing "Gutenberg" (case insensitive)
    - "Small Print" sections from Project Gutenberg files
    - Blocks enclosed in '*' patterns
    """

    def __init__(self, text: str):
        """
        Initializes the Gutenberg_English_Preprocessor with the provided text.

        Args:
            text (str): The text content to be processed.
        """
        self.text = text

    def remove_equal_sign_blocks(self):
        """
        Removes blocks of text enclosed by lines containing only '=' symbols.

        Example:
            ========================
            This content will be removed.
            ========================
        """
        equal_block_pattern = r'^\s*=+\s*\n(?:.*?\n)*?\s*=+\s*$'
        self.text = re.sub(equal_block_pattern, '', self.text, flags=re.MULTILINE)
        self.text = self.text.strip()

    def remove_gutenberg_sentences(self):
        """
        Removes sentences that contain the word "Gutenberg" in any case format.

        Example:
            "This is a Project Gutenberg text."  → Removed
            "Random sentence without Gutenberg." → Removed
            "This is a normal sentence."         → Retained
        """
        gutenberg_pattern = r'^[^\n]*\bgutenberg\b[^\n]*\n?'
        self.text = re.sub(gutenberg_pattern, '', self.text,
                           flags=re.IGNORECASE | re.MULTILINE)
        self.text = self.text.strip()

    def remove_small_print(self):
        """
        Removes Project Gutenberg's "Small Print" sections.
        These sections often contain legal disclaimers and metadata.
        """
        pattern1 = r'\*\*\*START\*\*THE SMALL PRINT.*?\*END\*THE SMALL PRINT!'
        pattern2 = r'\*\*\*START\*\*THE SMALL PRINT.*?\*END THE SMALL PRINT'
        self.text = re.sub(pattern1, '', self.text, flags=re.DOTALL)
        self.text = re.sub(pattern2, '', self.text, flags=re.DOTALL)
        self.text = self.text.strip()

    def start_end(self):
        """
        Trims the text to retain only the content between:
        - "*** START OF THE PROJECT GUTENBERG..."
        - "*** END OF THE PROJECT GUTENBERG..."

        Ensures non-essential content outside these markers is excluded.
        """
        str_str = "*** START OF THE PROJECT GUTENBERG"
        end_str = "*** END OF THE PROJECT GUTENBERG"
        start_idx = self.text.find(str_str)
        end_idx = self.text.find(end_str)
        if start_idx != -1 and end_idx != -1:
            self.text = self.text[start_idx:end_idx]

    def remove_patterns(self):
        """
        Removes patterns enclosed by '*' characters, such as:
        - Inline patterns like "* text *", "** text **", etc.
        - Standalone patterns and multi-line blocks enclosed in '*'
        """
        star_pattern = r'^\s*\*{1,4}.*?\*{1,4}\s*$'
        self.text = re.sub(star_pattern, '', self.text,
                           flags=re.MULTILINE | re.DOTALL)
        self.text = self.text.strip()

    def preprocess(self):
        """
        Executes the full text preprocessing pipeline by calling all
        individual cleaning functions in the desired sequence.

        Returns:
            str: The cleaned and processed text content.
        """
        self.start_end()
        self.remove_small_print()
        self.remove_patterns()
        self.remove_equal_sign_blocks()
        self.remove_gutenberg_sentences()
        return self.text
```

### Execution Steps
```python
preprocessor = Gutenberg_English_Preprocessor(text="Here contents Gutenberg text")
clean_text = preprocessor.preprocess()
```
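To see the whole pipeline in action, here is a small self-contained example on a synthetic Gutenberg-style string. The `*** START ... ***` / `*** END ... ***` markers follow the real convention, but the surrounding lines are invented for illustration:

```python
raw = """Front-matter junk before the book starts.
*** START OF THE PROJECT GUTENBERG EBOOK EXAMPLE ***
This sentence mentions Project Gutenberg, so it is dropped.
Alice was beginning to get very tired of sitting by her sister.
*** END OF THE PROJECT GUTENBERG EBOOK EXAMPLE ***
License boilerplate after the book ends.
"""

cleaned = Gutenberg_English_Preprocessor(raw).preprocess()
print(cleaned)
# -> Alice was beginning to get very tired of sitting by her sister.
```

The start/end trim runs first, the star-pattern pass then strips the leftover `*** START ... ***` line itself, and the Gutenberg-sentence filter removes the remaining boilerplate line, leaving only the book text.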
## Usage
This dataset can be effectively applied to various Natural Language Processing (NLP) and Machine Learning (ML) tasks, such as:
- Creating Embeddings: Extract meaningful vector representations for search engines and recommendation systems (a short sketch follows this list).
- Training Transformers: Utilize the dataset to train transformer models like BERT, GPT, etc., for improved language understanding and generation.
- Language Model Fine-tuning: Fine-tune LLMs (Large Language Models) to enhance performance in specific domains or tasks.
- Text Analysis and Classification: Conduct topic modeling, sentiment analysis, or language detection.
- Information Retrieval: Develop powerful search systems by indexing the dataset with metadata attributes.
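As one concrete illustration of the embedding use case above, the sketch below encodes the `context` field of a few streamed records with `sentence-transformers`. The repo id is hypothetical, and the model name and chunk length are arbitrary choices, not part of this dataset:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Hypothetical repo id -- substitute the actual Hugging Face Hub path.
ds = load_dataset("<user>/Gutenberg-BookCorpus-Cleaned-Data-English",
                  split="train", streaming=True)

model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary small model

# Embed the first 1,000 characters of a handful of books.
for _, record in zip(range(5), ds):
    chunk = record["context"][:1000]
    vector = model.encode(chunk)  # numpy array, 384 dims for this model
    print(record["book_title"], vector.shape)
```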