DOM Formula Assignment using K-Nearest Neighbors
A Machine Learning Approach to Enhanced Molecular Formula Assignment in Fulvic Acid DOM Mass Spectra
Paper: Under review
Abstract
Dissolved organic matter (DOM) is a critical component of aquatic ecosystems, with the fulvic acid fraction (FA-DOM) exhibiting high mobility and ready bioavailability to microbial communities. Understanding its molecular composition is vital, but the heterogeneity of the material, which comprises a vast number of diverse compounds, makes this task challenging. Existing methods often struggle with incomplete formula assignment or reduced coverage, highlighting the need for a better approach. In this study, we developed a machine learning approach using the k-nearest neighbors (KNN) algorithm to predict molecular formulas from ultra-high-resolution mass spectrometry data. The model was trained on chemical formulas assigned to multiple DOM samples acquired on 7 Tesla (7T) and 21 Tesla (21T) Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) systems, and tested on an independent 9.4 Tesla (9.4T) FT-ICR MS fulvic acid dataset. A synthetic dataset of plausible elemental combinations (C, H, O, N, S) was also generated to enhance generalization. Our approach achieved a 99.9% assignment rate on the labeled test set and assigned a total of 13,605 formulas to unlabeled peaks, versus 5,914 formulas assigned by the existing approach, an approximately 2.3× improvement in formula assignment coverage.
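For context, a synthetic dataset of this kind can be viewed as an enumeration of chemically plausible CHONS formulas paired with their exact monoisotopic masses. The snippet below is a minimal sketch of such a generator; the element ranges and plausibility filters are illustrative assumptions, not the exact rules used in the study.

```python
from itertools import product

# Exact monoisotopic masses of the lightest isotopes
MASS = {"C": 12.0, "H": 1.0078250319, "O": 15.9949146221, "N": 14.0030740052, "S": 31.97207069}

def generate_synthetic_formulas(max_mass=800.0):
    """Enumerate CHONS combinations and keep chemically plausible ones (illustrative filters)."""
    rows = []
    for c, h, o, n, s in product(range(4, 45), range(4, 81), range(0, 31), range(0, 3), range(0, 2)):
        # Illustrative DOM-style plausibility filters: elemental ratios and a
        # non-negative, integer double-bond equivalent (DBE)
        if not (0.3 <= h / c <= 2.2 and o / c <= 1.2):
            continue
        dbe = c - h / 2 + n / 2 + 1
        if dbe < 0 or dbe != int(dbe):
            continue
        mass = c * MASS["C"] + h * MASS["H"] + o * MASS["O"] + n * MASS["N"] + s * MASS["S"]
        if mass <= max_mass:
            formula = f"C{c}H{h}" + "".join(f"{el}{k}" for el, k in (("O", o), ("N", n), ("S", s)) if k)
            rows.append((mass, formula))
    return rows

# Example: (mass, formula) pairs that could serve as synthetic KNN training data
print(generate_synthetic_formulas(max_mass=300.0)[:3])
```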
Model Variants
Single Models (8 variants)
Trained on individual datasets (7T or 21T FT-ICR MS data):
| Data Source | K | Metric | Variant Name |
|---|---|---|---|
| 7T | 1 | Euclidean | knn_7T_k1_euclidean |
| 7T | 1 | Manhattan | knn_7T_k1_manhattan |
| 7T | 3 | Euclidean | knn_7T_k3_euclidean |
| 7T | 3 | Manhattan | knn_7T_k3_manhattan |
| 21T | 1 | Euclidean | knn_21T_k1_euclidean |
| 21T | 1 | Manhattan | knn_21T_k1_manhattan |
| 21T | 3 | Euclidean | knn_21T_k3_euclidean |
| 21T | 3 | Manhattan | knn_21T_k3_manhattan |
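Conceptually, each single-model variant is a nearest-neighbor lookup from observed m/z to an assigned formula, with K and the distance metric taken from the variant name. Below is a minimal sketch using scikit-learn, with hypothetical placeholder training data standing in for the 7T or 21T assignments:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical placeholder training data: m/z values and their assigned formulas
train_masses = np.array([[255.0869], [411.2019], [527.2492]])    # shape (n_peaks, 1)
train_formulas = np.array(["C12H15O6", "C21H31O8", "C26H39O11"])  # class labels

# K and metric correspond to the variant name, e.g. knn_7T_k1_euclidean
knn = KNeighborsClassifier(n_neighbors=1, metric="euclidean")
knn.fit(train_masses, train_formulas)

print(knn.predict(np.array([[255.0870]])))  # -> ['C12H15O6']
```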
Ensemble Models (8 variants)
Each combines multiple sub-models trained on different data versions:
| Data Source | K | Metric | Variant Name | Sub-models |
|---|---|---|---|---|
| 7T-21T | 1 | Euclidean | knn_7T21T_k1_euclidean_ensemble | 2 (ver2+ver3) |
| 7T-21T | 1 | Manhattan | knn_7T21T_k1_manhattan_ensemble | 2 (ver2+ver3) |
| 7T-21T | 3 | Euclidean | knn_7T21T_k3_euclidean_ensemble | 2 (ver2+ver3) |
| 7T-21T | 3 | Manhattan | knn_7T21T_k3_manhattan_ensemble | 2 (ver2+ver3) |
| Synthetic | 1 | Euclidean | knn_Synthetic_k1_euclidean_ensemble | 3 (ver2+ver3+synth) |
| Synthetic | 1 | Manhattan | knn_Synthetic_k1_manhattan_ensemble | 3 (ver2+ver3+synth) |
| Synthetic | 3 | Euclidean | knn_Synthetic_k3_euclidean_ensemble | 3 (ver2+ver3+synth) |
| Synthetic | 3 | Manhattan | knn_Synthetic_k3_manhattan_ensemble | 3 (ver2+ver3+synth) |
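The ensembles combine per-sub-model predictions. The exact combination rule is internal to the released model, so the sketch below assumes a simple majority vote with ties broken by the first sub-model; this is a hypothetical stand-in, not necessarily the released logic.

```python
from collections import Counter
import numpy as np

def ensemble_predict(sub_models, masses):
    """Majority vote over per-sub-model formula predictions (illustrative only)."""
    # Each row of `votes` holds one sub-model's predictions for all masses
    votes = np.array([m.predict(masses) for m in sub_models])
    merged = []
    for i in range(votes.shape[1]):
        counts = Counter(votes[:, i])
        # most_common(1) breaks ties by insertion order, i.e. the first sub-model
        merged.append(counts.most_common(1)[0][0])
    return np.array(merged)
```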
Performance
Results on combined test sets (Suwannee River Fulvic Acid + Pahokee River Fulvic Acid + others):
| Model | True Predictions | New Assignments | False Predictions | Assignment Rate |
|---|---|---|---|---|
| Synthetic (K=1, Euclidean) | 2,623 | 1,423 | 1 | 99.975% |
| Synthetic (K=1, Manhattan) | 2,623 | 1,423 | 1 | 99.975% |
| Synthetic (K=3, Euclidean) | 2,631 | 1,415 | 1 | 99.975% |
| Synthetic (K=3, Manhattan) | 2,631 | 1,415 | 1 | 99.975% |
| 7T-21T (K=1, Euclidean) | 3,851 | 8 | 188 | 95.355% |
| 7T-21T (K=1, Manhattan) | 3,851 | 8 | 188 | 95.355% |
| 7T-21T (K=3, Euclidean) | 3,846 | 10 | 191 | 95.280% |
| 7T-21T (K=3, Manhattan) | 3,846 | 10 | 191 | 95.280% |
| 21T (K=1, Euclidean) | 3,835 | 10 | 202 | 95.009% |
| 21T (K=1, Manhattan) | 3,835 | 10 | 202 | 95.009% |
| 21T (K=3, Euclidean) | 3,831 | 11 | 205 | 94.935% |
| 21T (K=3, Manhattan) | 3,831 | 11 | 205 | 94.935% |
| 7T (K=1, Euclidean) | 3,201 | 6 | 840 | 79.244% |
| 7T (K=1, Manhattan) | 3,201 | 6 | 840 | 79.244% |
| 7T (K=3, Euclidean) | 3,201 | 6 | 840 | 79.244% |
| 7T (K=3, Manhattan) | 3,201 | 6 | 840 | 79.244% |
Key Findings:
- Synthetic models achieve the highest assignment rate (99.975%) and make many new predictions (1,423 novel formulas)
- 7T-21T ensemble models provide the best performance for real DOM samples (95.4% with only 8 new assignments)
- Recommended for most users: 7T-21T ensemble (K=1) - optimal balance of accuracy and confidence
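For reference, the assignment rates in the table are consistent with the ratio of correct plus new assignments over all evaluated peaks; this formula is inferred from the reported counts rather than taken from the paper:

```python
# Inferred from the table: rate = (true + new) / (true + new + false)
true_pred, new_assign, false_pred = 3851, 8, 188  # 7T-21T ensemble, K=1
rate = (true_pred + new_assign) / (true_pred + new_assign + false_pred)
print(f"{rate:.3%}")  # 95.355%
```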
Quick Start
Installation
```bash
pip install transformers huggingface_hub joblib scikit-learn
```
Load Default Model
```python
from transformers import AutoModel
import numpy as np

# Load best model (7T-21T, K=1, Euclidean)
model = AutoModel.from_pretrained(
    "SaeedLab/dom-formula-assignment-using-knn",
    trust_remote_code=True
)

# Prepare mass data
masses = np.array([[245.1234], [387.2156], [512.3478]])

# Get formula predictions
predictions = model(masses)
print(predictions)
# Output: ['C12H15O6' 'C20H31O8' 'C28H48O9']
```
Load Specific Variant
```python
# Load 21T model with K=1 and Euclidean distance
model = AutoModel.from_pretrained(
    "SaeedLab/dom-formula-assignment-using-knn",
    data_source="21T",
    k_neighbors=1,
    metric="euclidean",
    trust_remote_code=True
)

# Load 7T-21T ensemble (automatically loads 2 sub-models)
model = AutoModel.from_pretrained(
    "SaeedLab/dom-formula-assignment-using-knn",
    data_source="7T-21T",
    k_neighbors=1,
    metric="euclidean",
    trust_remote_code=True
)
```
Batch Prediction
```python
import pandas as pd

# Load your peak list
peaks = pd.read_csv("my_peaks.csv")
masses = peaks['m/z'].values.reshape(-1, 1)

# Predict formulas
formulas = model(masses)

# Add to dataframe
peaks['formula'] = formulas
peaks.to_csv("annotated_peaks.csv", index=False)
```
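A common next step in DOM studies is to convert assigned formulas into elemental ratios (H/C, O/C) for van Krevelen-style analysis. The optional post-processing sketch below assumes the predicted formulas use the simple C/H/O/N/S notation shown in the examples above:

```python
import re
import pandas as pd

def element_counts(formula):
    """Parse a formula string such as 'C12H15O6' into element counts."""
    counts = {"C": 0, "H": 0, "O": 0, "N": 0, "S": 0}
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        if element in counts:
            counts[element] += int(num) if num else 1
    return counts

peaks = pd.read_csv("annotated_peaks.csv")
elements = peaks["formula"].apply(element_counts).apply(pd.Series)
peaks["H/C"] = elements["H"] / elements["C"]
peaks["O/C"] = elements["O"] / elements["C"]
peaks.to_csv("annotated_peaks_ratios.csv", index=False)
```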
Model Selection Guide
| Use Case | Recommended Model | Why? |
|---|---|---|
| Real DOM samples (best overall) | 7T-21T ensemble (K=1) | Highest verified accuracy (95.4%), minimal new assignments |
| Maximum assignment rate | Synthetic ensemble (K=1) | 99.98% assignment rate (note: makes many novel predictions) |
| 21T data only | 21T (K=1, Euclidean) | Optimized for 21T instrument data |
| 7T data only | 7T (K=1, Euclidean) | Optimized for 7T instrument data |
| Synthetic/simulated data | Synthetic ensemble | Trained on computationally generated formulas |
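If the variant needs to be chosen programmatically, the table can be encoded as a small lookup that returns the keyword arguments shown in the loading examples above. The use-case keys are informal labels, and the "Synthetic" data_source value is assumed from the variant naming rather than confirmed by the loader documentation:

```python
from transformers import AutoModel

# Hypothetical convenience lookup mirroring the table above; the keyword
# arguments follow the documented from_pretrained options (data_source,
# k_neighbors, metric). "Synthetic" as a data_source value is an assumption.
RECOMMENDED = {
    "real_dom": {"data_source": "7T-21T", "k_neighbors": 1, "metric": "euclidean"},
    "max_rate": {"data_source": "Synthetic", "k_neighbors": 1, "metric": "euclidean"},
    "21T_only": {"data_source": "21T", "k_neighbors": 1, "metric": "euclidean"},
    "7T_only": {"data_source": "7T", "k_neighbors": 1, "metric": "euclidean"},
}

def load_recommended(use_case):
    """Load the recommended variant for an informal use-case label."""
    return AutoModel.from_pretrained(
        "SaeedLab/dom-formula-assignment-using-knn",
        trust_remote_code=True,
        **RECOMMENDED[use_case],
    )
```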
License
This model and associated code are released under the CC-BY-NC-ND 4.0 license and may only be used for non-commercial, academic research purposes with proper attribution. Any commercial use, sale, or other monetization of this model and its derivatives, which include models trained on outputs from the model or datasets created from the model, is prohibited and requires prior approval. Downloading the model requires prior registration on Hugging Face and agreeing to the terms of use. By downloading this model, you agree not to distribute, publish or reproduce a copy of the model. If another user within your organization wishes to use the model, they must register as an individual user and agree to comply with the terms of use. Users may not attempt to re-identify the deidentified data used to develop the underlying model. If you are a commercial entity, please contact the corresponding author.
Contact
For any additional questions or comments, contact Fahad Saeed ([email protected]).