---
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-Small-24B-Instruct-2501
pipeline_tag: text-generation
tags:
- smiles
- chemistry
- reasoning
---

# ether0
ether0 is a 24B language model trained to reason in English and output molecular structures as SMILES. It was derived from Mistral-Small-24B-Instruct-2501 through supervised fine-tuning and reinforcement learning. Ask questions in English; they may also include molecules specified as SMILES, which need not be canonical. ether0 has limited support for IUPAC names.
## Usage
This model is trained to reason in English and output a molecule. It is NOT a general-purpose chat model. It has been trained specifically for these tasks:
- IUPAC names to structures
- formulas to structures
- modifying solubility by a specific LogS amount
- constrained edits (e.g., do not affect group X or do not affect scaffold)
- pKa
- smell/scent
- human cell receptor binding + mode (e.g., agonist)
- ADME properties (e.g., MDCK efflux ratio, LD50)
- GHS classifications (expressed as words like "carcinogen", not as codes)
- some electronic properties
- 1-step retrosynthesis
- reaction outcome prediction
- natural language caption to molecule
- natural product elucidation (formula + organism to SMILES)
- blood-brain barrier permeability
For example, you can ask "Propose a molecule with a pKa of 9.2" or "Modify CCCCC(O)=O to increase its pKa by about 1 unit." You cannot ask it "What is the pKa of CCCCC(O)=O?" Questions that lie significantly beyond these tasks are likely to fail.
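Below is a minimal inference sketch using the standard Hugging Face transformers chat API. The repo id, dtype, and generation budget are illustrative assumptions; adjust them for your hardware, and note that the model's reasoning traces can be long.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "futurehouse/ether0"  # assumed repo id; use this repository's id if it differs

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # a 24B model in bf16 needs roughly 48 GB of GPU memory
    device_map="auto",           # requires the accelerate package
)

# Questions are plain English; any molecules inside them are written as SMILES.
messages = [{"role": "user", "content": "Propose a molecule with a pKa of 9.2"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Leave generous room for new tokens so the reasoning trace is not truncated.
outputs = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```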
## Limitations
It does not know general synonyms and has poor textbook knowledge (e.g., it does not perform especially well on ChemBench). For best results, input molecules as SMILES: if you refer to molecules by their common names, the model may reason over an incorrect structure, giving poor results. For example, we have observed that the model often confuses lysine and glutamic acid when asked about them by name, but it correctly reasons about their chemistry when their structures are provided as SMILES.
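If you only have a candidate SMILES, a quick validity check avoids prompting the model with a malformed structure. The sketch below uses RDKit (not a dependency of this model, just a convenient assumption) to validate and canonicalize a SMILES before it goes into a prompt; name-to-structure lookup itself is best done against a database such as PubChem.

```python
from rdkit import Chem

def canonical_smiles(smiles: str) -> str:
    """Validate a SMILES string and return RDKit's canonical form."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"Invalid SMILES: {smiles}")
    return Chem.MolToSmiles(mol)

# Lysine given by structure rather than by name, so the model cannot
# confuse it with another amino acid:
print(canonical_smiles("NCCCCC(N)C(=O)O"))
```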
## Training data and details
See our preprint for details on the data and training process.
## Safety
We performed refusal post-training for compounds listed on OPCW Schedules 1 and 2. Because the model knows pharmacokinetics, it can modulate toxicity; however, since the structures of toxic or narcotic compounds are generally known, we do not consider this a significant safety risk. The model provides no uplift on "tacit knowledge" tasks such as purification, scale-up, or processing beyond what a web search or a similarly sized language model would provide.
## License
Open-weights (Apache 2.0) for any use.