
MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training

Project Page | Paper

MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training
Xingyi He, Hao Yu, Sida Peng, Dongli Tan, Zehong Shen, Xiaowei Zhou, Hujun Bao
arXiv 2025


TODO List

  • Pre-trained models and inference code
  • HuggingFace demo
  • Data generation and training code
  • Fine-tuning code to further train on your own data
  • Incorporate more synthetic modalities and image generation methods

Quick Start

HuggingFace demo for MatchAnything

Setup

Create the Python environment by:

conda env create -f environment.yaml
conda activate env

We have tested our code on devices with CUDA 11.7.
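
To quickly confirm that the environment and GPU are usable (a minimal check, assuming environment.yaml installs PyTorch, which the inference code relies on), run:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"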

Download the pretrained weights from here and place them under the repo directory. Then unzip them by running the following commands:

unzip weights.zip
rm -rf weights.zip
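
As a quick sanity check (the exact checkpoint filenames depend on the release you downloaded):

# The weights/ directory should now exist under the repo root
test -d weights && ls weights || echo "weights/ not found, check the download and unzip step"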

Test

We evaluate the models pretrained by our framework using a single set of network weights across all cross-modality matching and registration tasks.

Data Preparation

Download the test_data directory from here and place it under repo_directory/data. Then, unzip all datasets by:

cd repo_directory/data/test_data

for file in *.zip; do
    unzip "$file" && rm "$file"
done

The data structure should look like:

repo_directory/data/test_data
    - Liver_CT-MR
    - havard_medical_matching
    - remote_sense_thermal
    - MTV_cross_modal_data
    - thermal_visible_ground
    - visible_sar_dataset
    - visible_vectorized_map
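
As a quick sanity check (a minimal sketch based on the directory names listed above), you can verify that every dataset was unzipped:

cd repo_directory/data/test_data
for d in Liver_CT-MR havard_medical_matching remote_sense_thermal \
         MTV_cross_modal_data thermal_visible_ground visible_sar_dataset \
         visible_vectorized_map; do
    [ -d "$d" ] || echo "Missing dataset directory: $d"
done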

Evaluation

# For tomography datasets:
sh scripts/evaluate/eval_liver_ct_mr.sh
sh scripts/evaluate/eval_harvard_brain.sh

# For visible-thermal datasets:
sh scripts/evaluate/eval_thermal_remote_sense.sh
sh scripts/evaluate/eval_thermal_mtv.sh
sh scripts/evaluate/eval_thermal_ground.sh

# For visible-sar dataset:
sh scripts/evaluate/eval_visible_sar.sh

# For visible-vectorized map dataset:
sh scripts/evaluate/eval_visible_vectorized_map.sh
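
To run every benchmark in one pass and keep per-dataset logs (a minimal sketch using the scripts listed above; the logs/ directory name is our own choice):

mkdir -p logs
for script in eval_liver_ct_mr eval_harvard_brain eval_thermal_remote_sense \
              eval_thermal_mtv eval_thermal_ground eval_visible_sar \
              eval_visible_vectorized_map; do
    sh "scripts/evaluate/${script}.sh" 2>&1 | tee "logs/${script}.log"
done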

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{he2025matchanything,
  title={MatchAnything: Universal Cross-Modality Image Matching with Large-Scale Pre-Training},
  author={He, Xingyi and Yu, Hao and Peng, Sida and Tan, Dongli and Shen, Zehong and Bao, Hujun and Zhou, Xiaowei},
  booktitle={arXiv},
  year={2025}
}

Acknowledgement

We thank the authors of ELoFTR and ROMA for their great work, without which our project and code would not have been possible.