
BUFFER-X (ICCV 2025, 🌟Highlight🌟)

Minkyun Seo*, Hyungtae Lim*, Kanghee Lee, Luca Carlone, Jaesik Park




Towards zero-shot and beyond! 🚀
Official repository of BUFFER-X, a zero-shot point cloud registration method
that generalizes across diverse scenes without retraining or tuning.


🧭 Structure Overview

*(Figure 1: structure overview of BUFFER-X)*

💻 Installation of BUFFER-X

Set up environment

After cloning this repository:

git clone https://github.com/MIT-SPARK/BUFFER-X && cd BUFFER-X

Set up your own virtual environment (e.g., `conda create -n bufferx python=3.x`, or use your NVIDIA Docker environment) and then install the required libraries. We provide installation shell scripts as follows.

[Python 3.8, Pytorch 1.9.1, CUDA 11.1 on Ubuntu 22.04]

./scripts/install_py3_8_cuda11_1.sh

[Python 3.10, Pytorch 2.7.1, CUDA 11.8, Cudnn 9.1.0 on Ubuntu 24.04]

./scripts/install_py3_10_cuda11_8.sh

[Python 3.11, Pytorch 2.6.0, CUDA 12.4, Cudnn 9.1.0 on Ubuntu 24.04]

./scripts/install_py3_11_cuda12_4.sh

🚀 Quick Start

Training and Test

Test on Our Generalization Benchmark

You can easily run our generalization benchmark with BUFFER-X. First, download the pretrained models using the following script:

./scripts/download_pretrained_models.sh
Detailed explanation of the file directory

The structure should be as follows:

  • BUFFER-X
    • snapshot # <- this directory is generated by the command above
      • threedmatch
        • Desc
        • Pose
      • kitti
        • Desc
        • Pose
    • config
    • dataset
    • ...

Next, to evaluate BUFFER-X on diverse scenes, download the preprocessed data by running the command below. It requires around 130 GB of storage. Including the remaining datasets (i.e., Scannetpp_iphone, Scannetpp_faro) requires approximately 150 GB more. Due to data copyrights, we cannot provide preprocessed data for ScanNet++; if you want to reproduce the full results, please refer to here

./scripts/download_all_data.sh

Then, you can run the test command as follows:

python test.py --dataset <LIST OF DATASET NAMES> --experiment_id <EXPERIMENT ID> --verbose

The experiment ID refers to the saved model's filename. The provided snapshots include threedmatch and kitti, each trained on the corresponding dataset.

e.g.,

python test.py --dataset 3DMatch TIERS Oxford MIT --experiment_id threedmatch --verbose

You can also run the evaluation script for all datasets at once by using the provided script:

./scripts/eval_all.sh <EXPERIMENT ID>

For heterogeneous evaluation settings introduced in the extended version, use:

./scripts/eval_all_hetero.sh <EXPERIMENT ID>
Detailed explanation of the configuration options
  • --dataset: The name of the dataset to test on. Must be one of:

    • 3DMatch
    • 3DLoMatch
    • Scannetpp_iphone
    • Scannetpp_faro
    • TIERS
    • KITTI
    • WOD
    • MIT
    • KAIST
    • ETH
    • Oxford
    • TIERS_hetero
    • KAIST_hetero
  • --experiment_id: The ID of the experiment to use for testing.

  • --pose_estimator: Pose estimation backend. Choices: ransac (default) or kiss_matcher.

  • --gpu: GPU device index to use (default: 0).

  • --num_points_per_patch, --num_scales, --num_fps, --search_radius_thresholds: Override the corresponding config values for ablation studies.

For heterogeneous evaluation, additional arguments are:

  • --src_sensor: Source sensor name (e.g., os0_128, Aeva).
  • --tgt_sensor: Target sensor name (e.g., os1_64, Avia).

e.g.,

python test.py --dataset TIERS_hetero --src_sensor os0_128 --tgt_sensor os1_64 --experiment_id threedmatch --verbose

Due to the large number and variety of datasets used in our experiments, we provide detailed download instructions and folder structures in a separate document:

DATASETS.md



Using KISS-Matcher as the Pose Solver

This branch adds support for KISS-Matcher as an alternative to RANSAC for the final pose estimation step.

Installation

Please follow the official Python installation instructions provided in the KISS-Matcher repository: https://github.com/MIT-SPARK/KISS-Matcher?tab=readme-ov-file#package-installation

Usage

Pass --pose_estimator kiss_matcher on the command line:

python test.py --dataset 3DMatch --experiment_id threedmatch --pose_estimator kiss_matcher --verbose

To use RANSAC (default behavior):

python test.py --dataset 3DMatch --experiment_id threedmatch --pose_estimator ransac --verbose

Configuration

You can also set the pose estimator and its options directly in the config files (e.g., config/indoor_config.py):

cfg.match.pose_estimator = "kiss_matcher"  # "ransac" or "kiss_matcher"
cfg.match.kiss_resolution = 0.3            # Voxel resolution for KISS-Matcher

Note: If kiss-matcher is not installed, the pipeline automatically falls back to RANSAC with a warning.
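This fallback can be sketched as follows. Note this is an illustrative sketch, not the repository's actual code; the function name `choose_pose_estimator` is hypothetical, and only the described behavior (warn and fall back to RANSAC when `kiss_matcher` is not importable) is taken from the text above.

```python
# Illustrative sketch (not the repository's code) of the fallback described
# above: if the kiss_matcher package cannot be imported, warn and use RANSAC.
import importlib.util
import warnings

def choose_pose_estimator(requested: str) -> str:
    """Return the requested estimator, or 'ransac' if kiss_matcher is unavailable."""
    if requested == "kiss_matcher" and importlib.util.find_spec("kiss_matcher") is None:
        warnings.warn("kiss_matcher is not installed; falling back to RANSAC")
        return "ransac"
    return requested
```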


Early Exit (Confidence-Aware Multi-Scale Processing)

BUFFER-X++ introduces an incremental multi-scale processing strategy that stops computing additional scales once the pose estimate is already confident enough. This reduces unnecessary descriptor extraction and speeds up inference.

The early exit is triggered when the number of RANSAC/KISS-Matcher inliers exceeds early_exit_min_inliers after the first scale.

Configuration

cfg.match.enable_early_exit = False   # Enable confidence-aware early exit (default: False)
cfg.match.early_exit_min_inliers = 50  # Minimum inlier count to trigger early exit
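The control flow above can be sketched as follows. This is a hedged, simplified sketch rather than the repository's implementation: `register_multiscale` and `estimate_pose` are hypothetical names, with `estimate_pose` standing in for one descriptor-extraction plus pose-estimation pass at a given scale.

```python
# Illustrative sketch (not the repository's code) of confidence-aware
# multi-scale processing: process scales in order and stop as soon as the
# inlier count exceeds the threshold, mirroring enable_early_exit and
# early_exit_min_inliers above.
def register_multiscale(scales, estimate_pose, enable_early_exit=True, min_inliers=50):
    """estimate_pose(scale) -> (pose, inlier_count); returns the last pose computed."""
    pose = None
    for scale in scales:
        pose, inliers = estimate_pose(scale)
        if enable_early_exit and inliers > min_inliers:
            break  # already confident enough; skip the remaining scales
    return pose
```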

Output Files

After each test run, results are automatically saved:

  • Per-sample .txt: detailed per-frame metrics (success, RTE, RRE, inlier counts, timing) under per_sample_results/<exp_name>/.
  • Summary .csv: aggregated statistics (recall, RTE/RRE mean ± std, timing) saved to the root directory as full_results/results_<exp_name>_<params>_<timestamp>.csv.
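The aggregation from per-sample metrics to summary statistics can be sketched as below. This is a hedged sketch: the field names (`success`, `rte`, `rre`) and the function `summarize` are illustrative, not the exact columns or code used by the repository's scripts.

```python
# Illustrative sketch of computing the aggregate statistics named above
# (recall, RTE/RRE mean and std) from per-sample metrics.
from statistics import mean, stdev

def summarize(samples):
    """samples: dicts with 'success' (bool), 'rte' (m), 'rre' (deg).
    Assumes at least two successful samples (stdev needs >= 2 points)."""
    ok = [s for s in samples if s["success"]]
    return {
        "recall": len(ok) / len(samples),          # fraction of successful pairs
        "rte_mean": mean(s["rte"] for s in ok),
        "rte_std": stdev([s["rte"] for s in ok]),
        "rre_mean": mean(s["rre"] for s in ok),
        "rre_std": stdev([s["rre"] for s in ok]),
    }
```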

Training

BUFFER-X supports training on either the 3DMatch or KITTI dataset. As an example, run the following command to train the model:

python train.py --dataset 3DMatch

📝 Citation

If you find our work useful in your research, please consider citing:

@article{Seo_BUFFERX_arXiv_2025,
Title={BUFFER-X: Towards Zero-Shot Point Cloud Registration in Diverse Scenes},
Author={Minkyun Seo and Hyungtae Lim and Kanghee Lee and Luca Carlone and Jaesik Park},
Journal={arXiv preprint arXiv:2503.07940},
Year={2025}
}

@misc{lim2026zeroshotpointcloudregistration,
title={Towards Zero-Shot Point Cloud Registration Across Diverse Scales, Scenes, and Sensor Setups}, 
author={Hyungtae Lim and Minkyun Seo and Luca Carlone and Jaesik Park},
year={2026},
eprint={2601.02759},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.02759}, 
}

🙏 Acknowledgements

This work was supported by IITP grant (RS-2021-II211343: AI Graduate School Program at Seoul National University) (5%), and by NRF grants funded by the Korea government (MSIT) (No. 2023R1A1C200781211 (65%) and No. RS-2024-00461409 (30%), respectively).

In addition, we appreciate the open-source contributions of previous authors, and especially thank Sheng Ao, the first author of BUFFER, for allowing us to use the term 'BUFFER' as part of the title of our study.


Updates

  • 03/03/2026: Refactored evaluation/testing code for cleaner structure, improved logging, and more reliable result reporting.
  • 28/02/2026: Added KISS-Matcher pose solver support and confidence-aware early exit for multi-scale processing.
  • 06/01/2026: Extended version of the paper has been uploaded.
  • 25/07/2025: This work is selected as a Highlight paper at ICCV 2025.
  • 25/06/2025: This work is accepted by ICCV 2025.