Minkyun Seo*,
Hyungtae Lim*,
Kanghee Lee,
Luca Carlone,
Jaesik Park
Towards zero-shot and beyond! 🚀
Official repository of BUFFER-X, a zero-shot point cloud registration method
that generalizes across diverse scenes without retraining or tuning.
After cloning this repository:
git clone https://github.com/MIT-SPARK/BUFFER-X && cd BUFFER-X
Set up your own virtual environment (e.g., conda create -n bufferx python=3.x, or use an NVIDIA Docker environment) and then install the required libraries. We provide the following shell scripts:
[Python 3.8, PyTorch 1.9.1, CUDA 11.1 on Ubuntu 22.04]
./scripts/install_py3_8_cuda11_1.sh
[Python 3.10, PyTorch 2.7.1, CUDA 11.8, cuDNN 9.1.0 on Ubuntu 24.04]
./scripts/install_py3_10_cuda11_8.sh
[Python 3.11, PyTorch 2.6.0, CUDA 12.4, cuDNN 9.1.0 on Ubuntu 24.04]
./scripts/install_py3_11_cuda12_4.sh
You can easily run our generalization benchmark with BUFFER-X. First, download the pretrained models using the following script:
./scripts/download_pretrained_models.sh
Detailed explanation of the file directory
The structure should be as follows:
BUFFER-X
├── snapshot   # <- this directory is generated by the command above
│   ├── threedmatch
│   │   ├── Desc
│   │   └── Pose
│   └── kitti
│       ├── Desc
│       └── Pose
├── config
├── dataset
└── ...
Next, to evaluate BUFFER-X on diverse scenes, download the preprocessed data by running the command below. It requires around 130 GB.
To include all other datasets (i.e., Scannetpp_iphone and Scannetpp_faro), approximately 150 GB of additional storage is required.
Due to data copyrights, we cannot provide preprocessed data for ScanNet++, so if you want to reproduce all results, please refer to here
./scripts/download_all_data.sh
Then, you can run the following command:
python test.py --dataset <LIST OF DATASET NAMES> --experiment_id <EXPERIMENT ID> --verbose
Experiment ID refers to the saved model’s filename. Provided snapshots include threedmatch and kitti, each trained on the corresponding dataset.
e.g.,
python test.py --dataset 3DMatch TIERS Oxford MIT --experiment_id threedmatch --verbose
You can also run the evaluation script for all datasets at once by using the provided script:
./scripts/eval_all.sh <EXPERIMENT ID>
For heterogeneous evaluation settings introduced in the extended version, use:
./scripts/eval_all_hetero.sh <EXPERIMENT ID>
Detailed explanation of the configuration
- --dataset: The name of the dataset(s) to test on. Must be one of: 3DMatch, 3DLoMatch, Scannetpp_iphone, Scannetpp_faro, TIERS, KITTI, WOD, MIT, KAIST, ETH, Oxford, TIERS_hetero, KAIST_hetero.
- --experiment_id: The ID of the experiment to use for testing.
- --pose_estimator: Pose estimation backend. Choices: ransac (default) or kiss_matcher.
- --gpu: GPU device index to use (default: 0).
- --num_points_per_patch, --num_scales, --num_fps, --search_radius_thresholds: Override the corresponding config values for ablation studies.
For heterogeneous evaluation, additional arguments are:
- --src_sensor: Source sensor name (e.g., os0_128, Aeva).
- --tgt_sensor: Target sensor name (e.g., os1_64, Avia).
e.g.,
python test.py --dataset TIERS_hetero --src_sensor os0_128 --tgt_sensor os1_64 --experiment_id threedmatch --verbose
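Conceptually, the ablation flags above override values on the loaded config object. The sketch below is a minimal illustration of that pattern; the helper name apply_overrides and the attribute names are hypothetical stand-ins, not the repository's actual code:

```python
from types import SimpleNamespace


def apply_overrides(cfg, overrides):
    """Copy CLI override values onto a config object, skipping flags
    the user did not set (i.e., values left as None).
    Illustrative sketch only; the repository's mechanism may differ."""
    for key, value in overrides.items():
        if value is not None:
            setattr(cfg, key, value)
    return cfg


# Example: override num_scales for an ablation run, leave num_fps untouched.
cfg = SimpleNamespace(num_scales=3, num_fps=1024)
apply_overrides(cfg, {"num_scales": 1, "num_fps": None})
```

Skipping unset (None) flags is what lets the config file remain the single source of defaults.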
Due to the large number and variety of datasets used in our experiments, we provide detailed download instructions and folder structures in a separate document:
This branch adds support for KISS-Matcher as an alternative to RANSAC for the final pose estimation step.
Please follow the official Python installation instructions provided in the KISS-Matcher repository: https://github.com/MIT-SPARK/KISS-Matcher?tab=readme-ov-file#package-installation
Pass --pose_estimator kiss_matcher on the command line:
python test.py --dataset 3DMatch --experiment_id threedmatch --pose_estimator kiss_matcher --verbose

To use RANSAC (default behavior):

python test.py --dataset 3DMatch --experiment_id threedmatch --pose_estimator ransac --verbose

You can also set the pose estimator and its options directly in the config files (e.g., config/indoor_config.py):
cfg.match.pose_estimator = "kiss_matcher" # "ransac" or "kiss_matcher"
cfg.match.kiss_resolution = 0.3 # Voxel resolution for KISS-Matcher

Note: If kiss-matcher is not installed, the pipeline automatically falls back to RANSAC with a warning.
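This kind of fallback corresponds to a standard optional-import pattern; the sketch below is illustrative only (the module name kiss_matcher and the helper select_pose_estimator are assumptions, not the repository's actual implementation):

```python
import warnings


def select_pose_estimator(requested):
    """Return "kiss_matcher" only if its package imports cleanly;
    otherwise warn and fall back to "ransac".
    Illustrative sketch; not the repository's actual code."""
    if requested == "kiss_matcher":
        try:
            import kiss_matcher  # noqa: F401  (hypothetical module name)
            return "kiss_matcher"
        except ImportError:
            warnings.warn("kiss-matcher is not installed; falling back to RANSAC")
            return "ransac"
    return "ransac"
```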
BUFFER-X++ introduces an incremental multi-scale processing strategy that stops computing additional scales once the pose estimate is already confident enough. This reduces unnecessary descriptor extraction and speeds up inference.
The early exit is triggered when the number of RANSAC/KISS-Matcher inliers exceeds early_exit_min_inliers after the first scale.
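As a rough illustration of this loop (not the repository's actual code; scales, estimate_pose, and the threshold value are hypothetical stand-ins):

```python
def register_multiscale(scales, estimate_pose, min_inliers=50, enable_early_exit=True):
    """Confidence-aware multi-scale registration loop (illustrative sketch).

    `scales` is an iterable of scale identifiers and `estimate_pose(scale)`
    returns a (pose, num_inliers) pair; both are stand-ins for the
    repository's internals."""
    best_pose, best_inliers = None, -1
    for i, scale in enumerate(scales):
        pose, num_inliers = estimate_pose(scale)
        if num_inliers > best_inliers:
            best_pose, best_inliers = pose, num_inliers
        # Skip the remaining scales once the first estimate is confident enough.
        if enable_early_exit and i == 0 and num_inliers >= min_inliers:
            break
    return best_pose, best_inliers
```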
cfg.match.enable_early_exit = False # Enable confidence-aware early exit (default: False)
cfg.match.early_exit_min_inliers = 50 # Minimum inlier count to trigger early exit

After each test run, results are automatically saved:
- Per-sample .txt: detailed per-frame metrics (success, RTE, RRE, inlier counts, timing) under per_sample_results/<exp_name>/.
- Summary .csv: aggregated statistics (recall, RTE/RRE mean ± std, timing) saved to the root directory as full_results/results_<exp_name>_<params>_<timestamp>.csv.
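For downstream analysis, the summary CSV can be loaded with the standard library alone; a minimal sketch (the actual column names depend on the generated file):

```python
import csv


def load_summary(csv_path):
    """Read an aggregated results CSV into a list of row dictionaries,
    one per dataset/run. Illustrative helper; the real columns follow
    the metrics listed above (recall, RTE/RRE, timing)."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))
```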
BUFFER-X supports training on either the 3DMatch or KITTI dataset. As an example, run the following command to train the model:
python train.py --dataset 3DMatch
If you find our work useful in your research, please consider citing:
@article{Seo_BUFFERX_arXiv_2025,
Title={BUFFER-X: Towards Zero-Shot Point Cloud Registration in Diverse Scenes},
Author={Minkyun Seo and Hyungtae Lim and Kanghee Lee and Luca Carlone and Jaesik Park},
Journal={arXiv preprint arXiv:2503.07940},
Year={2025}
}
@misc{lim2026zeroshotpointcloudregistration,
title={Towards Zero-Shot Point Cloud Registration Across Diverse Scales, Scenes, and Sensor Setups},
author={Hyungtae Lim and Minkyun Seo and Luca Carlone and Jaesik Park},
year={2026},
eprint={2601.02759},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2601.02759},
}
This work was supported by an IITP grant (RS-2021-II211343: AI Graduate School Program at Seoul National University) (5%), and by NRF grants funded by the Korea government (MSIT) (No. 2023R1A1C200781211, 65%, and No. RS-2024-00461409, 30%).
In addition, we appreciate the open-source contributions of previous authors, and especially thank Sheng Ao, the first author of BUFFER, for allowing us to use the term 'BUFFER' as part of the title of our study.
- 03/03/2026: Refactored evaluation/testing code for cleaner structure, improved logging, and more reliable result reporting.
- 28/02/2026: Added KISS-Matcher pose solver support and confidence-aware early exit for multi-scale processing.
- 06/01/2026: Extended version of the paper has been uploaded.
- 25/07/2025: This work is selected as a Highlight paper at ICCV 2025.
- 25/06/2025: This work is accepted by ICCV 2025.