# L2G-Det

**From Local Matches to Global Masks: Novel Instance Detection in Open-World Scenes**

arXiv, Project

Detecting and segmenting novel object instances in open-world environments is a fundamental problem in robotic perception. Given only a small set of template images, a robot must locate and segment a specific object instance in a cluttered, previously unseen scene. Existing proposal-based approaches are highly sensitive to proposal quality and often fail under occlusion and background clutter. We propose L2G-Det, a local-to-global instance detection framework that bypasses explicit object proposals by leveraging dense patch-level matching between templates and the query image. Locally matched patches generate candidate points, which are refined through a candidate selection module to suppress false positives. The filtered points are then used to prompt an augmented Segment Anything Model (SAM) with instance-specific object tokens, enabling reliable reconstruction of complete instance masks. Experiments demonstrate improved performance over proposal-based methods in challenging open-world settings.

## Framework

## 📸 Detection Examples

RoboTools

High Resolution

## Getting Started

### Prerequisites

- Python 3.10
- torch (tested with 2.6)
- torchvision

### Installation

The code has been tested on Ubuntu 20.04.

```shell
git clone https://github.com/IRVLUTD/L2G.git
cd L2G
# Create the conda environment
conda create -n L2G python=3.10
conda activate L2G
# Install PyTorch
pip install torch==2.6.0+cu118 torchvision==0.21.0+cu118 torchaudio==2.6.0+cu118 --index-url https://download.pytorch.org/whl/cu118
# Install the remaining packages
pip install -e .
```

## Preparing models

Download the pretrained models and place them in the `checkpoints` folder as follows:

```
checkpoints/
├── dinov3/
│   └── dinov3_vitl16_pretrain_*.pt
│
├── SAM/
│   └── sam2.1_hiera_large.pt
│
├── Adapter/
│   ├── High_Res_Adapter.pt
│   └── RoboTools_Adapter.pt
│
├── Object_tokens_High_Res/
│   ├── full_mask_tokens_000001.pt
│   ├── full_mask_tokens_000002.pt
│   ├── ...
│
└── Object_tokens_RoboTools/
    ├── full_mask_tokens_000001.pt
    ├── full_mask_tokens_000002.pt
    ├── ...
```
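As a quick sanity check before running anything, a small helper along these lines can verify that the expected files are present. This script is not part of the repository; the paths simply mirror the tree above:

```python
from pathlib import Path

# Expected checkpoint layout, mirroring the tree above.
REQUIRED = [
    "dinov3",                     # DINOv3 backbone weights (dinov3_vitl16_pretrain_*.pt)
    "SAM/sam2.1_hiera_large.pt",  # SAM 2.1 checkpoint
    "Adapter/High_Res_Adapter.pt",
    "Adapter/RoboTools_Adapter.pt",
    "Object_tokens_High_Res",     # per-object token files
    "Object_tokens_RoboTools",
]

def missing_checkpoints(root="checkpoints"):
    """Return the required entries that are absent under `root`."""
    root = Path(root)
    return [entry for entry in REQUIRED if not (root / entry).exists()]

if __name__ == "__main__":
    missing = missing_checkpoints()
    if missing:
        print("Missing:", *missing, sep="\n  ")
    else:
        print("All checkpoints found.")
```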

## Preparing Datasets

### Setting Up Detection Datasets

The RoboTools dataset is divided into 24 scenes (Scenes 1–24). Download the dataset:

The High_Resolution dataset is divided into 22 scenes (Hard: Scenes 1–10; Easy: Scenes 11–22). Download the dataset:

Place them in the `data` folder as follows:

```
data/
│
├── Query/
│   ├── High_Resolution/
│   │   ├── 000001/
│   │   ├── 000002/
│   │   └── ...
│   │
│   └── RoboTools/
│       ├── 000001/
│       ├── 000002/
│       └── ...
│
└── Templates/
    ├── High_Resolution_all/
    │   ├── rgb/
    │   │   ├── 000001/
    │   │   ├── 000002/
    │   │   └── ...
    │   └── mask/
    │       ├── 000001/
    │       ├── 000002/
    │       └── ...
    │
    └── RoboTools_all/
        ├── rgb/
        │   ├── 000001/
        │   ├── 000002/
        │   └── ...
        │
        └── mask/
            ├── 000001/
            ├── 000002/
            └── ...
```
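In this layout, each template object folder appears under both `rgb/` and `mask/` with the same zero-padded ID. A generic sketch (not part of the repository) for listing the object IDs that have both modalities:

```python
from pathlib import Path

def list_objects(template_root):
    """Return object IDs that have both an rgb/ and a mask/ subfolder."""
    root = Path(template_root)
    rgb_ids = {p.name for p in (root / "rgb").iterdir() if p.is_dir()}
    mask_ids = {p.name for p in (root / "mask").iterdir() if p.is_dir()}
    return sorted(rgb_ids & mask_ids)
```

For example, `list_objects("data/Templates/RoboTools_all")` would return the IDs (`000001`, `000002`, …) for which both RGB views and masks are present.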

## Usage

### Demo

Run the demo directly:

```shell
python run.py --config Demo.yaml
```

or check inference on a single image.

### Benchmark

Sample the template images:

```shell
cd tools

# --n 8          : number of templates to sample per object
# --datasets     : dataset name (e.g., RoboTools or High_Resolution)
python sample_templates.py --n 8 --datasets RoboTools
```
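Sampling a fixed number of templates per object typically means picking views spread evenly across the recorded sequence. A simplified illustration of that idea (`sample_templates.py` is the authoritative implementation; this sketch is only for intuition):

```python
def sample_evenly(items, n):
    """Pick n items spread evenly across a sequence.

    Returns all items unchanged when n >= len(items).
    """
    if n >= len(items):
        return list(items)
    step = len(items) / n  # fractional stride across the sequence
    return [items[int(i * step)] for i in range(n)]
```

For instance, sampling 8 templates from 16 views picks every second view.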

Run L2G on the benchmark:

```shell
python run.py --config RoboTools.yaml  # or High_Res.yaml
```

Then merge the per-scene results with `tools/utils/merge.py`. The ground-truth files and our predictions can be downloaded from this link; run `eval_results.py` to evaluate them.
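Evaluation of this kind generally compares predicted masks against ground truth by intersection-over-union. A minimal sketch of that metric (this is not the repository's `eval_results.py`):

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean masks of equal shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0  # both masks empty
    return np.logical_and(pred, gt).sum() / union
```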

### Create the template-based training images

Download the background images from the link. Among these, `Backgrounds_2048` is constructed by cropping local regions from the original high-resolution background images, yielding images of size 2048 × 1536.

```shell
# Create the template-based training images on RoboTools
python tools/Compose_objects.py \
  --objects-root data/Templates/RoboTools_all \
  --backgrounds Backgrounds_2048 \
  --out-root RoboTools_create \
  --bbox-out-root RoboTools_create_bbox \
  --start-object-id 1 \
  --end-object-id 20
```
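Conceptually, composing a training image means pasting the masked object pixels onto a background crop and recording the resulting bounding box. A simplified NumPy sketch of that step (independent of `Compose_objects.py`, which handles scaling, multiple objects, and output files):

```python
import numpy as np

def paste_object(background, obj_rgb, obj_mask, top, left):
    """Paste masked object pixels onto a copy of the background.

    Returns the composed image and the (x1, y1, x2, y2) bounding box
    of the pasted crop.
    """
    out = background.copy()
    h, w = obj_mask.shape
    region = out[top:top + h, left:left + w]  # view into the copy
    region[obj_mask > 0] = obj_rgb[obj_mask > 0]
    return out, (left, top, left + w, top + h)
```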

### Training

Check the training demo in the notebooks.

## Real-World Robot Experiment

Click the image below to watch the video.

## Acknowledgments

This project is based on the following repositories:

## License

This project is released under the Apache-2.0 license (see `LICENSE`); the cctorch component is under BSD-3-Clause (see `LICENSE_cctorch`).
