PoseGAM: Robust Unseen Object Pose Estimation via Geometry-Aware Multi-View Reasoning

King Abdullah University of Science and Technology (KAUST)

3D Models with Estimated Poses

Visualize 3D models with poses estimated by our method. Interact with the models by rotating, zooming, and panning to explore the results from different angles.

Sample 1: query image and 3D model with estimated pose

Sample 2: query image and 3D model with estimated pose

Abstract

6D object pose estimation, which predicts the transformation of an object relative to the camera, remains challenging for unseen objects. Existing approaches typically rely on explicitly constructing feature correspondences between the query image and either the object model or template images. In this work, we propose PoseGAM, a geometry-aware multi-view framework that directly predicts object pose from a query image and multiple template images, eliminating the need for explicit matching. Built upon recent multi-view-based foundation model architectures, the method integrates object geometry information through two complementary mechanisms: explicit point-based geometry and learned features from geometry representation networks. In addition, we construct a large-scale synthetic dataset containing more than 190k objects under diverse environmental conditions to enhance robustness and generalization. Extensive evaluations across multiple benchmarks demonstrate that PoseGAM achieves state-of-the-art performance.
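For reference, the pose being estimated can be written in the standard rigid-body form (the notation here is generic, not taken from the paper): the object-to-camera transformation consists of a rotation \( R \in SO(3) \) and a translation \( t \in \mathbb{R}^3 \) that map an object-frame point \( \mathbf{x}_{\text{obj}} \) into camera coordinates,

\[
T_{\text{query}} = \begin{bmatrix} R & t \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \in SE(3),
\qquad
\mathbf{x}_{\text{cam}} = R\,\mathbf{x}_{\text{obj}} + t .
\]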

Method Overview

The Workflow of PoseGAM. Given a query image \( I_{\text{query}} \) and an object mesh \( \mathcal{M} \), the goal is to estimate the object-to-camera transformation \( T_{\text{query}} \). A set of camera poses is sampled around \( \mathcal{M} \) to render images \( \mathcal{V} \) and corresponding point maps \( \mathcal{P} \). Both \( I_{\text{query}} \) (with its foreground segmented) and \( \mathcal{V} \) are encoded into image tokens, each paired with a camera token. For the rendered views \( \mathcal{V} \), the camera tokens are computed from known intrinsics and extrinsics, whereas for \( I_{\text{query}} \) the camera token is a learnable embedding. A geometry feature extractor produces a global object representation, which is distributed to camera views to form view-specific features \( \mathcal{F} \). These, together with point maps \( \mathcal{P} \), are encoded as key–value tokens for cross-attention. The output camera tokens are decoded to predict the camera-to-object transformation \( T^{\text{Cam}}_{\text{Obj}} \), from which the final object-to-camera pose \( T_{\text{query}} \) is obtained by matrix inversion.
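To make the template-preparation and final inversion steps concrete, below is a minimal NumPy sketch under stated assumptions: the Fibonacci-sphere sampling, the sphere radius, the number of views (42), and the helper names sample_viewpoints and look_at_extrinsic are illustrative choices rather than details from the paper, and the rendering and transformer stages are omitted entirely.

    import numpy as np

    def sample_viewpoints(n, radius=1.0):
        # Quasi-uniform camera centers on a sphere around the object
        # (Fibonacci lattice). The actual sampling scheme and radius used
        # by PoseGAM are not specified here; this is an assumption.
        i = np.arange(n) + 0.5
        phi = np.arccos(1.0 - 2.0 * i / n)          # polar angle
        theta = np.pi * (1.0 + 5.0 ** 0.5) * i      # golden-angle azimuth
        return radius * np.stack([np.sin(phi) * np.cos(theta),
                                  np.sin(phi) * np.sin(theta),
                                  np.cos(phi)], axis=-1)

    def look_at_extrinsic(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
        # Object-to-camera extrinsic (4x4) for a camera at `eye` looking at
        # `target`, OpenCV convention (+z forward, +y down).
        z = target - eye
        z = z / np.linalg.norm(z)
        x = np.cross(z, up)
        x = x / np.linalg.norm(x)
        y = np.cross(z, x)
        T = np.eye(4)
        T[:3, :3] = np.stack([x, y, z])             # rows = camera axes in world
        T[:3, 3] = T[:3, :3] @ (-eye)
        return T

    # Template camera poses used to render the views V and point maps P.
    templates = [look_at_extrinsic(c) for c in sample_viewpoints(42, radius=2.0)]

    # The decoder predicts a camera-to-object transform for the query view;
    # the object-to-camera pose T_query is recovered by matrix inversion.
    T_cam_obj = np.eye(4)                           # placeholder for the network output
    T_query = np.linalg.inv(T_cam_obj)

The known intrinsics and extrinsics of these template cameras are what feed the rendered views' camera tokens, while the query view's camera token is left as a learnable embedding, as described above.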

Method Comparison Results

Comparison of different methods across four sequences.

Sequence 1: Query, Mask, Ground Truth, MegaPose, GenFlow, FoundPose, GigaPose, VGGT, Ours ⭐

Citation


    @article{chen2025PoseGAM,
        title={PoseGAM: Robust Unseen Object Pose Estimation via Geometry-Aware Multi-View Reasoning},
        author={Chen, Jianqi and Zhang, Biao and Tang, Xiangjun and Wonka, Peter},
        journal={arXiv preprint arXiv:xxxx.xxxxx},
        year={2025}
    }