Our benchmark dataset is designed to evaluate robust multi-view geometry and novel view synthesis algorithms under a variety of conditions. The full dataset is split into two main archives (see the download links below).
The dataset includes three main types of scenes, grouped under the top-level Benchmark/ folder:
Inside each of the scanning/, indoors/, and outdoors/ folders, the data is further divided into subfolders:

- part_1/, part_2/, ..., part_N/ – Each part contains one or more video sequences captured with the same rotation of the 360° user camera. Intrinsic parameters for the 360° camera view are provided per part.
  - LEFT/ – Left camera images
  - RIGHT/ – Right camera images
  - imu/ – Inertial measurement unit (IMU) readings
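The layout above can be traversed programmatically. Below is a minimal sketch that enumerates every part_*/ folder and its camera/IMU subfolders; the helper name `list_parts` and the dataset root path are our own assumptions, not part of the official tooling.

```python
from pathlib import Path

def list_parts(root):
    """Yield (scene_type, part_name, subfolders) for each part_*/ directory.

    Assumes the layout described above:
    <root>/{scanning,indoors,outdoors}/part_<i>/{LEFT, RIGHT, imu}.
    """
    root = Path(root)
    for scene_type in ("scanning", "indoors", "outdoors"):
        scene_dir = root / scene_type
        if not scene_dir.is_dir():
            continue  # a scene type may be absent in a partial download
        for part in sorted(scene_dir.glob("part_*")):
            # Collect the immediate subfolders (e.g. LEFT, RIGHT, imu)
            subs = sorted(p.name for p in part.iterdir() if p.is_dir())
            yield scene_type, part.name, subs
```

For example, `list_parts("Benchmark")` would yield one tuple per part, which is a convenient starting point for loading stereo pairs and IMU readings together.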
- stereo-val.zip (Coming on Monday 6/22/25)
- stereo-test.zip (Coming on Monday 6/22/25)
We also propose a challenging benchmark for Novel View Synthesis (NVS) based on six sequences selected from our Princeton365 dataset. These scenes involve complete 360° camera rotations around reflective or non-Lambertian objects, making them well suited for evaluating methods that aim to reconstruct challenging materials under diverse lighting.
The NVS benchmark data and evaluation trajectories can be downloaded here:
NVS Benchmark Sequences