EDGS: Eliminating Densification for Efficient Convergence of 3DGS
Project Page
How to Use This Demo
- Upload a front-facing video or a folder of images of a static scene.
- Use the sliders to configure the number of reference views, correspondences, and optimization steps.
- First, press Preprocess Input to extract frames (for videos) and estimate camera poses with COLMAP.
- Then click Start Reconstruction to launch the reconstruction pipeline.
- Watch the training visualization and explore the 3D model. Note: if you see nothing in the 3D model viewer, try rotating or zooming; sometimes the initial camera orientation is off.
Best for scenes with small camera motion. For full 360° or large-scale scenes, we recommend the Colab version (see the project page).
Alternatively, try an Example Video
(Slider ranges in the demo UI: reference views 4–32; correspondences 5,000–30,000; optimization steps 100–5,000.)
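For orientation, the three configuration sliders could be declared in Gradio roughly as follows. This is a minimal sketch: only the min/max ranges come from the demo page; the labels, defaults, and step sizes here are assumptions.

```python
import gradio as gr

# Sketch of the demo's configuration sliders. Ranges match the page;
# labels, defaults, and steps are assumptions, not the demo's source.
with gr.Blocks() as demo:
    ref_views = gr.Slider(minimum=4, maximum=32, step=1, value=16,
                          label="Number of reference views")
    correspondences = gr.Slider(minimum=5000, maximum=30000, step=1000,
                                value=20000, label="Number of correspondences")
    steps = gr.Slider(minimum=100, maximum=5000, step=100, value=1000,
                      label="Optimization steps")

demo.launch()
```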
The demo presents three output panes: Training Visualization, Final 3D Model, and Output Files.
Detailed Overview
If you uploaded a video, it is automatically subsampled into a small set of frames (default: 16), as in the sketch below.
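A generic way to do this uniform subsampling with OpenCV is shown here; the demo's actual sampling strategy may differ, and the function name is ours.

```python
import cv2
import numpy as np

def extract_frames(video_path: str, n_frames: int = 16) -> list:
    """Uniformly sample n_frames from a video (a generic sketch;
    the demo's actual frame selection may differ)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, n_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```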
The model pipeline:
- Runs PyCOLMAP to estimate camera intrinsics & poses (~3–7 seconds for <16 images); a minimal sketch follows this list.
- Computes 2D-2D correspondences between views (see the generic matching example below). More correspondences generally improve quality.
- Optimizes a 3D Gaussian Splatting model for the configured number of optimization steps.
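For reference, a bare-bones PyCOLMAP reconstruction looks roughly like this. The paths are placeholders, and the demo's actual invocation and options are not reproduced here.

```python
import pathlib
import pycolmap

image_dir = pathlib.Path("frames")   # extracted frames (placeholder path)
output_dir = pathlib.Path("sparse")
output_dir.mkdir(exist_ok=True)
database = output_dir / "database.db"

# Detect features, match them, and run incremental SfM to recover
# camera intrinsics and poses for each image.
pycolmap.extract_features(database, image_dir)
pycolmap.match_exhaustive(database)
maps = pycolmap.incremental_mapping(database, image_dir, output_dir)
if maps:
    maps[0].write(output_dir)  # save the first reconstruction
```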
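EDGS's own matcher is not reproduced here; as a generic stand-in, 2D-2D correspondences between two views can be computed with SIFT and Lowe's ratio test:

```python
import cv2

# Generic 2D-2D matching between two views (an illustrative stand-in;
# EDGS may use a different, denser matcher).
img1 = cv2.imread("frames/view0.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("frames/view1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)

# Lowe's ratio test keeps only distinctive matches.
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
points = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
print(f"{len(points)} correspondences")
```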
Training Visualization
You will see a visualization of the entire training process in the "Training Video" pane.
3D Model
The 3D model is shown in the right viewer. You can explore it interactively:
- On PC: WASD keys, arrow keys, and mouse clicks
- On mobile: pan and pinch to zoom