
As a cheap and efficient alternative to hardware-heavy approaches such as laser scanning, the field of computer vision offers methods for extracting full 3D models from overlapping digital images (known as multiple view reconstruction).

Recently, immense progress has been made on algorithms for multiple view reconstruction. They are now robust and accurate to a point that allows general purpose production use. The open source tools we are currently working with are:

  • Bundler (for reconstruction of original camera locations and parameters)
  • PMVS2 (for generating a dense 3D point cloud model from Bundler's output)
  • MeshLab (for interactively refining and meshing the point cloud into a polygonal, textured model via surface reconstruction)
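As a rough illustration of how the first two tools chain together, the pipeline can be scripted along the following lines. Note that the script and binary names (RunBundler.sh, Bundle2PMVS, pmvs2) and their arguments vary between releases and platforms, so treat them as placeholders rather than a definitive recipe:

```python
import subprocess

def pipeline_commands(image_dir, pmvs_dir="pmvs"):
    """Return the sequence of shell commands, in order, that takes a
    directory of overlapping photos to a dense point cloud. Command
    names and arguments are placeholders for the actual installation."""
    return [
        # 1. Recover camera positions and a sparse point cloud.
        ["RunBundler.sh", image_dir],
        # 2. Convert Bundler's output into PMVS2's input layout.
        ["Bundle2PMVS", "list.txt", "bundle/bundle.out", pmvs_dir],
        # 3. Densify the sparse reconstruction into a full point model.
        ["pmvs2", pmvs_dir + "/", "options.txt"],
    ]

def run_pipeline(image_dir):
    """Run each stage in order, aborting on the first failure."""
    for cmd in pipeline_commands(image_dir):
        subprocess.run(cmd, check=True)
```

The resulting dense point cloud (a PLY file) is then loaded into MeshLab for the interactive meshing and texturing step.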

 

Fig.: A 3D textured model of the ancient mass burial at Weymouth, derived from overlapping images.

 

See also this video on YouTube for an animated impression of the 3D model (produced with ParaView).

 

We have had very promising results with these tools and identified the following specific advantages:

  • Cheap: All it takes is a digital camera that can embed focal length and CCD width in the EXIF data.
  • Robust: Can handle combinations of different cameras, fluctuating lighting conditions and accidental occlusions.
  • Versatile: Whatever can be photographed in an overlapping series can be modelled.
  • Scale-free: Works with microscopy as well as satellite images, as long as there is a series of overlapping images taken from different viewpoints and angles.

Currently, the following disadvantages exist:

  • No georeferencing: Models live in their own coordinate space and have to be transformed manually, e.g. in a 3D CAD application.
  • Command line tools only: Several tools have to be run in sequence, with output passed manually between them, to produce the final model.
  • Processing time: Complex scenes can take days to process at high resolution.
  • Manual retouching needed: Current surface reconstruction algorithms are optimized for dense and regular laser scanning data.
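To illustrate the georeferencing gap: a model can be moved towards a real-world frame with a simple similarity transform (uniform scale, rotation about the vertical axis, translation). The sketch below applies hand-supplied parameters; a proper solution would estimate them from surveyed control points visible in the scene:

```python
import math

def georeference(points, scale, heading_deg, tx, ty, tz):
    """Apply a simple similarity transform to a list of (x, y, z)
    model coordinates: uniform scale, rotation about the vertical
    axis by heading_deg, then translation by (tx, ty, tz).
    Parameters are supplied by hand in this sketch."""
    c = math.cos(math.radians(heading_deg))
    s = math.sin(math.radians(heading_deg))
    out = []
    for x, y, z in points:
        xr = scale * (c * x - s * y) + tx
        yr = scale * (s * x + c * y) + ty
        zr = scale * z + tz
        out.append((xr, yr, zr))
    return out
```

With surveyed control points, the same parameters could be solved for by least squares instead of being typed in.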

In the future, we hope to realize the following potential:

  • Replace photogrammetry tools: Computer vision supersedes classical photogrammetry, and the generated 3D models can be used to derive typical photogrammetry products such as orthophotos.
  • Derive volumetric models and products: Casting the 3D model into voxel space will allow us to derive cross sections, delineate features in 3D, etc.
  • Quick-and-easy field documentation: A simple GUI and the ability to rapidly generate low-resolution model previews would make this technology an essential tool for field documentation.
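A first step towards the volumetric products mentioned above can be sketched as a naive binning of the point cloud into voxels, from which a horizontal cross section is read off. Real tooling would operate on the meshed surface and handle occupancy more carefully; this only bins raw points:

```python
def voxelize(points, voxel_size):
    """Map each (x, y, z) point to an integer voxel index and
    return the set of occupied voxels."""
    occupied = set()
    for x, y, z in points:
        occupied.add((int(x // voxel_size),
                      int(y // voxel_size),
                      int(z // voxel_size)))
    return occupied

def cross_section(occupied, z_index):
    """Return the (x, y) voxel footprint at one horizontal slice."""
    return {(i, j) for (i, j, k) in occupied if k == z_index}
```

From such slices, outlines of features could be delineated layer by layer, much like stratigraphic drawings.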

 

We are looking for project partners and sponsors to help us boost development, find solutions for existing problems and develop the full potential of this project.

If you want to know more about this project or wish to get involved in its development, please get in touch with us.