Imaging: the entire process

Using images to explore other planets is not straightforward; many aspects must be taken into account if we want to derive reliable information just from the pictures.
 
Imaging strategies

It is essential to thoroughly plan image acquisition strategies and procedures, since any shortcoming in image acquisition can be hard or even impossible to recover from by further processing. Some of these issues include:

  • Proper exposure: A landscape such as the one encountered in El Teide National Park shows large differences in dynamics (= brightness, simply speaking) that can hardly be covered by the 8 or 10 bits of dynamic range of the cameras used. We therefore combine images of (exactly) the same scene taken with different exposure times (for example 5, 20 and 80 milliseconds, with 20 being the “optimum” value that automatic exposure would suggest). The combination extends the dynamic range, so we can still represent very bright areas (e.g. reflections of the sun) or very dark areas within shadows; a sketch of how such a combination might work follows this list.
    Compare the following PanCam image captured with the auto exposure value and with four times that exposure value: the texture on the dark shadowed rocks comes out in the higher-exposed image, while everything else is fine in the auto-exposed image.
  • Proper overlap of images: The larger the overlap between images, the easier it is to combine them. Within panoramas, the aim is to have a 10-20% overlap area. During the field trials, images were taken every 2 m, and at shorter intervals during turns, to keep the overlap high enough.
  • Non-changing scenes: It is best if the scene remains unchanged. Some effects, like changing illumination, are unavoidable: nothing can be done about that, since the Earth rotates, and so does Mars. The field trials were done in an environment open to visitors, yet we managed to get most of the images without humans in them - so thanks to all the visitors who hid behind rocks when asked to.
  • Calibration: If anything happens to the camera (mounting brackets fall off, the camera is dropped, or a lens is replaced), it is best to calibrate it again (the information could be recovered by image processing alone, but that would be much harder). See calibration targets below:

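Returning to the exposure bullet above: here is a minimal sketch of how such an exposure series might be fused, assuming three 8-bit grayscale images of exactly the same scene. The function name and the simple hat-shaped weighting are illustrative, not the actual PRoVisG pipeline.

    import numpy as np

    def fuse_exposures(images, exposure_ms):
        # images:      list of 2D uint8 arrays of exactly the same scene
        # exposure_ms: matching exposure times, e.g. [5, 20, 80]
        num = np.zeros(images[0].shape, dtype=np.float64)
        den = np.zeros_like(num)
        for img, t in zip(images, exposure_ms):
            img = img.astype(np.float64)
            # Trust well-exposed (mid-range) pixels most; pixels near
            # black or white carry little information.
            w = 1.0 - 2.0 * np.abs(img / 255.0 - 0.5)
            num += w * (img / t)   # per-image radiance estimate
            den += w
        return num / np.maximum(den, 1e-6)   # fused radiance map

The short exposure then contributes the bright areas (e.g. sun reflections), the long exposure the shadowed ones.
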
During the field trials...
For the field tests to be valuable, it is important to generate reference data: what is the point of doing a test if you cannot assess its success at the end? White Styrofoam spheres were placed in the scene and on top of Bridget's mast, and their positions were measured geodetically; the PRoVisG 3D Vision products contain the coordinates of these spheres, making it possible to assess the accuracy of mapping and navigation. An IMU (delivered by Frank Trauthan, from DLR) was also mounted on Bridget, allowing GPS coordinates and heading, pitch and roll angles to be recorded.
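A minimal sketch of how such an accuracy check might look, assuming the geodetically measured and the reconstructed sphere centres are given as N x 3 arrays in the same coordinate frame (the function name is hypothetical):

    import numpy as np

    def mapping_rmse(measured, reconstructed):
        # 3D distance between each measured sphere centre and its
        # counterpart recovered in the 3D Vision products.
        errors = np.linalg.norm(measured - reconstructed, axis=1)
        return np.sqrt(np.mean(errors ** 2))   # root-mean-square error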


Image processing: from 2D to 3D
Camera setups are inspired by the human visual system. We have two eyes that allow us to detect and estimate distances, since the different parallaxes caused by objects at different distances can easily be “processed” by our brain: it detects a point in the scene in both views and generates its own 3D model. We do something similar with the cameras: we try to find, for every pixel in the left image (taken by the left camera), its corresponding scene point in the right image (taken by the right camera). This is called “matching”. Using the previous camera calibration, the resulting parallaxes and some algebra, 3D coordinates can be obtained. In principle, we would not need stereo vision (having two cameras, like two eyes) to generate 3D coordinates: the same could be obtained by moving a single camera (a process called “Structure from Motion” - SFM). However, SFM is in most cases less accurate when using few images, since we know the distance between the stereo cameras better than the length of a path driven by the rover.
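To make the matching step concrete, here is a minimal sketch using OpenCV's block matcher on a rectified stereo pair; the file names, focal length and baseline are assumed values for illustration, not the actual PanCam calibration:

    import cv2
    import numpy as np

    # Rectified stereo pair (file names are placeholders).
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # "Matching": for each pixel in the left image, find its
    # corresponding scene point in the right image.
    stereo = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # With the calibration known, the algebra is plain triangulation:
    #   Z = f * B / d   (f: focal length in pixels, B: baseline in metres)
    f_px, baseline_m = 1200.0, 0.5   # assumed calibration values
    depth = np.where(disparity > 0,
                     f_px * baseline_m / np.maximum(disparity, 1e-6),
                     0.0)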

Products that can be obtained from stereo images are:
  • Digital elevation model (DEM): It contains heights on a regular grid in the local horizontal plane - like a map where each square centimetre tells you how high the scene is at that position. We can project the image content onto this map, giving an “ortho” image, which looks like an aerial view (like Google Earth from above). DEM and ortho image are the most commonly used products nowadays. Sometimes DEMs are colour-coded for better visualisation, as in the following example (left: DEM; right: ortho image generated from a full panorama); a sketch of how a DEM might be rasterised from 3D points follows this list.

  • Distance map: It shows how far away the scene is from a single point, such as the spot between the PanCam stereo cameras. One can select different coordinate systems (e.g. spherical or cylindrical), and here too an ortho image can be projected from the image textures (above: distance map; below: ortho image). A distance map without ortho is called a panorama.
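As a sketch of the DEM idea referenced above: the 3D points obtained from matching can be rasterised onto a regular grid, keeping for each cell the height of the highest point that falls into it. The function is illustrative; real pipelines also interpolate holes and filter outliers.

    import numpy as np

    def make_dem(points, cell=0.01):
        # points: N x 3 array of (x, y, z) in metres on the local
        # horizontal plane; cell: grid spacing (1 cm, as in the text)
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        ix = ((x - x.min()) / cell).astype(int)
        iy = ((y - y.min()) / cell).astype(int)
        dem = np.full((iy.max() + 1, ix.max() + 1), np.nan)
        for i, j, h in zip(iy, ix, z):
            # Keep the highest point per grid cell.
            if np.isnan(dem[i, j]) or h > dem[i, j]:
                dem[i, j] = h
        return dem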

Another product obtained from imaging is visual odometry, i.e. information about the Rover's position and pointing. The landscape is imaged at intervals, and from the differences between the pictures the distance travelled by the rover can be inferred.
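A minimal sketch of the two-frame core of visual odometry, using OpenCV feature matching; ORB features and the essential-matrix decomposition here stand in for whatever the actual pipeline uses, and K is the calibrated 3x3 intrinsics matrix:

    import cv2
    import numpy as np

    def relative_pose(img_prev, img_curr, K):
        # Detect and describe features in both frames.
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(img_prev, None)
        kp2, des2 = orb.detectAndCompute(img_curr, None)

        # Match features between the two frames.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # The essential matrix encodes the camera motion between the
        # frames; RANSAC discards wrong matches.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t   # rotation, and translation direction (unit scale)

From images alone the translation is only known up to scale, which is one reason the IMU readings and the known stereo baseline matter.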

For PRoVisG, images are taken with many different cameras. Combining information from different sensors is tricky: data of different resolutions and wavelengths need to be brought together, as do Rover imagery and remote sensing images, all while dealing with inaccuracies and missing information. This is the computer vision science part of the PRoVisG project.

Use
Scientists will use PRoVisG products for deciding what to do next (by looking at global views) and for direct assessment of detailed parts of the products. Such global views allow the virtual combination of the Rover with its environment (example from prior work in PRoVisG):


Interactive (real-time) access to landscape reconstructions, such as the one provided from Clarach Bay (in the Cool Field Trials videos), gives an even more immersive view.

The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 218814 "PRoVisG".
