Photogrammetric UAV Flight Visualization On The Cloud


Introduction to Photogrammetry and Aerial Imagery using UAVs

Photogrammetry is the science of making measurements, generating maps, or reconstructing 3D models of real-world objects from stereo imagery. The word “photogrammetry” derives from the Ancient Greek words photos (light), gramma (drawing) and metron (measure), meaning “measuring and drawing with the help of light”.

In aerial photogrammetry, the camera is mounted on an aircraft and is usually pointed vertically towards the ground. Multiple overlapping photos of the ground are taken as the aircraft flies along the flight path. When preparing the photogrammetric flight plan, it is important to ensure that the survey area is covered with the minimum number of flight strips (columns). Photogrammetric photos yield better results when the camera pointing direction is perpendicular to the terrain surface, whether over mountains, hills or seashores, to prevent excessive altitude differences and pixel-data calculation issues in the model. Fixed-wing aircraft have generally been preferred for long-endurance data collection; in recent years, however, UAVs/drones have been favored for their lower cost, smaller pilot/personnel requirements and operational ease. With the help of the GPS (Global Positioning System) receiver and IMU (Inertial Measurement Unit) attached to the aerial camera, the ground coordinates of the image projection centers and the camera rotations are obtained simultaneously, which has reduced the number of GCPs (Ground Control Points) required on the ground by 90%. A rough strip-count estimate is sketched below.
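As a minimal sketch of the strip-count consideration mentioned above, the following assumes a rectangular survey block and an idealized constant footprint; all names and numbers are illustrative, not mission-plan parameters:

```python
from math import ceil

# Minimal flight-strip estimate: how many parallel strips (columns) are
# needed to cover a survey block of a given width, given the image
# footprint width and the lateral overlap between neighboring strips.
def strip_count(area_width_m: float, footprint_width_m: float,
                lateral_overlap: float) -> int:
    if area_width_m <= footprint_width_m:
        return 1
    # Each additional strip extends coverage by the non-overlapping fraction.
    effective = footprint_width_m * (1.0 - lateral_overlap)
    return 1 + ceil((area_width_m - footprint_width_m) / effective)

# e.g. a 3 km wide block, 300 m footprint, 60% lateral overlap -> 24 strips
print(strip_count(3000, 300, 0.60))
```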

Photogrammetry Data Classification

Photogrammetry can be classified in several ways; the primary method is to differentiate by where the camera is located during photography:

* Aerial Photogrammetry: the camera is mounted on an aircraft and is usually pointed towards the ground. Nadir (vertical, 90°) is the most common collection angle.

* Close-range (Terrestrial) Photogrammetry: the camera is located on the ground, either carried by hand or mounted on a tripod or pole.

* Satellite Photogrammetry (Remote Sensing): the sensor is carried on satellites in Low Earth Orbit (LEO), Medium Earth Orbit (MEO) or Geosynchronous Equatorial Orbit (GEO).

Photogrammetric flight visualization using the CONNECTED OPS cloud solution allows users to quickly obtain information about the flight route, photo projections on the terrain, weather conditions (wind, etc.) and environmental variables, compiled into “missions”. In these realistic flight simulations, many different types of 4-dimensional data or flights can be simulated using different aircraft, UAV, rocket and satellite models. In this study, the photogrammetric flight is visualized using a fixed-wing aircraft. All the attribute information of the flight (photo ID, acquisition date and time), the coordinates of the image center (latitude, longitude, altitude) and the camera rotation parameters (heading, pitch, roll) are stored in and retrieved from a distributed large-scale database. The flight route was reconstructed from the aerial positions of the aircraft and the photos; the time difference between consecutive photo acquisitions was calculated so that the aircraft could be visualized at its real position and time in the air. In addition, the images taken can be previewed by clicking on the camera icon in the air. The general view of the flight route can be seen in the figure below, and a sketch of such a photo record and its replay timing follows the figure. The period and frequency of the transmitted flight location data directly impact the smoothness and accuracy of the visualization; the platform can manage thousands of data points per second per aircraft.

Cloud Ararat Team UAV Flight over Istanbul
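The per-photo attributes and the replay-timing step described above can be illustrated with a short sketch. The PhotoEvent layout and field names below are hypothetical, not the actual Connected Ops database schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record for one exposure, mirroring the attributes listed in
# the article: photo ID, acquisition time, image-center coordinates and
# camera rotation parameters.
@dataclass
class PhotoEvent:
    photo_id: str
    acquired_at: datetime
    lat: float      # degrees
    lon: float      # degrees
    alt: float      # meters above ellipsoid
    heading: float  # degrees
    pitch: float    # degrees
    roll: float     # degrees

def acquisition_intervals(events: list[PhotoEvent]) -> list[float]:
    """Seconds between consecutive exposures, used to replay the flight
    at its real pace along the reconstructed route."""
    ordered = sorted(events, key=lambda e: e.acquired_at)
    return [
        (b.acquired_at - a.acquired_at).total_seconds()
        for a, b in zip(ordered, ordered[1:])
    ]
```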

Camera Specs and Image Projections

The projected sizes of the photos on the ground vary according to the camera type, image resolution and flight altitude. In this project, stereo images with 80% forward overlap and 60% lateral overlap at 5 cm GSD (ground sampling distance) were taken using a 24 MP camera at 6000×4000 pixel resolution. The image projections on the ground were calculated with a custom workbench built in FME software and clamped to the terrain. An overview of the FME workbench that calculates the image projection coordinates on the ground, and the flight visualization video, can be seen below.
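The relationship between GSD, image resolution and footprint size can be checked with a few lines. The footprint numbers follow directly from the figures quoted above; the focal length and pixel pitch are assumed values for illustration and are not specified in the text:

```python
# Nadir footprint from the quoted specs: 6000 x 4000 px at 5 cm GSD.
GSD_M = 0.05                      # ground sampling distance, meters/pixel
WIDTH_PX, HEIGHT_PX = 6000, 4000

footprint_w = WIDTH_PX * GSD_M    # 300 m across track
footprint_h = HEIGHT_PX * GSD_M   # 200 m along track

# Flying height estimate H = GSD * f / pixel_pitch. The 24 mm lens and
# 4 um pixel pitch below are ASSUMED, not stated in the article.
FOCAL_MM = 24.0
PIXEL_PITCH_MM = 0.004
flying_height = GSD_M * FOCAL_MM / PIXEL_PITCH_MM  # = 300 m above ground

print(footprint_w, footprint_h, flying_height)
```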

Data Processing Flow

The rotation parameters of the camera were acquired during the flight with the help of the IMU. Since the rotation parameters acquired during the flight belong to the camera, not the UAV, the heading of the UAV can be calculated by subtracting 90° from the heading values of the nadir camera used in the flight. Flight information and IMU data may also be transmitted via separate Connected Ops communication channels. Since the roll and pitch values are the same when the camera is fixed to the aircraft, all rotation parameters of the UAV can be derived from the camera IMU as well; on gimballed payloads, this approach would not work. However, these parameters are not in the proper data structure for direct visualization on the virtual globe, so some unit and format conversions are required: rotation values are converted from degrees to radians first, and then from direction angles (heading, pitch, roll) to quaternion (x, y, z, w) values. Finally, the live rotations of the model are visualized. A video showing the real-time rotations of a UAV can be seen below. Although the location data frequency is low, external effects such as wind and turbulence are still visible.
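The conversion chain described above (degrees to radians, then heading/pitch/roll to quaternion) can be sketched as follows. The yaw-pitch-roll (Z-Y-X) rotation order and sign conventions below are assumptions for illustration; the exact conventions depend on the IMU and the virtual-globe engine, so this is not the Connected Ops implementation:

```python
from math import cos, sin, radians

# Convert heading/pitch/roll in degrees to a unit quaternion (x, y, z, w),
# using the common yaw-pitch-roll (Z-Y-X) rotation order.
def hpr_to_quaternion(heading_deg, pitch_deg, roll_deg):
    # Degrees -> radians, then half-angles for the quaternion formula.
    h = radians(heading_deg) / 2
    p = radians(pitch_deg) / 2
    r = radians(roll_deg) / 2
    ch, sh = cos(h), sin(h)
    cp, sp = cos(p), sin(p)
    cr, sr = cos(r), sin(r)
    x = sr * cp * ch - cr * sp * sh
    y = cr * sp * ch + sr * cp * sh
    z = cr * cp * sh - sr * sp * ch
    w = cr * cp * ch + sr * sp * sh
    return x, y, z, w

# Camera heading -> UAV heading for a fixed nadir camera, as noted above:
# subtract 90 degrees from the camera heading (values are illustrative).
uav_quat = hpr_to_quaternion(135.0 - 90.0, 2.5, -1.0)
print(uav_quat)
```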

In conclusion, 4D data visualization describes the geometrical, graphical and semantic aspects of geospatial objects and accurately captures their changes over time. In addition to representing the dynamic positions of geospatial objects, it can describe geospatial attributes that change over time, including but not limited to visual and semantic properties. Connected Ops can stream massive time-based geo-location datasets to web, mobile, tablet and operations video-wall clients. The aerospace and space industries have a high demand for progressive transmission of geospatial information. Different types of 4D data (x, y, z, t) can be visualized automatically on the virtual globe. Visualizing such data provides situational awareness about the flight without requiring any technical knowledge, and flights and various 4D mission data provide a valuable tool for flight simulations and pre-mission tests.

Comparison of Photogrammetry vs LiDAR

LiDAR (Light Detection and Ranging) is an active sensor that emits pulses of laser light and measures the time a pulse takes to travel from the transmitter to the target and back to the receiver. Data collection can be performed during the day or at night; however, it is not an all-weather sensor, as clouds, rain or fog degrade LiDAR performance. The laser pulse can penetrate the vegetation cover of the terrain: objects like trees or corn fields may produce several separately recordable reflections of a single incident laser pulse, and recording these partial returns (vegetation and ground) allows measurement of both vegetation canopy height and ground surface elevation. Just like aerial photogrammetry, LiDAR uses GPS and an IMU for recording the coordinates and orientation of the sensor. A minimal range-computation sketch follows; after it, the advantages and disadvantages of Photogrammetry and LiDAR are summarized.
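As a quick illustration of the time-of-flight principle described above, with illustrative values only:

```python
# One-way range from a round-trip pulse time: the pulse travels to the
# target and back, so the range is half the round trip times light speed.
C = 299_792_458.0  # speed of light in vacuum, m/s

def lidar_range(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

print(lidar_range(6.67e-6))  # ~1000 m for a ~6.67 microsecond round trip
```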

Photogrammetry

* Pros

  • Can be used to generate ortho/true orthophotos and object textures.
  • Low operational data acquisition cost.
  • Can generate a denser point cloud than LiDAR (depending on image quality and matching method).
  • High horizontal accuracy.
  • Support for multi-spectral sensors.

* Cons

  • Many error sources (image radiometry, shadows, blurriness, etc.).
  • Low vertical accuracy.
  • Long flight time (small swath width).
  • Long processing time.

LiDAR

* Pros

  • Quick acquisition of a ready-to-use DSM (Digital Surface Model).
  • Less flight time (large swath width).
  • High vertical accuracy.
  • Can be used day and night.
  • Laser beams can penetrate vegetation and yield multiple return values.

* Cons

  • High operational costs in some applications; heavy equipment.
  • Very large datasets are difficult to interpret and require adjustment.
  • Requires experienced and skilled data analysts for processing.
  • Does not work on water surfaces (the laser is absorbed, so there is no reflection).

Would you like to cooperate, or hear more about our solutions?