
Image Stitching

We have achieved on-board image stitching to form panorama images. The script we used was originally written by an author at PyImageSearch.com, after which we made some modifications to fit our needs. The original script stitches only two images, but we need to stitch several. By yawing the drone to capture images at different angles, we are then able to stitch the images as follows:

  1. Detecting keypoints (Difference of Gaussians (DoG), Harris corner detection, etc.) and extracting local invariant descriptors (Scale-Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), etc.) from the input images.

  2. Matching the descriptors between the two images.

  3. Using the Random Sample Consensus (RANSAC) algorithm to estimate a homography matrix (which relates two views of the same planar surface in space) from the matched feature vectors.

  4. Applying a warping transformation using the homography matrix.

By doing this, we are able to stitch two images together. To stitch more, we iterate the process over successive pairs of images, eventually forming one long panorama. The success of this algorithm depends on the images being fed into it. If the images are poorly taken, or if completely unrelated images are fed in, the algorithm produces garbage: incoherent, absurd output appears on screen because it failed to detect enough similarities (keypoints) between the two images. Success also depends on image quality; a difference in brightness, contrast, saturation, hue, or any other property of the pictures can result in peculiar output. If images are taken at odd angles, the stitching produces results with a lot of black space, which then needs to be cropped out.
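The pairwise iteration and the black-border cropping can both be sketched briefly. This is an illustrative outline rather than our exact code; `stitch_pair` stands for any function that stitches two images and returns `None` on failure, and the threshold in `crop_black_border` is an assumption:

```python
import numpy as np

def crop_black_border(pano, thresh=0):
    """Crop away the black filler that perspective warping leaves around
    the panorama, keeping the bounding box of the non-black pixels."""
    ys, xs = np.nonzero(pano > thresh)
    if len(ys) == 0:
        return pano  # entirely black; nothing to crop
    return pano[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def stitch_all(images, stitch_pair):
    """Fold a pairwise stitcher over a list of overlapping images,
    left to right, cropping black space after each step."""
    pano = images[0]
    for nxt in images[1:]:
        result = stitch_pair(pano, nxt)
        if result is None:
            break  # a failed pair would only add garbage; stop here
        pano = crop_black_border(result)
    return pano
```

Cropping after each step, rather than once at the end, keeps the intermediate panoramas small, which also helps the later matching steps.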


The script runs on the Raspberry Pi, although execution is slow. Because of this, it is more reasonable to stitch the images off-board, though basic stitching can still be done on-board. As the images get larger, the script takes longer to carry out its computations and slows the whole system down, which is something we cannot afford mid-flight, as drone control is the highest priority.


The script works very well with sample images from online; in the field, however, we expect to receive erroneous and poorly captured images. We will see how it holds up in our upcoming field test!

