CS 180 Proj 4A: Image Warping and Mosaicing

Part 1: Shoot the Pictures

Here are some sample images I took by fixing the center of projection (COP) and rotating my camera while capturing the photos:

[Source image pairs taken for the mosaics]

Part 2: Recover Homography Matrices

First, I used the given online tool to pick corresponding points between the chosen images. I then wrote the computeH function, which computes the homography matrix that maps the first set of corresponding points onto the second. Below are the equations I used to set up the system. SVD was used to solve it: the homography was taken as the last vector of V (the right singular vector for the smallest singular value), reshaped into a 3x3 matrix, and scaled so that the bottom-right entry equals 1.

Writing H = [[h1, h2, h3], [h4, h5, h6], [h7, h8, h9]], each correspondence (x, y) -> (x', y') contributes two equations:

h1*x + h2*y + h3 - x'*(h7*x + h8*y + h9) = 0
h4*x + h5*y + h6 - y'*(h7*x + h8*y + h9) = 0

Stacking these rows for all point pairs gives a homogeneous system A*h = 0, whose least-squares solution is the right singular vector of A with the smallest singular value.
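Below is a minimal numpy sketch of such a computeH, assuming the point sets are passed as N x 2 arrays (the argument names and interface are my own, not necessarily the project's exact signature):

import numpy as np

def computeH(pts1, pts2):
    """Estimate the 3x3 homography H mapping pts1 -> pts2 (N x 2 arrays, N >= 4)."""
    A = []
    for (x, y), (xp, yp) in zip(pts1, pts2):
        A.append([x, y, 1, 0, 0, 0, -xp * x, -xp * y, -xp])
        A.append([0, 0, 0, x, y, 1, -yp * x, -yp * y, -yp])
    A = np.array(A)
    # The solution of A h = 0 is the right singular vector with the smallest
    # singular value, i.e. the last row of Vt returned by numpy's SVD.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # scale so the bottom-right entry is 1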

Part 3: Image Warping

I then used the homography matrix computed in the previous part to warp one image so that it lines up with the other. Inverse warping was used: each pixel in the warped output is mapped back to a location in the original input image and sampled there. To stitch the images together into a panorama, I computed the bounding box containing both images by taking the minimum and maximum x and y values across the warped image and the unwarped image. The bounding box was then filled with the pixels of the warped and unwarped images, and scipy.interpolate.griddata was used for the pixel interpolation.
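Here is a simplified sketch of such an inverse warp, assuming the output canvas size and bounding-box offset have already been computed; for brevity it uses nearest-neighbor sampling rather than scipy.interpolate.griddata:

import numpy as np

def warpImage(im, H, out_shape, offset=(0, 0)):
    # im: H x W x 3 source image; out_shape: (rows, cols) of the output canvas;
    # offset: (x, y) shift of the canvas origin, from the mosaic bounding box.
    rows, cols = out_shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    coords = np.stack([xs.ravel() + offset[0],
                       ys.ravel() + offset[1],
                       np.ones(rows * cols)])
    # Inverse warping: map every output pixel back into the source image.
    src = np.linalg.inv(H) @ coords
    src_x, src_y = src[0] / src[2], src[1] / src[2]
    # Keep only the samples that land inside the source image.
    valid = (src_x >= 0) & (src_x <= im.shape[1] - 1) & \
            (src_y >= 0) & (src_y <= im.shape[0] - 1)
    out = np.zeros((rows, cols, 3), dtype=im.dtype)
    out.reshape(-1, 3)[valid] = im[np.round(src_y[valid]).astype(int),
                                   np.round(src_x[valid]).astype(int)]
    return out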

[Warping and stitching results for each image pair]

Part 4: Rectification

To test the implementation of my computeH and warpImage functions, I used them to rectify some sample images I took. The results are displayed below:
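As a small illustration, rectification only needs computeH and warpImage plus four hand-picked corners of a planar surface mapped to an axis-aligned rectangle; the filename and coordinates below are made up:

import numpy as np
import skimage.io as skio

image = skio.imread('painting.jpg') / 255.0  # hypothetical input photo

# Four clicked corners of the planar object in the photo (x, y) ...
im_pts = np.array([[412, 310], [905, 285], [930, 760], [395, 742]], dtype=float)
# ... mapped to the corners of a 500 x 450 rectangle.
rect_pts = np.array([[0, 0], [500, 0], [500, 450], [0, 450]], dtype=float)

H = computeH(im_pts, rect_pts)                         # from Part 2
rectified = warpImage(image, H, out_shape=(450, 500))  # from Part 3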

[Rectification results]

Part 5: Blending into a Mosaic

To blend the stitched images, I first created an alpha mask from the distance transforms of the two images, setting the mask to distanceTransform1 > distanceTransform2. I then used my code from Project 2 to build Gaussian and Laplacian stacks, using two levels for each, and blended the stacks with the mask. The results of the blending are shown below:
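A compact sketch of this blending step, assuming OpenCV for the distance transforms and for the Gaussian blurs that form a two-level stack (the project itself reused my Project 2 stack code, and the blur size here is an arbitrary choice):

import cv2
import numpy as np

def blend_mosaic(im1, im2, levels=2, ksize=31):
    # im1, im2: aligned float canvases in [0, 1], zero where a canvas is empty.
    # Distance to the nearest empty pixel: larger toward each image's interior.
    dist1 = cv2.distanceTransform((im1.sum(axis=2) > 0).astype(np.uint8), cv2.DIST_L2, 5)
    dist2 = cv2.distanceTransform((im2.sum(axis=2) > 0).astype(np.uint8), cv2.DIST_L2, 5)
    mask = (dist1 > dist2).astype(np.float32)

    out = np.zeros_like(im1)
    g1, g2, gm = im1, im2, mask
    for _ in range(levels):
        b1 = cv2.GaussianBlur(g1, (ksize, ksize), 0)
        b2 = cv2.GaussianBlur(g2, (ksize, ksize), 0)
        lap1, lap2 = g1 - b1, g2 - b2          # Laplacian = level minus its blur
        w = gm[..., None]
        out += w * lap1 + (1 - w) * lap2       # blend this frequency band
        g1, g2 = b1, b2
        gm = cv2.GaussianBlur(gm, (ksize, ksize), 0)
    w = gm[..., None]
    out += w * g1 + (1 - w) * g2               # blend the low-frequency residual
    return np.clip(out, 0, 1)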

[Blended mosaic results]

CS 180 Proj 4B: Feature Matching for Autostitching

In this part, we automatically select corresponding points to stitch images together, which is far less tedious and time-consuming than picking all of the correspondences by hand. We follow the steps outlined in this research paper: https://inst.eecs.berkeley.edu/~cs180/fa24/hw/proj4/Papers/MOPS.pdf.

Part 1: Harris Interest Point Detector

First, I used the Harris interest point detector to find key points in each image, namely its "corners". I used the functions provided to us for this task, including corner_harris from skimage.feature. The results are displayed below, with the selected points overlaid on the images.
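A sketch of this detection step using the same skimage functions (the wrapper name and min_distance value are my own choices):

import numpy as np
from skimage.feature import corner_harris, peak_local_max

def get_harris_points(gray, min_distance=2):
    # gray: float grayscale image. Returns the corner-strength map and the
    # (row, col) coordinates of its local maxima.
    h = corner_harris(gray, method='eps', sigma=1)
    coords = peak_local_max(h, min_distance=min_distance)
    return h, coords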

Sproul Image1 with Harris Points Overlaid
Sproul Image2 with Harris Points Overlaid
Room Image1 with Harris Points Overlaid
Room Image2 with Harris Points Overlaid
Zellerbach Image1 with Harris Points Overlaid
Zellerbach Image2 with Harris Points Overlaid

Part 2: ANMS: Adaptive Non-Maximal Suppression

The next step is Adaptive Non-Maximal Suppression (ANMS), which reduces redundancy among the Harris points and ensures that the kept points are more evenly distributed across the image. Conceptually, the suppression radius starts at 0 and is gradually increased; as it grows, more and more interest points are suppressed, until only the desired number of points remains. Each point's minimum suppression radius is defined below:

r_i = min_j |x_i - x_j|, taken over all points x_j with f(x_i) < c_robust * f(x_j), where f(x) is the Harris corner strength at x.
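A sketch of ANMS under this definition, using the corner-strength map h and peak coordinates from the previous step and the paper's suggested c_robust = 0.9 (the function name and n_keep default are my own):

import numpy as np

def anms(coords, h, n_keep=500, c_robust=0.9):
    # coords: (N, 2) Harris peak coordinates; h: corner-strength map.
    strengths = h[coords[:, 0], coords[:, 1]]
    radii = np.full(len(coords), np.inf)
    for i in range(len(coords)):
        # Points that are sufficiently stronger than point i can suppress it.
        stronger = strengths[i] < c_robust * strengths
        if np.any(stronger):
            d = np.linalg.norm(coords[stronger] - coords[i], axis=1)
            radii[i] = d.min()   # minimum suppression radius r_i
    keep = np.argsort(-radii)[:n_keep]   # keep the points with the largest radii
    return coords[keep]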
Sproul Image1 with ANMS Points Overlaid
Sproul Image2 with ANMS Points Overlaid
Room Image1 with ANMS Points Overlaid
Room Image2 with ANMS Points Overlaid
Zellerbach Image1 with ANMS Points Overlaid
Zellerbach Image2 with ANMS Points Overlaid

Part 3: Feature Descriptor Extraction

After ANMS, I implemented feature descriptor extraction. For each interest point, an 8x8 patch is sampled from a larger 40x40 window around the point, with a spacing of 5 pixels between samples. Each patch is then normalized so that its mean is 0 and its standard deviation is 1. The resulting patches are shown below:
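A sketch of this extraction, assuming a grayscale image and skipping points whose 40x40 window would fall outside the image (an edge-handling choice of mine):

import numpy as np

def extract_descriptors(gray, coords):
    descriptors, kept = [], []
    for r, c in coords:
        if r < 20 or c < 20 or r >= gray.shape[0] - 20 or c >= gray.shape[1] - 20:
            continue  # the 40x40 window would leave the image
        patch = gray[r - 20:r + 20:5, c - 20:c + 20:5]         # 8x8 via stride-5 sampling
        patch = (patch - patch.mean()) / (patch.std() + 1e-8)  # mean 0, std 1
        descriptors.append(patch.ravel())
        kept.append((r, c))
    return np.array(descriptors), np.array(kept)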

Sproul Image1 Patch
Sproul Image2 Patch
Room Image1 Patch
Room Image2 Patch
Zellerbach Image1 Patch
Zellerbach Image2 Patch

Part 4: Feature Matching

Next, features are matched by comparing descriptors between the two images. The matched features are shown below; a few of the matches are incorrect and don't actually correspond.
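For reference, here is a sketch of one common matching scheme, nearest-neighbor search with Lowe's ratio test on the first- and second-closest descriptor distances, as suggested in the MOPS paper (the threshold value is an assumption):

import numpy as np
from scipy.spatial.distance import cdist

def match_features(desc1, desc2, ratio_thresh=0.6):
    # desc1, desc2: (N, 64) descriptor arrays. Returns (i, j) index pairs.
    dists = cdist(desc1, desc2, 'sqeuclidean')
    matches = []
    for i, row in enumerate(dists):
        nn = np.argsort(row)[:2]   # two closest descriptors in the other image
        # Accept the match only if the best distance is much smaller than the
        # second best (Lowe's ratio test).
        if row[nn[0]] / (row[nn[1]] + 1e-12) < ratio_thresh:
            matches.append((i, nn[0]))
    return np.array(matches)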

Sproul Image1 Feature Matches
Sproul Image2 Feature Matches
Room Image1 Feature Matches
Room Image2 Feature Matches
Zellerbach Image1 Feature Matches
Zellerbach Image2 Feature Matches

Part 5: RANSAC

The goal of this part is to eliminate the outlier matches and find the largest set of inliers. This is done by repeatedly selecting 4 random correspondences, computing the homography they define, and counting how many of the remaining matches that homography maps to within a small error threshold (the inliers). After the iterations are over, we keep the homography that produces the most inliers. We can see from the results that all of the points that don't match up between the two images are eliminated.
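A sketch of this RANSAC loop, reusing computeH from Part A (the iteration count and inlier threshold are assumed values):

import numpy as np

def ransac_homography(pts1, pts2, n_iters=5000, thresh=2.0):
    # pts1, pts2: (N, 2) matched points. Returns the best homography and its inliers.
    best_inliers = np.array([], dtype=int)
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])   # homogeneous coordinates
    for _ in range(n_iters):
        idx = np.random.choice(len(pts1), 4, replace=False)
        H = computeH(pts1[idx], pts2[idx])                # exact fit to the 4 samples
        proj = (H @ pts1_h.T).T
        proj = proj[:, :2] / proj[:, 2:3]                 # back to (x, y)
        errs = np.linalg.norm(proj - pts2, axis=1)
        inliers = np.where(errs < thresh)[0]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Refit the homography on all inliers of the best model.
    return computeH(pts1[best_inliers], pts2[best_inliers]), best_inliers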

Sproul Image1 RANSAC Matches
Sproul Image2 RANSAC Matches
Room Image1 RANSAC Matches
Room Image2 RANSAC Matches
Zellerbach Image1 RANSAC Matches
Zellerbach Image2 RANSAC Matches

Final Results

The results from both the automatic and manual stitching are displayed below.

Sproul Mosaic With Automatic Correspondences
Sproul Mosaic With Manual Correspondences
Room Mosaic With Automatic Correspondences
Room Mosaic With Manual Correspondences
Zellerbach Mosaic With Automatic Correspondences
Zellerbach Mosaic With Manual Correspondences

What I Learned

The coolest thing I learned from this project is how to accurately auto-select corresponding points between two images and how that leads to an accurately stitched result. It was really interesting to try out two different approaches for stitching images together and to see which method produces better results.