

In Project 2, I explored different ways of blending images together. Although this method worked well, it was frustrating having to create masks that fit each image perfectly (i.e. perfect cut-outs of the source image to blend into the target image). To remedy this issue, we explore the concept of Poisson Blending. Poisson Blending works by setting up a system of equations that maximally preserves the gradients of the source image while keeping the background pixels of the target image unchanged. This fundamentally ignores the intensity of our pixel values (i.e. it emphasizes image features over color) but is still quite effective.

Before implementing Poisson Blending, let's attempt to reconstruct a toy image S of dimension (h, w) using only its x and y gradients. To do this, we want to solve the system of equations Av = b, where v is a vector of length h * w that represents a flattened image. We set up our system of equations as follows:

- Flatten the 2D image S into a 1D vector s of length h * w.
- Create a matrix A with dimensions (2 * h * w + 1, h * w). This matrix acts as the gradient operator:
  - The first h * w rows represent the x-gradients of S (where s(x+1, y) - s(x, y) = v(x+1, y) - v(x, y)).
  - The next h * w rows represent the y-gradients of S (where s(x, y+1) - s(x, y) = v(x, y+1) - v(x, y)).
  - The last row represents the constraint s(0, 0) = v(0, 0). This is necessary for generating a unique solution for v (without this condition, we could generate an infinite number of solutions by adding an arbitrary constant value to v).
- Generate vector b by setting it to As. Note that b therefore has length 2 * h * w + 1, one entry per equation; this vector represents the gradients of s, plus the corner constraint.

Solving Av = b generates a near-replica of the original flattened image s. The before and after are shown below with an MSE of 7.678e-12. NOTE: these images are not the same, but they look nearly identical.

Now that we've solved the toy problem, we can implement Poisson Blending! Poisson Blending attempts to minimize the following equation:

v = argmin_v Σ_{i in S, j in N_i ∩ S} ((v_i - v_j) - (s_i - s_j))^2 + Σ_{i in S, j in N_i, j not in S} ((v_i - t_j) - (s_i - s_j))^2

Let v be the resulting image, s the flattened source image that we want to blend, and t the flattened target image that we are blending s into. N_i refers to the neighbors of pixel i (i.e. the pixels directly above/below and left/right of i). Additionally, S in this case refers to the set of pixels that are contained within our defined mask. The first sum attempts to minimize the difference between the gradients of our source image inside the mask and the gradients of the resulting image v. The second sum captures the edge case where one of the neighbors falls outside of the mask: in that case, we use the target image t as the parameter for that specific gradient.

We can set up the above minimization problem as a system of equations Av = b:

- Let A be a matrix with dimensions (h * w, h * w) and b a vector of length h * w (one row for each pixel in s).
- If the current pixel is outside the mask S, set up the equation v(x, y) = t(x, y) in row A_i, b_i.
- If the current pixel is inside the mask S, then for each neighbor n of s_i: add 1 to the diagonal entry A_{i,i} and add the gradient s_i - s_n to b_i; if n is also inside the mask, subtract 1 from A_{i,n}, otherwise add the known target value t_n to b_i.
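The per-pixel set-up above can be sketched in code. This is a minimal illustration rather than the author's implementation: it assumes single-channel float images of identical shape, a boolean mask, and SciPy's sparse solver, and the function name and signature are my own.

```python
import numpy as np
import scipy.sparse
from scipy.sparse.linalg import spsolve

def poisson_blend(source, target, mask):
    """Sketch of the (h*w, h*w) Poisson blending system described above.
    source, target: 2D float arrays of equal shape; mask: boolean array."""
    h, w = source.shape
    n = h * w
    A = scipy.sparse.lil_matrix((n, n))
    b = np.zeros(n)
    for y in range(h):
        for x in range(w):
            i = y * w + x
            if not mask[y, x]:
                # Outside the mask: constrain v(x, y) = t(x, y).
                A[i, i] = 1.0
                b[i] = target[y, x]
                continue
            # Inside the mask: one gradient term per 4-neighbor.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w):
                    continue
                A[i, i] += 1.0
                b[i] += source[y, x] - source[ny, nx]  # gradient s_i - s_n
                j = ny * w + nx
                if mask[ny, nx]:
                    A[i, j] -= 1.0          # unknown neighbor v_n
                else:
                    b[i] += target[ny, nx]  # known neighbor value t_n
    v = spsolve(A.tocsr(), b)
    return v.reshape(h, w)
```

For color images the same solve would simply be repeated per channel; a dense (h*w, h*w) matrix would be far too large for real images, which is why a sparse representation is used here.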
When generating lightfield data, we move the camera and have it capture photos from a variety of angles. If we move the camera while keeping the lens' optical axis unchanged, objects that are closer to the camera vary their positions significantly between images, whereas objects that are farther away don't vary as much. Averaging all the images together without modification generates a photo with nearby objects appearing blurry and far-away objects appearing sharp. We can take advantage of this phenomenon to generate images that focus on different objects at different depths (i.e. depth refocusing).

Each lightfield dataset contains 289 sub-aperture images I_{x,y} in a 17x17 grid (0-indexed) with corresponding (u, v) values that signify the camera's location. We define the center image as I_{8,8} with a corresponding (u_c, v_c) location. To refocus an image, we first shift every image I_{x,y} in the lightfield by C * (u_I - u_c, v_I - v_c) (i.e. shift every sub-aperture towards the center image), then average all the images together. Different values of C will cause the camera's focus to change locations. Some examples of this are provided below for different C values.

[Figure: Knight Aperture Adjustment (C=-0.175, r=0 to r=100)]

Summary

Overall, I thought it was interesting how sub-apertures within lightfield data could be combined in very simple ways to simulate changes in depth and aperture, effects that are now commonplace in modern cameras and smartphones. It also gave me a new perspective on image processing, as it helps bridge the gap between the 3D world (represented by light values and directions) and the 2D world (pixels on a flat screen) in a fun and interesting way.
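For concreteness, the shift-and-average refocusing described above can be sketched as follows. This is an assumed implementation, not the author's code: it indexes sub-apertures by grid position, substitutes grid offsets for the dataset's true (u, v) coordinates, and uses np.roll, which wraps pixels around the border instead of cropping them.

```python
import numpy as np

def refocus(lightfield, C):
    """Shift each sub-aperture towards the center image by C * (grid offset),
    then average. lightfield: array of shape (N, N, H, W, 3)."""
    n0, n1 = lightfield.shape[:2]
    cu, cv = n0 // 2, n1 // 2  # center image, e.g. I_{8,8} for a 17x17 grid
    acc = np.zeros(lightfield.shape[2:], dtype=np.float64)
    for u in range(n0):
        for v in range(n1):
            dy = int(round(C * (u - cu)))  # grid offset stands in for u_I - u_c
            dx = int(round(C * (v - cv)))
            acc += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
    return acc / (n0 * n1)
```

Sweeping C through a range of values and saving each result would reproduce the focus-sweep examples mentioned above.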

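The aperture-adjustment caption above (r = 0 to r = 100) suggests averaging only the sub-apertures within some radius r of the center image, with larger r mimicking a larger aperture. The following is a sketch under that assumption; the radius-based selection rule and the function name are mine, not taken from the writeup.

```python
import numpy as np

def adjust_aperture(lightfield, C, r):
    """Average only sub-apertures within grid radius r of the center image,
    after the same refocusing shift; larger r simulates a larger aperture."""
    n0, n1 = lightfield.shape[:2]
    cu, cv = n0 // 2, n1 // 2
    acc = np.zeros(lightfield.shape[2:], dtype=np.float64)
    count = 0
    for u in range(n0):
        for v in range(n1):
            if (u - cu) ** 2 + (v - cv) ** 2 > r ** 2:
                continue  # outside the simulated aperture
            dy = int(round(C * (u - cu)))
            dx = int(round(C * (v - cv)))
            acc += np.roll(lightfield[u, v], shift=(dy, dx), axis=(0, 1))
            count += 1
    return acc / count
```

With r = 0 only the center image is used (a pinhole-like aperture), while a large r averages the whole grid.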