Lightfield Camera
Jingyi Li, cs194-bx

In this assignment, I implemented two effects using real lightfield data: depth refocusing and aperture adjustment.


Depth Refocusing

As this paper by our very own Ren Ng describes, we can create "lightfields" of objects by taking photographs of them from a grid of positions on a plane orthogonal to the optic axis. I used the data available in the Stanford Light Field Archive to reconstruct some lightfield effects.

I chose the amethyst to demonstrate depth refocusing, since she's a cool character on Steven Universe. The light field comes as a 17x17 grid of images, each slightly offset from its neighbors. By computing each image's offset from the center image, shifting the image by that offset times a scaling factor, and finally averaging over all 289 images, I was able to emulate a camera lens refocusing. I used scales from -2 to 2, in 0.25 step increments. A more negative scale brings the back of the object into focus, while a more positive one brings the front into focus.
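Here's a minimal sketch of that shift-and-average step, assuming the sub-aperture views are already loaded into a dict keyed by their (row, col) grid position (the function and variable names are just illustrative):

    import numpy as np

    def refocus(images, scale, center=8):
        """Shift-and-average refocusing over a grid of sub-aperture views.

        images: dict mapping (row, col) grid positions to HxWxC float arrays
                (for the 17x17 grid, the center view is at index 8).
        scale:  scaling factor; more negative focuses the back of the scene,
                more positive the front.
        """
        acc = np.zeros_like(next(iter(images.values())))
        for (row, col), img in images.items():
            # Offset of this view from the central image...
            dy, dx = row - center, col - center
            # ...shift by that offset times the scaling factor. np.roll wraps
            # pixels around the edges, which is what causes border artifacts
            # at large shifts.
            acc += np.roll(img, (round(scale * dy), round(scale * dx)),
                           axis=(0, 1))
        return acc / len(images)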


If you stare at it long enough, it looks like the crystals are alive.

Aperture Adjustment

You can also emulate adjusting the aperture/depth of field with lightfield data. This is done by averaging only the photos within some "radius" of the center image, defined by their distance from it in the grid. The larger the radius, the larger the simulated aperture, and the shallower the depth of field. I demonstrate this on both the amethyst and the refractive ball.
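A sketch of the view selection, reusing the images dict from above and assuming Euclidean distance in grid units as the metric (the choice of metric is mine):

    def adjust_aperture(images, radius, center=8):
        """Average only the views whose grid position lies within `radius`
        of the central view, emulating a synthetic aperture of that size."""
        selected = [img for (row, col), img in images.items()
                    if (row - center) ** 2 + (col - center) ** 2 <= radius ** 2]
        # More views averaged = bigger aperture = shallower depth of field.
        return sum(selected) / len(selected)

With radius = 0 this reproduces the single center image (a pinhole), while a radius of 8 or more pulls in the whole 17x17 grid.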

Bells & Whistles: My Own Lightfield Data

I recently found a website which sold really cute Japanese plushies for really cheap, and because I have no self-control, I bought 10 things. They've been arriving one at a time, so I decided to try to model light field data using them. To the right we see a Baymax, Flareon, Cyndaquil, and some bamboo I bought myself from 99 Ranch. We also see my curtain as a fancy backdrop.

I took 25 photos in total, trying to approximate a 5x5 grid of viewpoints, to run my depth refocusing and aperture adjustment algorithms on.

Unfortunately, things did not work out so well. I am not a Lytro camera, and my manual moving of the camera around the objects was neither well calibrated nor very precise. As a result, all the averages are very off. Furthermore, I had no way of measuring what a "pixel" in real life would be, so I had to experiment with how many digital pixels to actually shift. And since my camera's images are around 5000x7000 px, I had to resize them first.
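The preprocessing ended up being: downscale aggressively, then brute-force the scaling factor. A sketch of that loop, assuming the photos live in a plushies/ folder and sort into row-major grid order (both hypothetical):

    import glob
    import cv2
    import numpy as np

    # Load the 25 handheld photos and shrink them ~10x so the shifts
    # stay manageable.
    paths = sorted(glob.glob("plushies/*.jpg"))
    images = {(i // 5, i % 5):
                  cv2.resize(cv2.imread(p), None, fx=0.1, fy=0.1).astype(np.float64)
              for i, p in enumerate(paths)}

    # Sweep candidate scaling factors and eyeball which one converges best.
    for scale in range(13, 18):
        acc = np.zeros_like(next(iter(images.values())))
        for (row, col), img in images.items():
            dy, dx = row - 2, col - 2  # center of the 5x5 grid
            acc += np.roll(img, (scale * dy, scale * dx), axis=(0, 1))
        cv2.imwrite(f"refocus_{scale}.png", (acc / len(images)).astype(np.uint8))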

While it may not look like how it's "supposed to", the blurred images give a cool painterly effect!


After some experimentation, I used scales from 13 to 17 in increments of 1. Ironically, since 15 seemed to be the scale that converged best, the depth refocusing looks more like an aperture adjustment! You can also see border artifacts, since the pixels wrap around when rolled that far.


Since it was only a 5x5 grid, the aperture animation is only 4 frames. I mean, it does get out of focus very quickly...

Takeaways

This project was very simple, which I greatly appreciated. In it, I learned that you too can start a multi-million-dollar company with a few matrix operations...given that you have very talented hardware engineers, that is. I also learned to appreciate the ingenuity of the lightfield's simple but powerful dual-grid structure.