Time and location: TR 2:20 - 3:35 pm in ZACH 350
Office hours: TR 4:00 - 5:00 pm
Office location: 406 PETR
Campuswire: link in the syllabus
Computational photography is a collection of computational algorithms and system designs (e.g., sensors, optics) that overcome the limitations of standard cameras and enable novel applications. In recent years, interest in computational photography has grown because of the widespread use of cameras by the general public through smartphones and other inexpensive imaging devices. In this course, we first discuss cameras and the image formation process. We then study basic image and video processing tools such as sampling, filtering, and pyramids. Finally, we discuss several image-based algorithms, such as image retargeting, high dynamic range imaging, and texture synthesis.
Undergraduate: (CSCE 315 or CSCE 331) and (MATH 304 or MATH 311)
Graduate: Graduate students are expected to have a similar background.
The primary reference for the course is the following book, which covers most of the topics related to computational photography:
Computer Vision: Algorithms and Applications, by Richard Szeliski, 2010
Each assignment loses 20% of its grade for each day it is late. However, you are granted 5 late days for the entire course, which you may use on any of the assignments (note that you CANNOT use them for the final project!). You will not receive any bonus for unused late days. All assignments are due at 11:59 pm on Canvas unless otherwise stated. Note that one minute late and 23 hours late both count as one full day.
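As a rough sketch, the late-day and penalty arithmetic above could be expressed as follows (the function name and signature are illustrative only, not official course tooling):

```python
import math

def late_penalty(hours_late: float, late_days_remaining: int) -> tuple[float, int]:
    """Return (fraction of grade lost, late days left) for one assignment.

    Any partial day counts as a full day: one minute late and
    23 hours late are both one late day.
    """
    days_late = max(0, math.ceil(hours_late / 24))
    # Granted late days absorb lateness with no penalty.
    free = min(days_late, late_days_remaining)
    penalized_days = days_late - free
    return 0.20 * penalized_days, late_days_remaining - free
```

For example, submitting one minute late with all 5 late days left costs nothing but consumes one late day, while submitting 30 hours late with no late days left loses 40% of the grade.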
The assignments in this class are individual unless otherwise stated. For the individual assignments, all code must be written by the student. If indicated in the assignment's instructions, the use of external libraries for performing basic operations is allowed. However, using outside source code is NOT permitted. Moreover, collaborating with other students on assignments beyond general discussions is NOT allowed. In general, looking at other students' code and/or written answers is NOT allowed. Students with any questions regarding this policy should contact the instructor. Students should not post their code online, even after the assignment deadline has passed.
|Jan 18||Introduction and Overview||pptx||Szeliski Ch. 1|
|Jan 20||Camera and Image Formation||pptx||Szeliski Ch. 2|
|Jan 25||Camera and Image Formation||See above||Szeliski Ch. 2|
|Feb 3||Sampling, Frequency, and Filtering|
|Feb 8||Sampling, Frequency, and Filtering|
|Feb 10||Sampling, Frequency, and Filtering|
|Feb 17||Blending and Compositing|
|Feb 22||Blending and Compositing|
|Feb 24||Point processing and Image Warping|
|Mar 1||Homographies and Mosaics|
|Mar 3||Automatic Image Alignment and RANSAC|
|Mar 8||Automatic Image Alignment and RANSAC|
|Mar 15||Spring Break -- No Class|
|Mar 17||Spring Break -- No Class|
|Mar 24||Modeling Light and Lightfields|
|Mar 31||Image Retargeting|
|Apr 5||Image Retargeting|
|Apr 7||Image Morphing|
|Apr 12||HDR & Tonemapping|
|Apr 14||HDR & Tonemapping|
|Apr 19||Video Textures|
|Apr 21||Texture Synthesis and Filling|
|Apr 26||Image Analogies and Scene Completion|
|Apr 28||Coded Exposures and Apertures|
|May 3||Redefined Day -- No Class|
*The schedule is subject to change during the semester.
The slides in this class are heavily based on the slides of other instructors. Specifically, many slides are exact or modified versions of slides by Alexei A. Efros, James Hays, and Rob Fergus, who in turn used materials from Steve Seitz, Rick Szeliski, Paul Debevec, Stephen Palmer, Paul Heckbert, David Forsyth, Steve Marschner, Fredo Durand, Bill Freeman, and others, as noted in the slides. The instructor gives full permission to use these slides for academic and research purposes, but please maintain all the acknowledgements.