Learning Consistent Visual Synthesis
dc.contributor.author | Gao, Chen | en |
dc.contributor.committeechair | Huang, Jia-Bin | en |
dc.contributor.committeemember | Dhillon, Harpreet Singh | en |
dc.contributor.committeemember | Kopf, Johannes Peter | en |
dc.contributor.committeemember | Huang, Bert | en |
dc.contributor.committeemember | Abbott, A. Lynn | en |
dc.contributor.department | Electrical and Computer Engineering | en |
dc.date.accessioned | 2022-08-23T08:00:27Z | en |
dc.date.available | 2022-08-23T08:00:27Z | en |
dc.date.issued | 2022-08-22 | en |
dc.description.abstract | With the rapid development of photography, we can easily record the 3D world by taking photos and videos. In traditional images and videos, the viewer observes the scene from a fixed viewpoint and cannot navigate the scene or edit the 2D observation afterward. Visual content editing and synthesis have therefore become essential tasks in computer vision. However, achieving high-quality visual synthesis often requires a complex and expensive multi-camera setup, which is not practical for daily use because most people have only a single cellphone camera. Yet a single camera cannot provide enough multi-view constraints to synthesize consistent visual content. In this thesis, I address this challenging single-camera visual synthesis problem by leveraging different regularizations. I study three consistent synthesis problems: time-consistent synthesis, view-consistent synthesis, and view-time-consistent synthesis. I show how we can take cellphone-captured monocular images and videos as input to model the scene and consistently synthesize new content for an immersive viewing experience. | en |
dc.description.abstractgeneral | With the rapid development of photography, we can easily record the 3D world by taking photos and videos. More recently, the incredible cameras on cell phones enable us to take professional-level photos and videos, and these powerful cellphones even have advanced computational photography features built in. However, these features focus on faithfully recording the world during capture. We can only view photos and videos as they are; we cannot navigate the scene, edit the 2D observation, or synthesize new content afterward. Visual content editing and synthesis have therefore become essential tasks in computer vision. Achieving high-quality visual synthesis often requires a complex and expensive multi-camera setup, which is not practical for daily use because most people have only a single cellphone camera. Yet a single camera alone is not enough to synthesize consistent visual content. In this thesis, I address this challenging single-camera visual synthesis problem by leveraging different regularizations. I study three consistent synthesis problems: time-consistent synthesis, view-consistent synthesis, and view-time-consistent synthesis. I show how we can take cellphone-captured monocular images and videos as input to model the scene and consistently synthesize new content for an immersive viewing experience. | en |
dc.description.degree | Doctor of Philosophy | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:35023 | en |
dc.identifier.uri | http://hdl.handle.net/10919/111588 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | Creative Commons Attribution 4.0 International | en |
dc.rights.uri | http://creativecommons.org/licenses/by/4.0/ | en |
dc.subject | Computer vision | en |
dc.subject | Computational photography | en |
dc.subject | View synthesis | en |
dc.subject | Temporal consistency | en |
dc.title | Learning Consistent Visual Synthesis | en |
dc.type | Dissertation | en |
thesis.degree.discipline | Computer Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |