Environment Mapping in Larger Spaces
dc.contributor.author | Ciambrone, Andrew James | en |
dc.contributor.committeechair | Gracanin, Denis | en |
dc.contributor.committeemember | North, Christopher L. | en |
dc.contributor.committeemember | Ogle, J. Todd | en |
dc.contributor.department | Computer Science | en |
dc.date.accessioned | 2017-02-09T18:28:34Z | en |
dc.date.available | 2017-02-09T18:28:34Z | en |
dc.date.issued | 2017-02-09 | en |
dc.description.abstract | Spatial mapping or environment mapping is the process of exploring a real-world environment and creating its digital representation. To create convincing mixed reality programs, an environment mapping device must be able to detect a user's position and map the user's environment. Currently available commercial spatial mapping devices mostly use infrared cameras to obtain a depth map, which is effective only for short to medium distances (3-4 meters). This work describes an extension to existing environment mapping devices and techniques to enable mapping of larger architectural environments, using a combination of a camera, an Inertial Measurement Unit (IMU), and Light Detection and Ranging (LIDAR) devices supported by sensor fusion and computer vision techniques. There are three main parts to the proposed system: the first is data collection and data fusion using embedded hardware, the second is data processing (segmentation), and the third is creating a geometry mesh of the environment. The developed system was evaluated on its ability to determine the dimensions of a room and of objects within the room. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens. | en |
dc.description.abstractgeneral | Mixed reality is the mixing of computer-generated graphics and real-world objects to create an augmented view of a space. Environment mapping, the process of creating a digital representation of an environment, is used in mixed reality applications so that virtual objects can logically interact with the physical environment. Most current approaches to this problem work only for short to medium distances. This work describes an extension to existing devices and techniques to enable mapping of larger architectural spaces. The developed system was evaluated on its ability to determine the dimensions of a room and of objects within the room. Under adequate conditions, the system was able to determine the dimensions of a room with an error of less than twenty percent and the dimensions of objects with an error of less than five percent. This low-cost system can significantly expand the mapping range of existing mixed reality devices such as the Microsoft HoloLens, allowing more diverse mixed reality applications to be developed and used. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:9480 | en |
dc.identifier.uri | http://hdl.handle.net/10919/74984 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Sensor Fusion | en |
dc.subject | Environment Mapping | en |
dc.subject | Image Processing | en |
dc.subject | Computer Vision | en |
dc.title | Environment Mapping in Larger Spaces | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Science and Applications | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |
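The abstract describes fusing IMU orientation with LIDAR range readings to place measurements in a common world frame. As a minimal sketch of that idea (not the thesis's actual implementation; the function names, the forward-firing +x LIDAR axis, and the unit-quaternion orientation input are all illustrative assumptions), a single range reading can be rotated by the IMU attitude and offset by the sensor position:

```python
import math

def quat_to_matrix(w, x, y, z):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def lidar_point_world(range_m, sensor_pos, quat):
    """Project one LIDAR range reading into world coordinates.

    Assumes the LIDAR fires along the sensor's local +x axis, `quat`
    is the IMU orientation as a unit quaternion (w, x, y, z), and
    `sensor_pos` is the sensor's position in the world frame.
    """
    R = quat_to_matrix(*quat)
    local = (range_m, 0.0, 0.0)  # reading expressed in the sensor frame
    # Rotate into the world frame and translate by the sensor position.
    return tuple(
        sensor_pos[i] + sum(R[i][j] * local[j] for j in range(3))
        for i in range(3)
    )

# A 5 m reading taken while yawed 90 degrees about the vertical axis
# lands along the world +y axis rather than +x.
s = math.sqrt(0.5)
point = lidar_point_world(5.0, (0.0, 0.0, 0.0), (s, 0.0, 0.0, s))
```

Accumulating such points over many orientations yields the point cloud that the segmentation and mesh-generation stages mentioned in the abstract would consume.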