Strategic Growth Area: Creativity and Innovation (C&I)

C&I is the refinement of two earlier SGAs: Creative Technologies and Experiences, and Innovation and Entrepreneurship. C&I melds the exploration of innovative technologies and the design of creative experiences with best practices for developing impact-driven and meaningful outcomes and solutions. C&I builds and strengthens creative communities, supports economic development, and enhances quality of life through self-sustaining and entrepreneurial activities.

The Creative Technologies and Experiences (CT+E) Strategic Growth Area develops 21st-century transdisciplinarians who are well versed in the unique processes of collaborative environments and whose creative portfolios and capstone projects address existing real-world opportunities or generate new ones. CT+E exists at the technology-mediated intersection of the arts, design, science, and engineering. Participants are uniquely empowered to focus on and holistically explore opportunities while developing an integrative approach to thinking and problem solving.

The Innovation and Entrepreneurship SGA was described as follows: "Working across all disciplines, we strive to address problems, innovate solutions, and make an impact through entrepreneurial ventures... We create an atmosphere and culture that unleashes creativity, sparks vision and innovation, and teaches the governing principles that are the foundation of every successful progressive enterprise. Our training, investments, and activities include discovery science, applied science, and processes related to commercialization/implementation and management – all in a global context and consistent with ethical principles."

Recent Submissions

Now showing 1 - 20 of 147
  • Programmable microbial ink for 3D printing of living materials produced from genetically engineered protein nanofibers
    Duraj-Thatte, Anna M.; Manjula-Basavanna, Avinash; Rutledge, Jarod; Xia, Jing; Hassan, Shabir; Sourlis, Arjirios; Rubio, Andres G.; Lesha, Ami; Zenkl, Michael; Kan, Anton; Weitz, David A.; Zhang, Yu Shrike; Joshi, Neel S. (Nature Portfolio, 2021-11-23)
    Living cells have the capability to synthesize molecular components and precisely assemble them from the nanoscale to build macroscopic living functional architectures under ambient conditions. The emerging field of living materials has leveraged microbial engineering to produce materials for various applications, but building 3D structures in arbitrary patterns and shapes has been a major challenge. Here we set out to develop a bioink, termed "microbial ink," that is produced entirely from genetically engineered microbial cells, programmed to perform a bottom-up, hierarchical self-assembly of protein monomers into nanofibers, and further into nanofiber networks that comprise extrudable hydrogels. We further demonstrate the 3D printing of functional living materials by embedding programmed Escherichia coli (E. coli) cells and nanofibers into microbial ink, which can sequester toxic moieties, release biologics, and regulate its own cell growth through the chemical induction of rationally designed genetic circuits. In this work, we present the advanced capabilities of nanobiotechnology and living materials technology to 3D-print functional living architectures. Living cells can precisely assemble to build 3D functional architectures. Here the authors produce an extrudable microbial ink entirely from engineered cells, which can be further programmed to 3D print functional living materials.
  • CPES Annual Report 2022
    (Virginia Tech, 2022)
    This book aims to be a comprehensive record of the Center’s accomplishments during the year 2021.
  • Microporous Multiresonant Plasmonic Meshes by Hierarchical Micro-Nanoimprinting for Bio-Interfaced SERS Imaging and Nonlinear Nano-Optics
    Garg, Aditya; Mejia, Elieser; Nam, Wonil; Nie, Meitong; Wang, Wei; Vikesland, Peter J.; Zhou, Wei (Wiley-V C H Verlag, 2022-04)
    Microporous mesh plasmonic devices have the potential to combine the biocompatibility of microporous polymeric meshes with the capabilities of plasmonic nanostructures to enhance nanoscale light-matter interactions for bio-interfaced optical sensing and actuation. However, scalable integration of dense and uniformly structured plasmonic hotspot arrays with microporous polymeric meshes remains challenging due to the processing incompatibility of conventional nanofabrication methods with flexible microporous substrates. Here, scalable nanofabrication of microporous multiresonant plasmonic meshes (MMPMs) is achieved via a hierarchical micro-/nanoimprint lithography approach using dissolvable polymeric templates. It is demonstrated that MMPMs can serve as broadband nonlinear nanoplasmonic devices to generate second-harmonic generation, third-harmonic generation, and upconversion photoluminescence signals with multiresonant plasmonic enhancement under fs pulse excitation. Moreover, MMPMs are employed and explored as bio-interfaced surface-enhanced Raman spectroscopy mesh sensors to enable in situ spatiotemporal molecular profiling of bacterial biofilm activity. Microporous mesh plasmonic devices open exciting avenues for bio-interfaced optical sensing and actuation applications, such as inflammation-free epidermal sensors in conformal contact with skin, combined tissue-engineering and biosensing scaffolds for in vitro 3D cell culture models, and minimally invasive implantable probes for long-term disease diagnostics and therapeutics.
  • Mixed reality based environment for learning sensing technology applications in construction
    Ogunseiju, Omobolanle O.; Akanmu, Abiola A.; Bairaktarova, Diana (2021-11)
    With the growing rate of adoption of sensing technologies in the construction industry, there is an increased need for a technically skilled workforce to successfully deploy these technologies on construction projects. Inspired by the opportunities offered by mixed reality, this paper presents the development and evaluation of a holographic learning environment that can afford learners an experiential opportunity to acquire competencies for implementing sensing systems on construction projects. To develop the content of the learning environment, construction industry practitioners and instructors were surveyed, and construction industry case studies on the applications of sensing technologies were explored. Findings of the surveys revealed a domain-specific skill gap in sensing technologies within the construction industry and informed the requirements of the learning environment. Based on these requirements, key characteristics of the learning environment were identified and employed in designing the environment. Still, a formative evaluation is important for developing an effective mixed reality learning environment for teaching domain-specific competencies, so it is imperative to understand the quality, appropriateness, and representativeness of the content of the learning environment. This paper therefore also presents a learnability assessment of the developed mixed reality learning environment, conducted through a focus group discussion with construction industry practitioners. Feedback was sought from the participants regarding how well the layout of the virtual environment reflects an actual construction site and the appropriateness of the represented construction applications. This study contributes to defining the domain-specific skills required of the future workforce for implementing sensing technologies in the construction industry and how such skills can be developed and enhanced within a mixed reality learning environment.
  • Effect of Collaboration Mode and Position Arrangement on Immersive Analytics Tasks in Virtual Reality: A Pilot Study
    Chen, Lei; Liang, Hai-Ning; Lu, Feiyu; Wang, Jialin; Chen, Wenjun; Yue, Yong (MDPI, 2021-11-08)
    [Background] Virtual reality (VR) technology can provide unique immersive experiences for group users, especially for analytics tasks with visual information in learning. Providing a shared control/view may improve task performance and enhance the user experience during VR collaboration. [Objectives] This research therefore explores the effect of collaboration mode and user position arrangement on task performance, user engagement, and collaboration behaviors and patterns in a VR learning environment that supports immersive collaborative tasks. [Method] The study involved two collaboration modes (shared and non-shared view and control) and three position arrangements (side-by-side, corner-to-corner, and back-to-back). A user study was conducted with 30 participants divided into three groups (Single, Shared, and Non-Shared) using a VR application that allowed users to explore the structural and transformational properties of 3D geometric shapes. [Results] The results showed that the shared mode led to higher task performance than single-user work for learning analytics tasks in VR. In addition, the side-by-side arrangement scored higher and was preferred for enhancing the collaborative experience. [Conclusion] The shared view appears more suitable for improving task performance in collaborative VR, and the side-by-side position may provide a better user experience when collaborating in VR learning. From these results, a set of guidelines for the design of collaborative visualizations for VR environments is distilled and presented at the end of the paper. Although our experiment is based on a colocated setting with two users, the results are applicable to both colocated and distributed collaborative scenarios with two or more users.
  • L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance
    Tsoukalas, Kyriakos D.; Kubalak, Joseph R.; Bukvic, Ivica Ico (ACM, 2018-06)
    Laptop orchestras create music, although digitally produced, in a collaborative live performance not unlike a traditional orchestra. The recent increase in interest and investment in this style of music creation has paved the way for novel methods for musicians to create and interact with music. To this end, a number of nontraditional instruments have been constructed that enable musicians to control sound production beyond pitch and volume, integrating filtering, musical effects, etc. Wii Remotes (WiiMotes) have seen heavy use in maker communities, including laptop orchestras, for their robust sensor array and low cost. However, the placement of sensors and the form factor of the device itself are suited for video games, not necessarily live music creation. In this paper, the authors present a new controller design, based on the WiiMote hardware platform, that addresses usability in gesture-centric music performance. Based on pilot-study data, the new controller offers unrestricted two-hand gesture production, a smaller footprint, and lower muscle strain.
  • Introducing D⁴: An Interactive 3D Audio Rapid Prototyping and Transportable Rendering Environment Using High Density Loudspeaker Arrays
    Bukvic, Ivica Ico (University of Michigan, 2016)
    With a growing number of multimedia venues and research spaces equipped with High Density Loudspeaker Arrays, there is a need for an integrative 3D audio spatialization system that offers both a scalable spatialization algorithm and a battery of supporting rapid prototyping tools for time-based editing, rendering, and interactive low-latency manipulation. The D⁴ library aims to fill this gap by introducing a Layer Based Amplitude Panning algorithm and a collection of rapid prototyping tools for 3D time-based audio spatialization and data sonification. The ensuing ecosystem is designed to be transportable and scalable: it supports a broad array of configurations, from monophonic to as many channels as the hardware can handle. D⁴’s rapid prototyping tools leverage oculocentric strategies for importing and spatially rendering multidimensional data and offer an array of new approaches to time-based spatial parameter manipulation and representation. The following paper presents the unique affordances of D⁴’s rapid prototyping tools.
  • Introducing a K-12 Mechatronic NIME Kit
    Tsoukalas, Kyriakos D.; Bukvic, Ivica Ico (ACM, 2018-06)
    The following paper introduces a new mechatronic NIME kit that uses new additions to the Pd-L2Ork visual programming environment and its K-12 learning module. It is designed to facilitate the creation of simple mechatronic systems for physical sound production in K-12 and production scenarios. The new set of objects builds on the existing support for the Raspberry Pi platform to also include the use of electric actuators via the microcomputer’s GPIO system. Moreover, we discuss implications of the newly introduced kit in creative and K-12 education scenarios by sharing observations from a series of pilot workshops, with a particular focus on using mechatronic NIMEs as a catalyst for the development of programming skills.
  • NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs
    McPherson, Andrew P.; Berdahl, Edgar; Lyons, Michael J.; Jensenius, Alexander Refsum; Bukvic, Ivica Ico; Knudson, Arve (ACM, 2016-07)
    This workshop will explore the potential creation of a community database of digital musical instrument (DMI) designs. In other research communities, reproducible research practices are common, including open-source software, open datasets, established evaluation methods and community standards for research practice. NIME could benefit from similar practices, both to share ideas amongst geographically distant researchers and to maintain instrument designs after their first performances. However, the needs of NIME are different from other communities on account of NIME's reliance on custom hardware designs and the interdependence of technology and arts practice. This half-day workshop will promote a community discussion of the potential benefits and challenges of a DMI repository and plan concrete steps toward its implementation.
  • Introducing Locus: a NIME for Immersive Exocentric Aural Environments
    Sardana, Disha; Joo, Woohun; Bukvic, Ivica Ico; Earle, Gregory D. (ACM, 2019-06)
    Locus is a NIME designed specifically for an interactive, immersive high density loudspeaker array environment. The system is based on a pointing mechanism to interact with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system, and the spatial interaction utilizes motion capture, so it does not require a screen. Instead it is completely controlled via hand gestures using a glove that is populated with motion-tracking markers. The main purpose of this system is to offer intuitive physical interaction with the perimeter-based spatial sound sources. Further, its goal is to minimize user-worn technology and thereby enhance freedom of motion by utilizing environmental sensing devices, such as motion capture cameras or infrared sensors. The ensuing creativity enabling technology is applicable to a broad array of possible scenarios, from researching limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. Below we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
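The paper does not reproduce its selection code; as a rough illustration of the underlying idea — mapping a motion-captured pointing direction to one loudspeaker in a perimeter array — one might do something like the following sketch. The speaker layout, function name, and nearest-speaker selection rule are all illustrative assumptions, not Locus's published implementation.

```python
import math

def nearest_speaker(pointing, speaker_dirs):
    """Return the index of the speaker whose direction is angularly
    closest to the pointing ray (largest dot product of unit vectors).
    `pointing` and each entry of `speaker_dirs` are 3D vectors."""
    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    p = unit(pointing)
    dots = [sum(a * b for a, b in zip(p, unit(s))) for s in speaker_dirs]
    return max(range(len(dots)), key=dots.__getitem__)

# A hypothetical ring of 8 speakers at ear height (Locus uses 128):
ring = [(math.cos(2 * math.pi * k / 8), math.sin(2 * math.pi * k / 8), 0.0)
        for k in range(8)]
```

In practice the pointing vector would come from the glove's motion-tracking markers rather than being supplied directly, and a real system would likely spread energy over several neighboring speakers instead of picking a single one.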
  • 3D Time-Based Aural Data Representation Using D⁴ Library’s Layer Based Amplitude Panning Algorithm
    Bukvic, Ivica Ico (Georgia Institute of Technology, 2016-07)
    The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D⁴ library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and support a broad array of configurations, with a particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through sound in HDLA scenarios. The ensuing D⁴ ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data and offers unique opportunities for furthering research in spatial data audification and sonification, as well as transportable and scalable spatial media creation and production.
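The abstract describes Layer Based Amplitude Panning only at a high level. A minimal sketch of the core idea — distributing a source's gain between the two loudspeaker layers that bracket its elevation — might look like this; the linear cross-fade and the function name are assumptions for illustration, not the published algorithm:

```python
def layer_weights(src_elev, layer_elevs):
    """Distribute a source's gain across loudspeaker layers by elevation
    (degrees). A source below the lowest layer or above the highest is
    assigned entirely to that boundary layer; otherwise the two
    bracketing layers share the gain via a linear cross-fade."""
    layers = sorted(layer_elevs)
    if src_elev <= layers[0]:
        return {layers[0]: 1.0}
    if src_elev >= layers[-1]:
        return {layers[-1]: 1.0}
    for lo, hi in zip(layers, layers[1:]):
        if lo <= src_elev <= hi:
            t = (src_elev - lo) / (hi - lo)
            return {lo: 1.0 - t, hi: t}
```

Within each layer, the per-speaker gains would then be computed by a conventional azimuth panning law; that second stage is omitted here.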
  • Cinemacraft: Immersive Live Machinima as an Empathetic Musical Storytelling Platform
    Narayanan, Siddhart; Bukvic, Ivica Ico (University of Michigan, 2016)
    In the following paper we present Cinemacraft, a technology-mediated immersive machinima platform for collaborative performance and musical human-computer interaction. To achieve this, Cinemacraft innovates upon a reverse-engineered version of Minecraft, offering a unique collection of live machinima production tools and a newly introduced Kinect HD module that allows for embodied interaction, including posture, arm movement, facial expressions, and lip syncing based on captured voice input. The result is a malleable and accessible sensory fusion platform capable of delivering compelling live immersive and empathetic musical storytelling that, through the use of low-fidelity avatars, also successfully sidesteps the uncanny valley.
  • New Interfaces for Spatial Musical Expression
    Bukvic, Ivica Ico; Sardana, Disha; Joo, Woohun (ACM, 2020-07)
    With the proliferation of venues equipped with high-density loudspeaker arrays, there is a growing interest in developing new interfaces for spatial musical expression (NISME). Of particular interest are interfaces that focus on the emancipation of the spatial domain as the primary dimension for musical expression. Here we present the Monet NISME, which leverages a multitouch pressure-sensitive surface and the D⁴ library’s spatial mask, thereby allowing for a unique approach to interactive spatialization. Further, we present a study with 22 participants designed to assess its usefulness and compare it to Locus, a NISME introduced in 2019 as part of a localization study and built on the same design principles of natural gestural interaction with spatial content. Lastly, we briefly discuss the utilization of both NISMEs in two artistic performances and propose a set of guidelines for further exploration in the NISME domain.
  • A Single-Actuated, Cable-Driven, and Self-Contained Robotic Hand Designed for Adaptive Grasps
    Nikafrooz, Negin; Leonessa, Alexander (MDPI, 2021-09-23)
    Developing a dexterous robotic hand that mimics natural human hand movements is challenging due to complicated hand anatomy. Such a practical design should address several requirements, which are often conflicting and force the designer to prioritize the main design characteristics for a given application. Therefore, in the existing designs the requirements are only partially satisfied, leading to complicated and bulky solutions. To address this gap, a novel single-actuated, cable-driven, and self-contained robotic hand is presented in this work. This five-fingered robotic hand supports 19 degrees of freedom (DOFs) and can perform a wide range of precision and power grasps. The external structure of fingers and the thumb is inspired by Pisa/IIT SoftHand, while major modifications are implemented to significantly decrease the number of parts and the effect of friction. The cable configuration is inspired by the tendon structure of the hand anatomy. Furthermore, a novel power transmission system is presented in this work. This mechanism addresses compactness and underactuation, while ensuring proper force distribution through the fingers and the thumb. Moreover, this power transmission system can achieve adaptive grasps of objects with unknown geometries, which significantly simplifies the sensory and control systems. A 3D-printed prototype of the proposed design is fabricated and its base functionality is evaluated through simulations and experiments.
  • Improving Autonomous Robotic Navigation Using Imitation Learning
    Cèsar-Tondreau, Brian; Warnell, Garrett; Stump, Ethan; Kochersberger, Kevin B.; Waytowich, Nicholas R. (Frontiers Media, 2021-06-01)
    Autonomous navigation to a specified waypoint is traditionally accomplished with a layered stack of global path planning and local motion planning modules that generate feasible and obstacle-free trajectories. While these modules can be modified to meet task-specific constraints and user preferences, current modification procedures require substantial effort on the part of an expert roboticist with a great deal of technical training. In this paper, we simplify this process by inserting a Machine Learning module between the global path planning and local motion planning modules of an off-the-shelf navigation stack. This model can be trained with human demonstrations of the preferred navigation behavior, using a training procedure based on Behavioral Cloning, allowing for an intuitive modification of the navigation policy by non-technical users to suit task-specific constraints. We find that our approach can successfully adapt a robot’s navigation behavior to become more like that of a demonstrator. Moreover, for a fixed amount of demonstration data, we find that the proposed technique compares favorably to recent baselines with respect to both navigation success rate and trajectory similarity to the demonstrator.
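The learned module in the paper is a neural network trained on human demonstrations; as a toy illustration of the behavioral-cloning principle (not the authors' model, data, or navigation stack), the following fits a one-parameter linear "policy" to demonstrated observation–action pairs by gradient descent on squared error:

```python
def behavioral_cloning(demos, lr=0.1, epochs=200):
    """Fit action = w * obs to (obs, action) demonstration pairs by
    minimizing mean squared error with plain gradient descent.
    Returns the learned weight w."""
    w = 0.0
    n = len(demos)
    for _ in range(epochs):
        # d/dw of mean (w*obs - act)^2, averaged over demonstrations
        grad = sum(2 * (w * obs - act) * obs for obs, act in demos) / n
        w -= lr * grad
    return w

# A hypothetical demonstrator who always steers twice as hard as
# the observed heading error:
demos = [(x, 2.0 * x) for x in (-1.0, -0.5, 0.5, 1.0)]
```

The same supervised-regression loop scales up directly: replace the scalar weight with a neural network and the scalar observation with the robot's sensor and planner state.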
  • Feasibility and accuracy of 3D printed patient-specific skull contoured brain biopsy guides
    Shinn, Richard L.; Park, Clair; DeBose, Kyrille; Hsu, Fang-Chi; Cecere, Thomas E.; Rossmeisl, John H. Jr. (2021-07)
    Objective: To design 3D printed skull contoured brain biopsy guides (3D-SCGs) from computed tomography (CT) or T1-weighted magnetic resonance imaging (T1W MRI). Study design: Feasibility study. Sample population: Five beagle dog cadavers and two client-owned dogs with brain tumors. Methods: Helical CT and T1W MRI were performed on cadavers. The planned target point was the head of the caudate nucleus. 3D-SCGs were created from CT and MRI using commercially available open-source software. Using the 3D-SCGs, biopsy needles were placed into the caudate nucleus in cadavers, and CT was performed to assess needle placement accuracy, followed by histopathology. 3D-SCGs were then created and used to perform in vivo brain tumor biopsies. Results: No statistical difference was found between the planned target point and the needle placement. Median needle placement error for all planned target points was 2.7 mm (range: 0.86-4.5 mm). No difference in accuracy was detected between MRI- and CT-designed 3D-SCGs. Median needle placement error was 2.8 mm (range: 0.86-4.5 mm) for CT and 2.2 mm (range: 1.7-2.7 mm) for MRI. Biopsy needles were successfully placed into the target in the two dogs with brain tumors, and a biopsy was successfully acquired in one dog. Conclusion: 3D-SCGs designed from CT or T1W MRI allowed needle placement within 4.5 mm of the intended target in all procedures, resulting in successful biopsy in one of two live dogs. Clinical significance: This feasibility study justifies further evaluation of 3D-SCGs as alternatives in facilities that do not have access to stereotactic brain biopsy.
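For reference, the accuracy metric used here — Euclidean distance between each planned target point and the achieved needle tip, summarized by the median — can be sketched in a few lines (the coordinates below are illustrative, not the study's data):

```python
import math
from statistics import median

def placement_errors(planned, actual):
    """Euclidean distance (same units as the coordinates, e.g. mm)
    between each planned target point and the achieved needle tip."""
    return [math.dist(p, a) for p, a in zip(planned, actual)]

# Hypothetical planned targets vs. achieved tip positions (mm):
planned = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]
actual = [(1, 0, 0), (10, 2, 0), (0, 10, 3)]
errors = placement_errors(planned, actual)
```

The median is reported with the range because a handful of needle placements per modality makes a mean overly sensitive to single outliers.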
  • CPES Annual Report 2021
    (Virginia Tech, 2021)
    This book is a comprehensive record of the center’s accomplishments during the year 2020.
  • Bioactive Cellulose Nanocrystal-Poly(epsilon-Caprolactone) Nanocomposites for Bone Tissue Engineering Applications
    Hong, Jung Ki; Cooke, Shelley L.; Whittington, Abby R.; Roman, Maren (2021-02-25)
    3D-printed bone scaffolds hold great promise for the individualized treatment of critical-size bone defects. Among the resorbable polymers available for use as 3D-printable scaffold materials, poly(epsilon-caprolactone) (PCL) has many benefits. However, its relatively low stiffness and lack of bioactivity limit its use in load-bearing bone scaffolds. This study tests the hypothesis that surface-oxidized cellulose nanocrystals (SO-CNCs), decorated with carboxyl groups, can act as multi-functional scaffold additives that (1) improve the mechanical properties of PCL and (2) induce biomineral formation upon PCL resorption. To this end, an in vitro biomineralization study was performed to assess the ability of SO-CNCs to induce the formation of calcium phosphate minerals. In addition, PCL nanocomposites containing different amounts of SO-CNCs (1, 2, 3, 5, and 10 wt%) were prepared using melt compounding extrusion and characterized in terms of Young's modulus, ultimate tensile strength, crystallinity, thermal transitions, and water contact angle. Neither sulfuric acid-hydrolyzed CNCs (SH-CNCs) nor SO-CNCs were toxic to MC3T3 preosteoblasts during a 24 h exposure at concentrations ranging from 0.25 to 3.0 mg/mL. SO-CNCs were more effective at inducing mineral formation than SH-CNCs in simulated body fluid (1x). An SO-CNC content of 10 wt% in the PCL matrix caused a more than 2-fold increase in Young's modulus (stiffness) and a more than 60% increase in ultimate tensile strength. The matrix glass transition and melting temperatures were not affected by the SO-CNCs but the crystallization temperature increased by about 5.5 degrees C upon addition of 10 wt% SO-CNCs, the matrix crystallinity decreased from about 43 to about 40%, and the water contact angle decreased from 87 to 82.6 degrees. 
The abilities of SO-CNCs to induce calcium phosphate mineral formation and increase the Young's modulus of PCL render them attractive for applications as multi-functional nanoscale additives in PCL-based bone scaffolds.
  • An elbow exoskeleton for haptic feedback made with a direct drive hobby motor
    Kim, Hubert; Asbeck, Alan T. (Elsevier, 2020)
    A direct drive motor is one of the simplest mechanisms that can be used to move a mechanical joint. In particular, a brushless direct current (BLDC) motor with no gearing produces a low parasitic torque due to its backdrivability and low inertia, which is ideal for some applications such as wearable systems. While capable of operating with a higher power density than brushed motors, BLDC motors require accurate position feedback to be controlled via vector control at slow speeds. The MotorWare™ library from Texas Instruments (TI), designed to run on a C2000 microcontroller, provides this control for BLDCs. However, the code was written to run the motor continuously with an incremental encoder and requires further engineering to be used at low speeds, such as in an exoskeleton. In this paper, we present the design of an elbow exoskeleton that can be used for haptic feedback. We provide instructions to build the exoskeleton hardware, custom code that modifies the software provided by TI so that a motor can provide a controlled torque at low speeds, code that enables the microcontroller to communicate with a computer for high-level commands and data storage, and an overview of how alternate motors could be used with this software setup.
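TI's MotorWare sources are not reproduced here; as a generic sketch of the low-speed torque-control idea — regulating the torque-producing (q-axis) current with a PI loop whose output voltage command is saturated — one might write the following. The gains, limits, and first-order plant model are invented for illustration and are not the paper's firmware:

```python
class PICurrentLoop:
    """PI regulator for the q-axis current; the output is a clamped
    voltage command. Since torque is roughly proportional to i_q in a
    BLDC, tracking a current reference amounts to commanding a torque."""
    def __init__(self, kp, ki, dt, v_limit):
        self.kp, self.ki, self.dt, self.v_limit = kp, ki, dt, v_limit
        self.integral = 0.0

    def step(self, i_ref, i_meas):
        err = i_ref - i_meas
        self.integral += err * self.dt
        v = self.kp * err + self.ki * self.integral
        return max(-self.v_limit, min(self.v_limit, v))

def simulate(loop, i_ref, r=1.0, l=0.01, dt=1e-4, steps=5000):
    """Crude first-order R-L phase model, l * di/dt = v - r * i,
    stepped with forward Euler; returns the final current."""
    i = 0.0
    for _ in range(steps):
        v = loop.step(i_ref, i)
        i += (v - r * i) / l * dt
    return i
```

A real implementation adds the Park/Clarke transforms, rotor-angle feedback from the encoder, and a second loop for the d-axis current, which is what makes accurate low-speed position sensing essential.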
  • A Novel Method and Exoskeletons for Whole-Arm Gravity Compensation
    Hull, Joshua; Turner, Ranger; Simon, Athulya A.; Asbeck, Alan T. (IEEE, 2020-08-17)
    We present a new method for providing gravity compensation to a human or robot arm. This method allows the arm to be supported in any orientation and also allows for the support of a load held in the hand. We accomplish this with a pantograph, whereby one portion of the linkage duplicates the arm's geometry and another portion contains a scaled copy of the arm. Forces applied to the scaled copy are transferred back to the original arm. We implement these concepts with two exoskeletons: the Panto-Arm Exo, a low-profile exoskeleton that supports the arm's weight, and the Panto-Tool Exo, which supports a mass held in the hand. We present two linkages used for pantographs and analyze how different linkage dimensions and their positioning relative to the body affect the forces providing gravity compensation. We also measured the effect of the Panto-Arm exoskeleton on fourteen subjects' arm muscles during static holding tasks and a task in which subjects drew horizontal and vertical lines on a whiteboard. Even though the Panto-Arm Exo linkage geometry and forces were not optimized, it reduced mid deltoid activity by 33-43% and biceps brachii activity by up to 52% in several arm postures.
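The pantograph analysis itself is in the paper; for context, the static joint torques that any gravity-compensation device must offset for a planar two-link arm follow from standard statics. The sketch below uses made-up masses and link lengths, not the paper's parameters:

```python
import math

def gravity_torques(t1, t2, m1=2.0, m2=1.5, l1=0.3,
                    lc1=0.15, lc2=0.17, g=9.81):
    """Shoulder (tau1) and elbow (tau2) torques in N*m needed to hold a
    planar two-link arm static against gravity. Angles t1, t2 are in
    radians, measured from horizontal; m* are link masses (kg), l1 is
    the upper-arm length, lc* are center-of-mass offsets (m)."""
    # Elbow supports only the forearm's weight about the elbow axis.
    tau2 = m2 * lc2 * g * math.cos(t1 + t2)
    # Shoulder supports both links: upper-arm mass plus the forearm
    # acting at the elbow, plus the forearm's own elbow moment.
    tau1 = (m1 * lc1 + m2 * l1) * g * math.cos(t1) + tau2
    return tau1, tau2
```

With the arm hanging straight up or down, the cosine terms vanish and no support torque is needed; the torques peak with the arm horizontal, which is why the measured muscle-activity reductions are reported for static holding postures.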