Browsing by Author "Shukla, Sandeep K."
Now showing 1 - 20 of 87
- Abstraction Guided Semi-formal Verification. Parikh, Ankur (Virginia Tech, 2007-06-15). Abstraction-guided simulation is a promising semi-formal framework for design validation in which an abstract model of the design is used to guide a logic simulator towards a target property. However, key issues still need to be addressed before this framework can truly deliver on its promise. Concretizing, or finding a real trace from an abstract trace, remains a hard problem. Abstract traces are often spurious, meaning that no corresponding real trace exists. This is a direct consequence of the abstraction being an over-approximation of the real design. Further, the way in which the abstract model is constructed is an open-ended problem that has a great impact on the performance of the simulator. In this work, we propose novel approaches to address these issues. First, we present a genetic algorithm to select sets of state variables directly from the gate-level netlist of the design that are highly correlated with the target property. The sets of selected variables are used to build Partition Navigation Tracks (PNTs). PNTs capture the behavior of expanded portions of the state space as they relate to the target property. Moreover, the computation and storage costs of the PNTs are small, so they scale well to large designs. Our experiments show that we are able to reach many more hard-to-reach states using our proposed techniques, compared to state-of-the-art methods. Next, we propose a novel abstraction strengthening technique, where the abstract design is constrained to make it more closely resemble the concrete design. Abstraction strengthening greatly reduces the need to refine the abstract model for hard-to-reach properties. To achieve this, we efficiently identify sequentially unreachable partial states in the concrete design via intelligent partitioning, resolution, and cube enlargement. These partial states are then added as constraints in the abstract model. Our experiments show that the cost to compute these constraints is low and that the abstract traces obtained from the strengthened abstract model are far easier to concretize.
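To make the variable-selection step above concrete, here is a minimal sketch of a genetic algorithm that picks a small set of state variables whose values best separate target-hitting states from the rest. The trace data, the target property, and the fitness scoring are synthetic stand-ins invented for illustration, not the dissertation's tool.

```python
import random

random.seed(1)
NUM_VARS, TRACE_LEN, SET_SIZE = 32, 200, 4

# Synthetic simulation trace: each row is a sampled state; 'hit' marks target states.
trace = [[random.randint(0, 1) for _ in range(NUM_VARS)] for _ in range(TRACE_LEN)]
hit = [int(row[3] and row[7] and not row[11]) for row in trace]   # assumed target property

def fitness(vars_subset):
    """Score how well the selected variables separate hit from non-hit states."""
    buckets = {}
    for row, h in zip(trace, hit):
        key = tuple(row[v] for v in vars_subset)
        buckets.setdefault(key, [0, 0])[h] += 1
    # Reward partitions whose cells are dominated by one class (pure cells).
    return sum(abs(c[1] - c[0]) for c in buckets.values()) / TRACE_LEN

def mutate(ind):
    out = list(ind)
    out[random.randrange(SET_SIZE)] = random.randrange(NUM_VARS)
    return tuple(sorted(set(out))) if len(set(out)) == SET_SIZE else ind

population = [tuple(random.sample(range(NUM_VARS), SET_SIZE)) for _ in range(30)]
for gen in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                       # truncation selection
    children = [mutate(random.choice(parents)) for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print("selected state variables:", best, "fitness:", round(fitness(best), 3))
```

In a real flow the fitness function would be driven by simulation of the actual netlist rather than random data; the loop structure is the only part meant to carry over.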
- Accelerating Hardware Simulation on Multi-cores. Nanjundappa, Mahesh (Virginia Tech, 2010-05-04). Electronic design automation (EDA) tools play a central role in bridging the productivity gap for designing complex hardware systems. However, with the increase in the size and complexity of today's design requirements, current methodologies and EDA tools are unable to keep the productivity gap from widening further. It is estimated that testing and verification take about two-thirds of the total development time of complex hardware systems. Functional simulation is the mainstay of the testing and verification process and the most widely used technique for both. Most simulation algorithms and their implementations are designed for uniprocessor systems and cannot easily leverage the parallelism in multi-core and GPU platforms. For example, logic simulation often uses levelized sequential algorithms, whereas the discrete-event simulation frameworks for Verilog, VHDL, and SystemC employ concurrency in the form of multi-threading to give an illusion of the inherent parallelism present in circuits. However, the discrete-event model of computation requires a global notion of an event queue, which makes improving its simulation performance via parallelization even more challenging. This work investigates automatic parallelization of simulation algorithms used to simulate hardware models. In particular, we focus on parallelizing the simulation of hardware designs described at the RTL using SystemC/HDL, with examples to clearly describe the parallelization. Even though multi-cores and GPUs offer parallelism, efficiently exploiting this parallelism with their programming models is not straightforward. To overcome this, we also focus our research on building intelligent translators that map simulation applications onto multi-cores and GPUs such that the complexity of the low-level programming models is hidden from the designers.
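The levelized simulation mentioned above is the natural source of parallelism here: once gates are sorted into levels, all gates within one level are independent. The following sketch, over a made-up four-gate netlist (not from the dissertation), evaluates each level with a thread pool to show where the parallel work sits.

```python
from concurrent.futures import ThreadPoolExecutor

# gate name -> (function, input names); an invented toy netlist
netlist = {
    "n1": (lambda a, b: a & b, ("a", "b")),
    "n2": (lambda a, b: a | b, ("b", "c")),
    "n3": (lambda a, b: a ^ b, ("n1", "n2")),
    "out": (lambda a, b: 1 - (a & b), ("n3", "c")),   # NAND
}
primary_inputs = {"a": 1, "b": 0, "c": 1}

def levelize(netlist, inputs):
    """Assign each gate a level so that all of its inputs come from earlier levels."""
    level = {name: 0 for name in inputs}
    pending = dict(netlist)
    while pending:
        for g, (_, ins) in list(pending.items()):
            if all(i in level for i in ins):
                level[g] = 1 + max(level[i] for i in ins)
                del pending[g]
    return level

values = dict(primary_inputs)
levels = levelize(netlist, primary_inputs)
with ThreadPoolExecutor() as pool:
    for lvl in range(1, max(levels[g] for g in netlist) + 1):
        gates = [g for g in netlist if levels[g] == lvl]
        # gates at this level read only already-computed values, so they can run in parallel
        results = pool.map(lambda g: (g, netlist[g][0](*(values[i] for i in netlist[g][1]))), gates)
        values.update(dict(results))
print(values["out"])
```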
- Acceleration of Hardware Testing and Validation Algorithms using Graphics Processing Units. Li, Min (Virginia Tech, 2012-09-17). With the advances of very large scale integration (VLSI) technology, the feature size has been shrinking steadily together with the increase in the design complexity of logic circuits. As a result, the effort required for designing, testing, and debugging digital systems has increased tremendously. Although electronic design automation (EDA) algorithms have been studied extensively to accelerate such processes, some computationally intensive applications still take long execution times. This is especially the case for testing and validation. In order to meet time-to-market constraints and to arrive at a bug-free design or product, the work presented in this dissertation studies the acceleration of EDA algorithms on Graphics Processing Units (GPUs). This dissertation concentrates on a subset of EDA algorithms related to testing and validation. In particular, within the area of testing, fault simulation, diagnostic simulation, and reliability analysis are explored. We also investigate approaches to parallelize state justification on GPUs, which is one of the most difficult problems in the validation area. First, we present an efficient parallel fault simulator, FSimGP2, which exploits the high degree of parallelism supported by a state-of-the-art graphics processing unit (GPU) with the NVIDIA Compute Unified Device Architecture (CUDA). A novel three-dimensional parallel fault simulation technique is proposed to achieve extremely high computational efficiency on the GPU. The experimental results demonstrate a speedup of up to 4x compared to another GPU-based fault simulator. Then, another GPU-based simulator is used to tackle an even more computation-intensive task, diagnostic fault simulation. The simulator is based on a two-stage framework which exploits high computational efficiency on the GPU. We introduce a fault-pair-based approach to alleviate the limited memory capacity on GPUs. Also, multi-fault-signature and dynamic load balancing techniques are introduced to make the best use of on-board computing resources. With continued feature size scaling and the advent of innovative nano-scale devices, the reliability analysis of digital systems is becoming more important. However, the computational cost to accurately analyze a large digital system is very high. We propose a high-performance reliability analysis tool on GPUs. To achieve high memory bandwidth on GPUs, two algorithms for simulation scheduling and memory arrangement are proposed. Experimental results demonstrate that the parallel analysis tool is efficient, reliable, and scalable. In the area of design validation, we investigate state justification. By employing swarm intelligence and the power of parallelism on GPUs, we are able to efficiently find a trace that helps reach corner cases during the validation of a digital system. In summary, the work presented in this dissertation demonstrates that several applications in the area of digital design testing and validation can be successfully rearchitected to achieve maximal performance on GPUs and obtain significant speedups. The proposed algorithms based on GPU parallelism collectively aim to improve the performance of EDA tools in the computer-aided design (CAD) community on GPUs and other many-core platforms.
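The data parallelism that a GPU fault simulator exploits across threads can be illustrated in miniature with pattern-parallel (bit-parallel) simulation: many test patterns are packed into the bits of one machine word, so every gate evaluation processes all patterns at once. The tiny circuit and stuck-at fault list below are invented for illustration; none of the CUDA-specific machinery of FSimGP2 is reproduced.

```python
import random

random.seed(0)
N_PATTERNS = 64
MASK = (1 << N_PATTERNS) - 1

# circuit: out = (a AND b) OR (NOT c), expressed as word-level operations
def simulate(a, b, c, fault=None):
    """fault = (net, stuck_value) forces one net to all-0s or all-1s."""
    def apply(net, val):
        if fault and fault[0] == net:
            return MASK if fault[1] else 0
        return val
    n1 = apply("n1", a & b)
    n2 = apply("n2", ~c & MASK)
    return apply("out", n1 | n2)

# pack 64 random patterns per input into a single word each
a = random.getrandbits(N_PATTERNS)
b = random.getrandbits(N_PATTERNS)
c = random.getrandbits(N_PATTERNS)

good = simulate(a, b, c)
faults = [("n1", 0), ("n1", 1), ("n2", 0), ("n2", 1), ("out", 0), ("out", 1)]
for f in faults:
    detected = (simulate(a, b, c, f) ^ good) & MASK   # bit i set => pattern i detects fault f
    print(f, "detected by", bin(detected).count("1"), "of", N_PATTERNS, "patterns")
```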
- Adaptive Radio Resource Management in Cognitive Radio Communications using Fuzzy Reasoning. Shatila, Hazem Sarwat (Virginia Tech, 2012-03-20). As wireless technologies evolve, novel innovations and concepts are required to dynamically and automatically alter various radio parameters in accordance with the radio environment. These innovations open the door for cognitive radio (CR), a new concept in telecommunications. CR makes its decisions using an inference engine, which can learn and adapt to changes in radio conditions. Fuzzy logic (FL) is the proposed decision-making algorithm for controlling the CR's inference engine. Fuzzy logic is well suited for vague environments in which incomplete and heterogeneous information is present. In our proposed approach, FL is used to alter various radio parameters according to experience gained from different environmental conditions. FL requires a set of decision-making rules, which can vary according to radio conditions, but anomalies arise among these rules, causing degradation in the CR's performance. In such cases, the CR requires a method for eliminating such anomalies. In our model, we used a method based on the Dempster-Shafer (DS) theory of belief to accomplish this task. Through extensive simulation results and case studies, the use of the DS theory indeed improved the CR's decision-making capability. Using FL and the DS theory of belief is a vital module in the automation of various radio parameters for coping with the dynamic wireless environment. To demonstrate the FL inference engine, we propose a CR version of WiMAX, which we call CogMAX, to control different radio resources. Physical-layer parameters that can be altered for better results and performance include the channel estimation technique, the number of subcarriers used for channel estimation, the modulation technique, and the code rate.
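A minimal sketch of the fuzzy-reasoning step described above: triangular membership functions fuzzify a measured SNR, a small rule base maps SNR classes to a modulation choice, and a weighted average defuzzifies the result. The thresholds, rules, and modulation set are invented for illustration and are not the CogMAX rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def choose_modulation(snr_db):
    # fuzzify the crisp SNR measurement
    low  = tri(snr_db, -5, 5, 12)
    med  = tri(snr_db,  8, 15, 22)
    high = tri(snr_db, 18, 28, 40)
    # rule base: low SNR -> robust modulation, high SNR -> dense modulation
    rules = [(low, 1.0),    # BPSK  (1 bit/symbol used as the consequent)
             (med, 2.0),    # QPSK
             (high, 4.0)]   # 16-QAM
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules) or 1.0
    bits_per_symbol = num / den          # centroid-style defuzzification
    return {1: "BPSK", 2: "QPSK", 4: "16-QAM"}[
        min((1, 2, 4), key=lambda k: abs(k - bits_per_symbol))]

for snr in (3, 10, 16, 25):
    print(snr, "dB ->", choose_modulation(snr))
```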
- Advances in the Side-Channel Analysis of Symmetric Cryptography. Taha, Mostafa Mohamed Ibrahim (Virginia Tech, 2014-06-10). Side-Channel Analysis (SCA) is an implementation attack in which an adversary exploits unintentional outputs of a cryptographic module to reveal secret information. Unintentional outputs, also called side-channel outputs, include power consumption, electromagnetic radiation, execution time, photonic emissions, acoustic waves, and many more. The real threat of SCA lies in the ability to mount attacks over small parts of the key and to aggregate information over many different traces. The cryptographic community acknowledges that SCA can break any security module if adequate protection is not implemented. In this dissertation, we propose several advances in side-channel attacks and countermeasures. We focus on symmetric cryptographic primitives, namely block ciphers and hashing functions. In the first part, we focus on improving side-channel attacks. First, we propose a new method to profile highly parallel cryptographic modules. Profiling, in the context of SCA, characterizes the power consumption of a fully controlled module to extract power signatures. Then, the power signatures are used to attack a similar module. Parallel designs show excessive algorithmic noise in the power trace. Hence, we propose a novel attack that takes design parallelism into consideration, which results in a more powerful attack. Also, we propose the first comprehensive SCA of the new secure hashing function SHA-3. Although the main application of SHA-3 is hashing, there are other keyed applications, including Message Authentication Codes (MACs), where protection against SCA is required. We study the SCA properties of all the operations involved in SHA-3. We also study the effect of changing the key length on the difficulty of mounting attacks. Indeed, changing the key length changes the attack methodology. Hence, we propose complete attacks against five different case studies, and propose a systematic algorithm to choose an attack methodology based on the key length. In the second part, we propose different techniques for protection against SCA. Indeed, the threat of SCA can be mitigated if the secret key changes before every execution. Although many contributions in the domain of leakage-resilient cryptography have tried to achieve this goal, the proposed solutions were inefficient and required very high implementation cost. Hence, we highlight a generic framework for efficient leakage resiliency through lightweight key-updating. Then, we propose two complete solutions for protecting AES modes of operation. One uses a dedicated circuit for key-updating, while the other uses the underlying AES block cipher itself. The first one requires small area (for the additional circuit) but achieves negligible performance overhead. The second one has no area overhead but incurs a small performance overhead. Also, we address the problem of executing all the applications of hashing functions, e.g., the unkeyed application of regular hashing and the keyed application of generating MACs, on the same core. We observe that running an unkeyed application on an SCA-protected core involves a huge loss of performance (3x to 4x). Hence, we propose a novel SCA-protected core for hashing. Our core has no overhead in unkeyed applications, and negligible overhead in keyed ones.
Our research provides a better understanding of side-channel analysis and supports the cryptographic community with lightweight and efficient countermeasures.
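The key-updating idea above can be illustrated with a simple software ratchet (this is an illustrative sketch only, not the dissertation's hardware scheme): a session key is derived from the long-term secret and replaced before every use, so leakage from one execution cannot be accumulated across traces against a fixed key. SHA-256 stands in for both the key-update function and the keystream purely for clarity; it is not a recommendation for a real cipher.

```python
import hashlib

def kdf(key: bytes, label: bytes) -> bytes:
    return hashlib.sha256(key + label).digest()

def encrypt_with_ratchet(master_key: bytes, messages):
    session = kdf(master_key, b"init")       # master key is used once, indirectly
    out = []
    for i, msg in enumerate(messages):
        # toy keystream; a real design would run AES under the session key
        stream = hashlib.sha256(session + i.to_bytes(4, "big")).digest()
        out.append(bytes(m ^ s for m, s in zip(msg, stream)))
        session = kdf(session, b"update")    # key changes before the next use
    return out

cts = encrypt_with_ratchet(b"\x01" * 32, [b"hello", b"world"])
print([c.hex() for c in cts])
```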
- AlcoZone: An Adaptive Hypermedia based Personalized Alcohol Education. Bhosale, Devdutta (Virginia Tech, 2006-05-08). In our knowledge-based economy, demand for better and more effective learning has led to innovative instructional technologies. However, the one-size-fits-all approach taken by many e-Learning systems cannot accommodate the different requirements of people who have different goals, preferences, and previous knowledge about a subject. Many e-Learning systems have approached this problem with personalized and customized content. However, many of these systems are closely tied to the one particular subject that they are trying to teach; authoring courses on different subjects using the same framework is a difficult process. Adaptive Hypermedia is an approach in which content presentation and navigation assistance are personalized according to the requirements of the user. The user requirements are represented using a user model, while the content is represented using a content model. By using a set of algorithms, an Adaptive Hypermedia based system is able to select the most appropriate content to be presented as the user interacts with the system. The objective of AlcoZone is to educate all of the 5,000 freshman students of Virginia Tech about alcohol using Adaptive Hypermedia technology, as part of the mandatory university requirement. The course presents different content to different students based on their drinking pattern. AlcoZone integrates Curriculum Sequencing, Multimedia and Interactivity, Alternate Content Explanation, and Navigational Assistance to make the course interesting for students. This research investigates the design and implementation of AlcoZone and its Adaptive Hypermedia based reusable framework for course creation and delivery.
- APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis. Anderson, Matthew Eric (Virginia Tech, 2015-05-19). The development of high-integrity embedded systems remains an arduous and error-prone task, despite researchers' efforts to invent tools and techniques for design automation. Much of the problem arises from the fact that the semantics of the modeling languages for the various tools are often distinct, and the semantic gaps are often filled manually through the engineer's understanding of one model or an abstraction. This provides an opportunity for bugs to creep in, beyond the usual software engineering errors germane to such complex system engineering. Since embedded systems applications such as avionics, automotive, or industrial automation are safety critical, it is very important to invent tools and methodologies for safe and reliable system design. Most tools and techniques deal with either the design of the embedded platform (hardware, networking, firmware, etc.) or the software stack, but treat the two separately. The semantic gap between these two, as well as between the models of computation used to capture their semantics, must be bridged in order to design safer embedded systems. In this dissertation we propose a methodology for the end-to-end modeling and analysis of safety-critical embedded systems. Our approach consists of formal platform modeling and analysis, formal application modeling, and 'correct-by-construction' code synthesis, with the aim of bridging the semantic gaps between the various abstractions and models required for end-to-end system design. While the platform modeling language AADL has formal semantics and analysis tools for real-time and performance verification, application behavior modeling in AADL is weak and relegated to an annex. In our work, we create the APECS (AADL and Polychrony based Embedded Computing Synthesis) methodology to allow an embedded system design specification that spans the platform architecture and platform components, real-time behavior, non-functional properties, and the application software. Our main contribution is to integrate a polychronous application software modeling language and synthesis algorithms so that the embedded software running on the target platform can be synthesized with the required constraints met. We believe that a polychronous approach is particularly well suited for a multiprocessor/multi-controller distributed platform where different components often operate at independent rates and concurrently. Further, the use of a formal polychronous language allows formal validation of the software prior to code generation. We present a prototype framework that implements this approach, which we refer to as the AADL and Polychrony based Embedded Computing System (APECS). Our prototype utilizes an extended version of Ocarina to provide code generation for the AADL model. Our polychronous modeling language is MRICDF. Our prototype extends Ocarina to support software specification in MRICDF and to generate multi-threaded software. Additionally, we implement an automated translation from Simulink to MRICDF, allowing designers to benefit from MRICDF's formal semantics while exploiting engineers' familiarity with Simulink tools and legacy models. We present case studies utilizing APECS to implement safety-critical systems both natively in MRICDF and in Simulink through automated translation.
- Architecture for Issuing DoD Mobile Derived Credentials. Sowers, David Albert (Virginia Tech, 2014-07-01). With increases in performance, dependency, and ubiquity, the need for secure mobile device functionality is rapidly growing. Authentication of an individual's identity is the fundamental component of physical and logical access to secure facilities and information systems. Identity management within the Department of Defense relies on a Public Key Infrastructure implemented through X.509 certificates and private keys issued on smartcards called Common Access Cards (CACs). However, use of CAC credentials on smartphones is difficult due to the lack of effective smartcard reader integration with mobile devices. The creation of a mobile derived credential, a new X.509 certificate and key pair based on the credentials of the CAC certificates, would eliminate the need for CAC integration with mobile devices. This thesis describes four architectures for securely and efficiently generating and delivering a derived credential to a mobile device for secure communications with mobile applications. Two architectures generate credentials through a software cryptographic module, providing a LOA-3 credential. The other two architectures provide a LOA-4 credential by utilizing a hardware cryptographic module for the generation of the key pair. In two of the architectures, the Certificate Authority (CA) for the new derived credentials is the digital signature certificate from the CAC. The other two architectures utilize a newly created CA, which would reside on the DoD network and be used to approve and sign the derived credentials. Additionally, this thesis demonstrates prototype implementations of the two software-generated derived credential architectures using CAC authentication and outlines the implementation of the hardware cryptographic derived credential.
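As a rough illustration of the software (LOA-3 style) key-generation step only, the sketch below uses the widely available `cryptography` package to create a fresh key pair on the device and a certificate signing request for an issuing CA. The subject name is a placeholder, and the thesis's DoD enrollment flow and CAC-based approval are not reproduced here.

```python
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives import hashes, serialization
from cryptography import x509
from cryptography.x509.oid import NameOID

# 1. generate the new key pair for the derived credential (software crypto module)
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. build a CSR binding the public key to the claimed identity (placeholder name)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.user")]))
    .sign(key, hashes.SHA256())
)

# 3. the PEM-encoded CSR is what would be submitted to the derived-credential CA
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```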
- Behavior-based Incentives for Node Cooperation in Wireless Ad Hoc Networks. Srivastava, Vivek (Virginia Tech, 2008-09-17). A Mobile Ad Hoc Network (MANET) adopts a decentralized communication architecture which relies on cooperation among nodes at each layer of the protocol stack. Its reliance on cooperation for success and survival makes the ad hoc network particularly sensitive to variations in node behavior. Specifically, for functions such as routing, nodes which are limited in their resources may be unwilling to forward packets for other nodes. Such selfish behavior leads to degradation in the performance of the network and possibly, in the extreme case, a complete cessation of operations. Consequently, it is important to devise solutions that encourage resource-constrained nodes to cooperate. Incentive schemes have been proposed to induce selfish nodes to cooperate. Though many of the schemes proposed in the literature are payment-based, nodes can also be incentivized to cooperate by adopting policies that are non-monetary in nature but are instead based on the threat of retaliation against non-cooperating nodes. These policies, for which there is little formal analysis in the existing literature on node cooperation, are based on observed node behavior. We refer to them as behavior-based incentives. In this work, we analyze the effectiveness of behavior-based incentives in inducing nodes to cooperate. To determine whether an incentive scheme is effective in fostering cooperation, we develop a game-theoretic model. Adopting a repeated-game model, we show that nodes may agree to cooperate in sharing their resources and forwarding packets, even if they perceive a cost in doing so. This happens because the nodes recognize that refusing to cooperate will result in similar behavior by others, which would ultimately compromise the viability of the network as a whole. A major shortcoming in the analysis done in past works is the lack of consideration of practical constraints imposed by an ad hoc environment. One such example is the assumption that a node, when making decisions about whether to cooperate, has perfect knowledge of every other node's actions. In a distributed setting this is impractical. In our work, we analyze behavior-based incentives by incorporating such practical considerations as imperfect monitoring into our game-theoretic models. In modeling the problem as a game of imperfect public monitoring (nodes observe a common public signal that reflects the actions of other nodes in the network), we show that, under the assumption of first-order stochastic dominance of the public signal, the grim trigger strategy leads to an equilibrium in which nodes cooperate. Even though a trigger-based strategy like grim trigger is effective in deterring selfish behavior, it is too harsh in its implementation. In addition, the availability of a common public signal in a distributed setting is rather limited. We therefore consider nodes that individually monitor the behavior of other nodes in the network and keep this information private. Note that this independent monitoring of behavior is error-prone as a result of slow switching between transmit and promiscuous modes of operation, collisions and congestion on the wireless medium, or incorrect feedback from peers. We propose a probability-based strategy that induces nodes to cooperate under such a setting. We analyze the strategy using repeated games with imperfect private monitoring and show it to be robust to errors in monitoring others' actions.
Nodes achieve a near-optimal payoff at equilibrium when adopting this strategy. This work also characterizes the effects of a behavior-based incentive, applied to induce cooperation, on topology control in ad hoc networks. Our work is among the first to consider selfish behavior in the context of topology control. We create topologies based on a holistic view of energy consumption: energy consumed in forwarding packets as well as in maintaining links. Our main results from this work show that: (a) a simple forwarding policy induces nodes to cooperate and leads to reliable paths in the generated topology, and (b) the resulting topologies are well connected, energy-efficient, and exhibit characteristics similar to those of small-world networks.
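The contrast between a grim trigger and a more forgiving probabilistic punishment under noisy monitoring can be seen in a small repeated-game simulation. All parameters, payoffs, and the "forgiving" strategy below are invented for illustration; they are only in the spirit of, not identical to, the strategy analyzed in the dissertation.

```python
import random

random.seed(42)
ROUNDS, OBS_ERROR = 2000, 0.05
BENEFIT, COST = 1.0, 0.3            # gain when the peer forwards; cost of forwarding

def observe(action):
    # noisy private monitoring: a forwarding action may be misread as a defection
    return action if random.random() > OBS_ERROR else not action

def grim(state, observed_coop):
    # defect forever once a defection has (apparently) been observed
    state["triggered"] = state.get("triggered", False) or not observed_coop
    return not state["triggered"]

def forgiving(state, observed_coop, punish_prob=0.9, forgive_prob=0.1):
    # punish probabilistically after a bad observation, then drift back to cooperation
    if not observed_coop:
        state["punishing"] = random.random() < punish_prob
    elif state.get("punishing") and random.random() < forgive_prob:
        state["punishing"] = False
    return not state.get("punishing", False)

def run(strategy):
    states, last, payoff = [{}, {}], [True, True], [0.0, 0.0]
    for _ in range(ROUNDS):
        acts = [strategy(states[i], observe(last[1 - i])) for i in range(2)]
        for i in range(2):
            payoff[i] += (BENEFIT if acts[1 - i] else 0.0) - (COST if acts[i] else 0.0)
        last = acts
    return [round(p / ROUNDS, 3) for p in payoff]

print("grim trigger :", run(grim))       # monitoring errors eventually lock in punishment
print("probabilistic:", run(forgiving))  # recovers from false alarms, payoff near BENEFIT-COST
```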
- C-Based Design of Heterogeneous Embedded Systems. Grimm, Christoph; Jantsch, Axel; Shukla, Sandeep K.; Villar, Eugenio (2008-07-15)
- Coexistence of Wireless Networks for Shared Spectrum Access. Gao, Bo (Virginia Tech, 2014-09-18). The radio frequency spectrum is not being efficiently utilized, partly due to the current policy of allocating frequency bands to specific services and users. In opportunistic spectrum access (OSA), the "white spaces" that are not occupied by primary users (a.k.a. incumbent users) can be opportunistically utilized by secondary users. To achieve this, we need to solve two problems: (i) primary-secondary incumbent protection, i.e., prevention of harmful interference from secondary users to primary users; and (ii) secondary-secondary network coexistence, i.e., mitigation of mutual interference among secondary users. The first problem has been addressed by spectrum sensing techniques in cognitive radio (CR) networks and by geolocation database services in database-driven spectrum sharing. The second problem is the main focus of this dissertation. To obtain a clear picture of coexistence issues, we propose a taxonomy of heterogeneous coexistence mechanisms for shared spectrum access. Based on the taxonomy, we choose to focus on four typical coexistence scenarios in this dissertation. First, we study sensing-based OSA, where secondary users are capable of employing the channel aggregation technique. However, channel aggregation is not always beneficial due to dynamic spectrum availability and limited radio capability. We propose a channel usage model to analyze the impact of both primary and secondary user behaviors on the efficiency of channel aggregation. Our simulation results show that user demands in both the frequency and time domains should be carefully chosen to minimize expected cumulative delay. Second, we study the coexistence of homogeneous CR networks, termed self-coexistence, where co-channel networks do not rely on inter-network coordination. We propose an uplink soft frequency reuse technique to enable globally power-efficient and locally fair spectrum sharing. We frame the self-coexistence problem as a non-cooperative game, and design a local heuristic algorithm that achieves the Nash equilibrium in a distributed manner. Our simulation results show that the proposed technique is mostly near-optimal and improves self-coexistence in spectrum utilization, power consumption, and intra-cell fairness. Third, we study the coexistence of heterogeneous CR networks, where co-channel networks use different air interface standards. We propose a credit-token-based spectrum etiquette framework that enables spectrum sharing via inter-network coordination. Specifically, we propose a game-auction coexistence framework, and prove that the framework is stable. Our simulation results show that the proposed framework always converges to a near-optimal distributed solution and improves coexistence fairness and spectrum utilization. Fourth, we study database-driven OSA, where secondary users are mobile. The use of geolocation databases is inadequate for supporting location-aided spectrum sharing when users are mobile. We propose a probabilistic coexistence framework that supports mobile users by locally adapting their location uncertainty levels in order to find an appropriate trade-off between interference mitigation effectiveness and location update cost. Our simulation results show that the proposed framework can determine and adapt the database query intervals of mobile users to achieve near-optimal interference mitigation with minimal location updates.
- Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks. Lin, Hua (Virginia Tech, 2012-09-27). The vision of the smart grid is predicated upon pervasive use of modern digital communication techniques in today's power system. As wide-area measurement and control techniques are developed and deployed for a more resilient power system, the role of communication networks is becoming prominent. Advanced communication infrastructure provides much wider system observability and enables globally optimal control schemes. Wide-area measurement and monitoring with Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs) is a growing trend in this context. However, the large amount of data collected by PMUs or IEDs needs to be transferred over the data network to control centers, where real-time state estimation, protection, and control decisions are made. The volume and frequency of such data transfers, and their real-time delivery requirements, mandate that sufficient bandwidth and proper delay characteristics be ensured for correct operation. Power system dynamics are influenced by the underlying communication infrastructure. Therefore, extensive integration of the power system and the communication infrastructure mandates that the two be studied as a single distributed cyber-physical system. This dissertation proposes a global event-driven co-simulation framework, termed GECO, for the interconnected power system and communication network. GECO can be used as a design pattern for hybrid system simulation with continuous/discrete sub-components. An implementation of GECO is achieved by integrating two software packages, PSLF and NS2, into the framework. In addition, this dissertation proposes and studies a set of power system applications which can only be properly evaluated on a co-simulation framework like GECO, namely communication-based distance relay protection, all-PMU state estimation, and PMU-based out-of-step protection. All of them take advantage of the interplay between the power grid and the communication infrastructure. The GECO experiments described in this dissertation not only show the efficacy of the GECO framework, but also provide experience on how to use GECO in smart grid planning activities.
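The core of any global event-driven co-simulation is a single time-ordered event queue shared by the continuous and discrete models. The toy loop below (grid dynamics, measurement delay, and controller gain are all invented placeholders, not PSLF or NS2) shows that structure: a fixed-step "grid" model and a delayed "network" delivery interleave through one heap ordered by simulation time.

```python
import heapq

events, seq = [], 0           # global event queue: (time, sequence, kind, payload)
def schedule(t, kind, payload=None):
    global seq
    heapq.heappush(events, (t, seq, kind, payload)); seq += 1

NETWORK_DELAY = 0.08          # seconds from PMU to control center (assumed)
grid_freq, control_bias = 60.0, 0.0

schedule(0.0, "grid_step")
while events:
    t, _, kind, payload = heapq.heappop(events)
    if t > 5.0:
        break
    if kind == "grid_step":
        # continuous model advances one step; frequency drifts unless corrected
        grid_freq += -0.02 + control_bias
        schedule(t + NETWORK_DELAY, "pmu_packet", grid_freq)  # measurement enters the network
        schedule(t + 0.1, "grid_step")
    elif kind == "pmu_packet":
        # control center reacts to the delayed measurement
        control_bias = 0.5 * (60.0 - payload)
        print(f"t={t:.2f}s  measured f={payload:.3f} Hz  new bias={control_bias:+.3f}")
```

The single (time, sequence) ordering is what keeps the two sub-simulations synchronized; in GECO that role is played by the global event queue coordinating the power and network simulators.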
- Constraint Based Program Synthesis for Embedded Software. Eldib, Hassan Shoukry (Virginia Tech, 2015-07-30). In the world we live in today, we rely greatly on software in nearly every aspect of our lives. In many critical applications, such as transportation and medical systems, buggy software can have catastrophic consequences. As the computational power and storage capacity of computer hardware keep increasing, so do the size and complexity of the software. This makes testing and verification increasingly challenging in practice, and consequently creates a chance for software with critical bugs to find its way into the consumer market. In this dissertation, I present a set of innovative new methods for automatically verifying, as well as synthesizing, critical software and hardware in embedded computing applications. Based on a set of rigorous formal analysis techniques, my methods can guarantee that the resulting software is efficient and secure as well as provably correct.
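The "search for a program that meets a specification" idea behind synthesis can be shown with a deliberately naive enumeration (the dissertation's constraint-solver-based approach is far more scalable; this sketch, with an invented spec and grammar, only illustrates the shape of the problem).

```python
import itertools

def spec(x):
    y = x & (x - 1)          # clear the lowest set bit
    return y & (y - 1)       # ...and then the next lowest

# a tiny grammar of candidate bit-manipulation expressions
ops = {
    "x&(x-1)": lambda x: x & (x - 1),
    "x&(-x)":  lambda x: x & (-x),
    "x|(x+1)": lambda x: x | (x + 1),
    "x^(x-1)": lambda x: x ^ (x - 1),
    "x>>1":    lambda x: x >> 1,
}
tests = list(range(1, 64))

def matches(fn):
    return all(fn(x) == spec(x) for x in tests)

# depth-1 candidates first, then depth-2 compositions
for name, fn in ops.items():
    if matches(fn):
        print("synthesized:", name)
        break
else:
    for (n1, f1), (n2, f2) in itertools.product(ops.items(), repeat=2):
        if matches(lambda x, f1=f1, f2=f2: f2(f1(x))):
            print("synthesized:", n1, "then", n2)
            break
```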
- Data-Link Layer Traceback in Ethernet Networks. Snow, Michael Thomas (Virginia Tech, 2006-11-27). The most commonly used Internet and local area network protocols provide no way of verifying that the sender of a packet is who it claims to be. Protocols and applications that provide authentication exist, but these are generally for special use cases. A malicious host can easily launch an attack while pretending to be another host to avoid being discovered. At worst, the behavior may implicate a legitimate host, causing it and its user to be kicked off the network. A malicious host may further conceal its location by sending the attack packets from one or more remotely controlled hosts. Current research has provided techniques to support traceback, the process of determining the complete attack path from the victim back to the attack coordinator. Most of this research focuses on IP traceback, from the victim through the Internet to the edge of the network containing the attack packet source, and stepping-stone traceback, from the source to the host controlling the attack. However, little research has been conducted on the problem of Data-Link Layer Traceback (DLT), the process of tracing frames from the network edge to the attack source, across what is usually a layer-2 network. We propose a scheme called Tagged-fRAme tracebaCK (TRACK) that provides a secure, reliable DLT technique for Ethernet networks. TRACK defines processes for Ethernet switches and for a centralized storage and lookup host. As a frame enters a TRACK-enabled network, a tag is added indicating the switch and port on which the frame entered the network. This tag is collected at the network edge for later use in the traceback operation. An authentication method is defined to prevent unauthorized entities from generating or modifying tag data. Simulation results indicate that TRACK provides accurate DLT operation while causing minimal impact on network and application performance.
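The tagging-plus-authentication idea can be sketched in a few lines: the ingress switch attaches a tag naming the switch and port, authenticated with a keyed MAC so it cannot be forged or altered, and the collector verifies the tag before storing it for traceback. The field layout, key provisioning, and truncation length below are invented for illustration and are not the TRACK wire format.

```python
import hmac, hashlib, struct, time

SWITCH_KEY = b"per-switch secret shared with the collector"   # assumed provisioning

def make_tag(switch_id: int, port: int, frame: bytes) -> bytes:
    meta = struct.pack("!HHI", switch_id, port, int(time.time()) & 0xFFFFFFFF)
    digest = hmac.new(SWITCH_KEY, meta + hashlib.sha256(frame).digest(),
                      hashlib.sha256).digest()[:8]             # truncated MAC over tag + frame hash
    return meta + digest

def verify_tag(tag: bytes, frame: bytes):
    meta, digest = tag[:8], tag[8:]
    expected = hmac.new(SWITCH_KEY, meta + hashlib.sha256(frame).digest(),
                        hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(digest, expected):
        return None
    switch_id, port, ts = struct.unpack("!HHI", meta)
    return {"switch": switch_id, "port": port, "timestamp": ts}

frame = b"\xff" * 64                       # stand-in Ethernet frame
tag = make_tag(switch_id=7, port=24, frame=frame)
print(verify_tag(tag, frame))              # ingress point recovered for traceback
print(verify_tag(tag, frame + b"x"))       # tampered frame -> None
```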
- Design and Analysis of Defect- and Fault-tolerant Nano-Computing Systems. Bhaduri, Debayan (Virginia Tech, 2007-02-19). The steady downscaling of CMOS technology has led to the development of devices with nanometer dimensions. Contemporaneously, maturing technologies such as chemical self-assembly and DNA scaffolding have driven the rapid development of non-CMOS nanodevices, including vertical carbon nanotube (CNT) transistors and molecular switches. One main problem in manufacturing defect-free nanodevices, both CMOS and non-CMOS, is the inherent variability in nanoscale fabrication processes. Compared to current CMOS devices, nanodevices are also more susceptible to signal noise and thermal perturbations. One approach for developing robust digital systems from such unreliable nanodevices is to introduce defect and fault tolerance at the architecture level. Structurally redundant architectures, reconfigurable architectures, and architectures that are a hybrid of the two have been proposed as potential defect- and fault-tolerant nanoscale architectures. Hence, the design of reliable nanoscale digital systems requires detailed architectural exploration. In this dissertation, we develop probabilistic methodologies and CAD tools to expedite the exploration of defect- and fault-tolerant architectures. These methodologies and tools provide nanoscale system designers with the capability to carry out trade-off analysis in terms of area, delay, redundancy, and reliability. During execution, the next state of a digital system depends only on the present state, and digital signals propagate in discrete time. Hence, we use Markov processes to analyze the reliability of nanoscale digital architectures: Discrete Time Markov Chains (DTMCs) for logic architectures and Markov Decision Processes (MDPs) for memory architectures. Since structurally redundant and reconfigurable nanoarchitectures may consist of millions of nanodevices, we apply state space partitioning techniques and belief propagation to scale these techniques. We have developed three toolsets based on these Markovian techniques. One toolset has been developed specifically for the architectural exploration of molecular logic systems. It can generate defect maps for isolating defective nanodevices and provides capabilities for organizing structurally redundant fault-tolerant architectures from the non-defective devices. Design trade-offs for each of these architectures can be computed in terms of signal delay, area, redundancy, and reliability. Another tool, HMAN (Hybrid Memory Analyzer), has been developed for analyzing molecular memory systems. Besides analyzing reliability-redundancy trade-offs using MDPs, HMAN provides a very accurate redundancy-delay trade-off analysis using HSPICE. SETRA (Scalable, Extensible Tool for Reliability Analysis) has been designed specifically for analyzing nanoscale CMOS logic architectures with DTMCs, and it integrates well with current industry-standard CAD tools. It has been shown that multimodal computational models, rather than the bimodal Boolean model used to understand current electronic devices, capture the operation of emerging nanoscale devices such as vertical CNT transistors. We have therefore extended an existing multimodal computational model based on Markov Random Fields (MRFs) for analyzing structurally redundant and reconfigurable architectures.
Hence, this dissertation develops multiple probabilistic methodologies and tools for performing nanoscale architectural exploration. It also looks at different defect- and fault-tolerant architectures and explores different nanotechnologies.
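A small worked example of the DTMC-style reliability analysis described above, with invented numbers: a triple-modular-redundant (TMR) block stays operational while at least two of its three modules work; the DTMC state is the number of working modules, each module fails independently per time step with probability p, and repeated application of the transition distribution gives reliability over time.

```python
from math import comb

p = 0.01                                    # per-step module failure probability (assumed)

def step(dist):
    """One DTMC step: each working module independently fails with probability p."""
    new = {}
    for k, prob in dist.items():
        for f in range(k + 1):              # f of the k working modules fail this step
            q = comb(k, f) * p**f * (1 - p)**(k - f)
            new[k - f] = new.get(k - f, 0.0) + prob * q
    return new

dist = {3: 1.0}                             # all three TMR modules initially working
single = 1.0                                # reliability of one unprotected module
for t in range(1, 101):
    dist = step(dist)
    single *= (1 - p)
    if t in (10, 50, 100):
        tmr_ok = dist.get(3, 0.0) + dist.get(2, 0.0)   # voter needs at least 2 good modules
        print(f"t={t:3d}  TMR reliability={tmr_ok:.4f}  single module={single:.4f}")
```

With no repair, the sketch also exposes the classic crossover where TMR eventually falls below a single module for long missions, exactly the kind of redundancy-reliability trade-off such analysis tools are meant to reveal.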
- Design Space Exploration for Embedded Systems in Automotives. Joshi, Prachi (Virginia Tech, 2018-04-16). With the ever-increasing content (safety, driver assistance, infotainment, etc.) in today's automotive systems that relies on electronics and software, the supporting architecture is integrated through a complex set of heterogeneous data networks. A modern automobile contains up to 100 ECUs and several heterogeneous communication buses (such as CAN and FlexRay), exchanging thousands of signals. Automotive Original Equipment Manufacturers (OEMs) and suppliers face a number of challenges, chiefly reliability, safety, and cost, in incorporating the growing functionality in vehicles. One of the important challenges in automotive design is the efficient and reliable transmission of signals over communication networks such as CAN and CAN-FD. With the growing set of features, OEMs already face the saturation of bus bandwidth, which hinders the reliability of communication and the inclusion of additional features. In this dissertation, we study the problem of optimizing bandwidth utilization (BU) over CAN-FD networks. Signals are transmitted over the CAN/CAN-FD bus in entities called frames. Signal-to-frame packing has been studied in the literature and is comparable to the bin-packing problem, which is known to be NP-hard. By carefully optimizing signal-to-frame packing, the CAN-FD BU can be reduced. In Chapter 3, we propose a method for assigning offsets to signals and show its importance in improving BU. One of our contributions for an industrial setting is a modest improvement in BU of about 2.3%. Even with this modest improvement, the architecture's lifetime could potentially be extended by several product cycles, which may translate to saving millions of dollars for the OEM. The optimization of signal-to-frame packing in CAN-FD is therefore the major focus of this dissertation. Another challenge addressed in this dissertation is the reliable mapping of a task model onto a given architecture such that end-to-end latency requirements are satisfied, avoiding costly redesign and redevelopment due to system design errors.
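A rough sketch of the signal-to-frame packing problem, to make the bin-packing analogy concrete: signals (size in bits, period in ms) are packed first-fit-decreasing into CAN-FD frames of matching period, and bus utilization is estimated from a simplified frame-time model. The signal set, bit rate, and per-frame overhead figure are invented inputs, and the timing model ignores the slower arbitration phase and bit stuffing.

```python
from collections import defaultdict

signals = [(8, 10), (16, 10), (32, 10), (4, 20), (64, 20), (128, 20), (8, 100)] * 10
MAX_PAYLOAD_BITS = 64 * 8                  # CAN-FD payload limit: 64 bytes
OVERHEAD_BITS = 70                         # rough per-frame arbitration/CRC overhead (assumed)
DATA_BITRATE = 2_000_000                   # 2 Mbit/s data phase (assumed)

frames = defaultdict(list)                 # period -> list of frames (each tracked as used bits)
for size, period in sorted(signals, reverse=True):        # first-fit decreasing by signal size
    for i, used in enumerate(frames[period]):
        if used + size <= MAX_PAYLOAD_BITS:
            frames[period][i] += size
            break
    else:
        frames[period].append(size)        # open a new frame for this period

bus_utilization = 0.0
for period, frame_list in frames.items():
    for used_bits in frame_list:
        frame_time_ms = (used_bits + OVERHEAD_BITS) / DATA_BITRATE * 1000
        bus_utilization += frame_time_ms / period
print(f"{sum(len(v) for v in frames.values())} frames, bus utilization = {bus_utilization:.2%}")
```

The dissertation additionally assigns offsets to spread frame transmissions in time; this sketch only covers the packing and the resulting utilization estimate.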
- Design Techniques for Side-channel Resistant Embedded Software. Sinha, Ambuj Sudhir (Virginia Tech, 2011-08-05). Side Channel Attacks (SCA) are a class of passive attacks on cryptosystems that exploit implementation characteristics of the system. Currently, much research is focused on developing countermeasures to side channel attacks. In this thesis, we address two challenges that are an inherent part of efficiently implementing SCA countermeasures. First, design choices made to enhance the efficiency or performance of a system can also affect its side channel security, and the effect of different design choices on side channel resistance is currently not well understood. It is important to understand these effects in order to develop systems that are both secure and efficient. Second, incorporating SCA countermeasures increases design complexity: it is often difficult and time consuming to integrate an SCA countermeasure into a larger system. In this thesis, we explore the above problems from the point of view of developing embedded software that is resistant to power-based side channel attacks. Our first contribution is an evaluation of different software AES implementations, from the perspective of side channel resistance, that shows the effect of design choices on the security and performance of the implementation. Next, we present work that identifies the problems that arise while designing software for a particular type of SCA-resistant architecture, the Virtual Secure Circuit. We provide a solution in the form of a methodology that can be used to develop software for such a system, and demonstrate that this methodology can be conveniently automated, leading to swifter and easier software development for side-channel-resistant designs.
- Digital to Analog Converter Design using Single Electron Transistors. Perry, Jonathan (Virginia Tech, 2005-04-29). CMOS technology has advanced for decades under the rule of Moore's law, but all good things must come to an end. Researchers estimate that CMOS will reach a lower limit on feature size within the next 10 to 15 years. In order to assure further progress in the field, new computing architectures must be investigated. These nanoscale architectures are many and varied, and it remains to be seen whether any will become a legitimate successor to CMOS. Single electron tunneling is a process by which electrons can be transported (tunnel) across a thin insulating surface. A conducting island separated by a pair of quantum tunnel junctions creates a Single Electron Transistor (SET). SETs exhibit higher functionality than traditional MOSFETs, and function best at very small feature sizes, in the neighborhood of 1 nm. Many circuits must be developed before SETs can be considered a viable contender to CMOS technology. One important circuit is the Digital to Analog Converter (DAC). DACs are present on many microprocessors and microcontrollers in use today and are necessary in many situations. While other SET circuits have been proposed, including ADCs, no DAC design exists in the open literature. We propose three possible SET DAC designs and characterize them with an HSPICE SET simulation model. The first design is a charge-scaling architecture similar to what is frequently used in CMOS. The other two designs are based on a current-steering architecture, but are unique in their implementation with SETs.
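Independent of the device technology, the charge-scaling (binary-weighted) architecture mentioned above implements the ideal DAC transfer function: each bit switches an element weighted by a power of two, so the output is the reference voltage scaled by code/2^N. The resolution and reference value below are generic, not taken from the thesis.

```python
def dac_out(code: int, n_bits: int = 4, vref: float = 1.0) -> float:
    """Ideal binary-weighted DAC: Vout = Vref * code / 2^N."""
    assert 0 <= code < 2 ** n_bits
    # equivalent to summing bit_i * vref / 2^(n_bits - i) over the set bits
    return vref * code / (2 ** n_bits)

for code in range(16):
    print(f"{code:04b} -> {dac_out(code):.4f} V")
```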
- Dynamic Invariant Generation for Concurrent Programs. Chattopadhyay, Arijit (Virginia Tech, 2014-06-23). We propose a fully automated and dynamic method for generating likely invariants from multithreaded programs and then leveraging these invariants to infer atomic regions and diagnose concurrency errors in the software code. Although existing methods for dynamic invariant generation perform reasonably well on sequential programs, for multithreaded programs their effectiveness often drops dramatically, both in the number of invariants that they can generate and in the likelihood of those being true invariants. We solve this problem by developing a new dynamic invariant generator, which consists of a new LLVM-based code instrumentation tool, an INSPECT-based thread interleaving explorer, and a customized inference engine inside Daikon. We have evaluated the resulting system on public domain multithreaded C/C++ benchmarks. Our experiments show that the new method is effective in generating high-quality invariants. Furthermore, the state and transition invariants generated by our new method have proved useful both in error diagnosis and in identifying likely atomic regions in concurrent software code.
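A toy version of dynamic likely-invariant detection illustrates the basic mechanism: record variable values at a program point over several runs and keep only the candidate invariants that hold in every observed state. The stand-in program, program point, and candidate templates are invented; the interleaving exploration and the LLVM/INSPECT/Daikon tooling from the thesis are not modeled.

```python
def program(n):
    """Stand-in for an instrumented program point; returns observed (x, y, total) states."""
    states, total = [], 0
    for x in range(n):
        y = x + 1
        total += y
        states.append({"x": x, "y": y, "total": total})
    return states

observations = []
for n in (3, 5, 8):                      # several "runs" of the program
    observations.extend(program(n))

candidates = {
    "x >= 0":      lambda s: s["x"] >= 0,
    "y == x + 1":  lambda s: s["y"] == s["x"] + 1,
    "x > total":   lambda s: s["x"] > s["total"],
    "total >= y":  lambda s: s["total"] >= s["y"],
}
likely = [name for name, check in candidates.items()
          if all(check(s) for s in observations)]
print("likely invariants:", likely)      # candidates falsified by any observation are dropped
```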
- An Efficient 2-Phase Strategy to Achieve High Branch Coverage. Prabhu, Sarvesh P. (Virginia Tech, 2012-02-03). Symbolic execution is gaining popularity for software test generation. However, the increasing complexity of software poses new challenges for symbolic-execution-based test generation because of the path explosion problem. We present a new two-phase symbolic-execution-driven strategy that quickly achieves high branch coverage in software. Phase 1 follows a greedy approach that quickly covers as many branches as possible by exploring each branch through its corresponding shortest path prefix. Phase 2 covers the remaining branches that are left uncovered because the shortest path to the branch was infeasible. In Phase 1, basic conflict-driven learning is used to skip all paths that contain any of the previously encountered conflicting conditions, while in Phase 2, a more intelligent conflict-driven learning is used to skip regions that do not have a feasible path to any unexplored branch. This results in a considerable reduction in unnecessary SMT solver calls. Experimental results show that significant speedup can be achieved, effectively reducing the time to detect a bug and providing higher branch coverage for a fixed timeout period than previous techniques.
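A structural sketch of the Phase-1 idea: on the control-flow graph, compute each uncovered branch's shortest path prefix from the entry node with BFS and attempt branches in order of increasing prefix length. Path feasibility checking with an SMT solver and the Phase-2 conflict learning are omitted, and the CFG is a made-up example.

```python
from collections import deque

cfg = {                                 # node -> successors (invented control-flow graph)
    "entry": ["b1", "b2"],
    "b1": ["b3", "b4"],
    "b2": ["b4"],
    "b3": ["exit"],
    "b4": ["b5", "exit"],
    "b5": ["exit"],
    "exit": [],
}

def shortest_prefixes(cfg, root="entry"):
    """BFS from the entry node, recording the shortest path prefix to every node."""
    prefix = {root: [root]}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for succ in cfg[node]:
            if succ not in prefix:
                prefix[succ] = prefix[node] + [succ]
                queue.append(succ)
    return prefix

uncovered = {"b3", "b4", "b5"}
prefixes = shortest_prefixes(cfg)
plan = sorted(uncovered, key=lambda b: len(prefixes[b]))   # greedy: shortest prefixes first
for branch in plan:
    print(branch, "via", " -> ".join(prefixes[branch]))
```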