Browsing by Author "Egyhazy, Csaba J."
Now showing 1 - 20 of 31
- Automated Seed Point Selection in Confocal Image Stacks of Neuron Cells. Bilodeau, Gregory Peter (Virginia Tech, 2013-07-25). This paper provides a fully automated method of finding high-quality seed points in 3D space from a stack of images of neuron cells. These seed points may then be used as initial starting points for automated local tracing algorithms, removing a time-consuming required user interaction in current methodologies. Methods to collapse the search space and provide rudimentary topology estimates are also presented.
- Automatic Lexicon Generation for Unsupervised Part-of-Speech Tagging Using Only Unannotated Text. Pereira, Dennis V. (Virginia Tech, 1999-05-07). With the growing number of textual resources available, the ability to understand them becomes critical. An essential first step in understanding these sources is the ability to identify the parts-of-speech in each sentence. The goal of this research is to propose, improve, and implement an algorithm capable of finding terms (words in a corpus) that are used in similar ways: a term categorizer. Such a term categorizer can be used to find a particular part-of-speech, e.g., nouns in a corpus, and generate a lexicon. The proposed work is not dependent on any external sources of information, such as dictionaries, and it shows a significant improvement (~30%) over an existing method of categorization. More importantly, the proposed algorithm can be applied as a component of an unsupervised part-of-speech tagger, making it truly unsupervised, requiring only unannotated text. The algorithm is discussed in detail, along with its background and its performance. Experimentation shows that the proposed algorithm performs within 3% of the baseline, the Penn Treebank Lexicon.
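The term-categorization idea the abstract describes, grouping words that occur in similar contexts using nothing but unannotated text, can be sketched with context-count vectors and cosine similarity. The toy corpus, window size, and function names below are illustrative assumptions, not the thesis's actual algorithm.

```python
from collections import Counter
from math import sqrt

def context_vectors(sentences, window=1):
    """Map each term to a bag of neighboring terms (its distributional context)."""
    vecs = {}
    for sent in sentences:
        for i, term in enumerate(sent):
            ctx = vecs.setdefault(term, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    ctx[sent[j]] += 1
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

sentences = [
    ["the", "cat", "runs"],
    ["the", "dog", "runs"],
    ["a", "cat", "sleeps"],
    ["a", "dog", "sleeps"],
]
vecs = context_vectors(sentences)
# "cat" and "dog" share contexts, so they land in the same category;
# "runs" does not, so it falls outside it.
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["runs"]))
```

Grouping terms whose pairwise similarity exceeds a threshold yields the lexicon entries for one category, e.g. nouns.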
- A case study in object-oriented development: code reuse for two computer games. Scott, Roger E. (Virginia Tech, 1992). A case study of the object-oriented development of two computer games using commercially available products was conducted. The games were constructed for use on Apple Macintosh computers using a C++-like programming language and an accompanying object-oriented class library. Object-oriented techniques are compared with procedure-oriented techniques, and the benefits of object-oriented techniques for code reuse are introduced. The reuse of object-oriented code within a target domain of applications is discussed, with examples drawn from the reuse of specific functions between the two games. Other reuse topics encountered in the development effort are also discussed: reuse of operating system routines, reuse of code provided by an object-oriented class library, and reuse of code to provide functions needed for a graphical user interface.
- A Class of Call Admission Control Algorithms for Resource Management and Reward Optimization for Servicing Multiple QoS Classes in Wireless Networks and Its Applications. Yilmaz, Okan (Virginia Tech, 2008-11-17). We develop and analyze a class of CAC algorithms for resource management in wireless networks with the goal not only to satisfy QoS constraints, but also to maximize a value or reward objective function specified by the system. We demonstrate through analytical modeling and simulation validation that the CAC algorithms developed in this research for resource management can greatly improve the system reward obtainable with QoS guarantees, when compared with existing CAC algorithms designed for QoS satisfaction only. We design hybrid partitioning-threshold, spillover and elastic CAC algorithms based on the design techniques of partitioning, setting thresholds and probabilistic call acceptance to use channel resources for servicing distinct QoS classes. For each CAC algorithm developed, we identify optimal resource management policies in terms of partitioning or threshold settings to use channel resources. By comparing these CAC algorithms head-to-head under identical conditions, we determine the best algorithm to be used at runtime to maximize system reward with QoS guarantees for servicing multiple service classes in wireless networks. We study solution correctness, solution optimality and solution efficiency of the class of CAC algorithms developed. We ensure solution optimality by comparing optimal solutions achieved with those obtained by ideal CAC algorithms via exhaustive search. We study solution efficiency properties by performing complexity analyses and ensure solution correctness by simulation validation based on real human mobility data. Further, we analyze the tradeoff between solution optimality and solution efficiency and suggest the CAC algorithm that best trades off one for the other to satisfy the system's solution requirements. Moreover, we develop design principles that remain applicable despite rapidly evolving wireless network technologies, since they can be generalized to deal with management of "resources" (e.g., wireless channel bandwidth), "cells" (e.g., cellular networks), "connections" (e.g., service calls with QoS constraints), and "reward optimization" (e.g., revenue optimization in optimal pricing determination) for future wireless service networks. To apply the CAC algorithms developed, we propose an application framework consisting of three stages: workload characterization, call admission control, and application deployment. We demonstrate the applicability with the optimal pricing determination application and the intelligent switch routing application.
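As a rough illustration of the threshold-setting and probabilistic-acceptance design techniques named above (not the dissertation's actual algorithms), a hypothetical two-class admission controller might look like:

```python
import random

def admit(call_class, busy, capacity, thresholds, accept_prob):
    """Threshold-based CAC sketch: a class's calls are admitted only while
    channel occupancy is below that class's threshold, and then only with a
    class-specific acceptance probability."""
    if busy >= capacity or busy >= thresholds[call_class]:
        return False
    return random.random() < accept_prob[call_class]

# Two hypothetical QoS classes: class 0 (high reward) may fill the cell;
# class 1 is cut off at 7 of 10 channels and accepted with probability 0.8.
capacity = 10
thresholds = {0: 10, 1: 7}
accept_prob = {0: 1.0, 1: 0.8}

random.seed(1)
busy = 0
for cls in [0, 1, 0, 1, 1]:
    if admit(cls, busy, capacity, thresholds, accept_prob):
        busy += 1
print(busy)
```

The thresholds and probabilities are the knobs the dissertation optimizes: for each workload there is a setting that maximizes expected reward while keeping each class's blocking probability within its QoS bound.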
- Cluster Algebra: A Query Language for Heterogeneous Databases. Bhasker, Bharat; Egyhazy, Csaba J.; Triantis, Konstantinos P. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1992). This report describes a query language based on algebra for heterogeneous databases. The database logic is used as a uniform framework for studying the heterogeneous databases. The data model based on the database logic is referred to as the cluster data model in this report. Generalized Structured Query Language (GSQL) is used for expressing ad-hoc queries over relational, hierarchical and network databases uniformly. For the purpose of query optimization, a query language that can express the primitive heterogeneous database operations is required. This report describes such a query language for the clusters (i.e., heterogeneous databases). The cluster algebra consists of (a) generalized relational operations such as selection, union, intersection, difference, semi-join, rename and cross-product; (b) modified relational operations such as normal projection and normal join; and (c) new operations such as normalize, embed, and unembed.
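To make the flavor of generalized relational operations concrete, here is a minimal sketch of selection and projection over records that may carry nested (hierarchical) attributes. The record shapes and function signatures are illustrative assumptions, not GSQL or the report's actual operators.

```python
def select(cluster, pred):
    """Generalized selection over a cluster (a list of possibly nested records)."""
    return [rec for rec in cluster if pred(rec)]

def project(cluster, attrs):
    """Generalized projection: keep only the named attributes of each record."""
    return [{a: rec[a] for a in attrs if a in rec} for rec in cluster]

# A hierarchical record kept as-is inside the cluster: 'orders' is a nested
# set, something a purely relational algebra would first have to normalize.
cluster = [
    {"cust": "ACME", "region": "East", "orders": [{"id": 1}, {"id": 2}]},
    {"cust": "Globex", "region": "West", "orders": [{"id": 3}]},
]

east = select(cluster, lambda r: r["region"] == "East")
print(project(east, ["cust"]))
```

The report's normalize/embed/unembed operations would convert between such nested records and flat relational form, which is what lets one algebra span all three member models.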
- Cyrano: a meta model for federated database systems. Dzikiewicz, Joseph (Virginia Tech, 1996-05-01). The emergence of new data models requires further research into federated database systems. A federated database system (FDBS) provides uniform access to multiple heterogeneous databases. Most FDBSs provide access to only the older data models such as relational, hierarchical, and network models. A federated system requires a meta data model. The meta model is a uniform data model through which users access data regardless of the data model of the data's native database. This dissertation examines the question of meta models for use in an FDBS that provides access to relational, object oriented, and rule based databases. This dissertation proposes Cyrano, a hybrid of object oriented and rule based data models. The dissertation demonstrates that Cyrano is suitable as a meta model by showing that Cyrano satisfies the following three criteria: 1) Cyrano fully supports relational, object oriented, and rule based member data models. 2) Cyrano provides sufficient capabilities to support integration of heterogeneous databases. 3) Cyrano can be implemented as the meta model of an operational FDBS. This dissertation describes four primary products of this research: 1) The dissertation presents Cyrano, a meta model designed as part of this research that supports both the older and the newer data models. Cyrano is an example of analytic object orientation, a conceptual approach that combines elements of object oriented and rule based data models. 2) The dissertation describes Roxanne, a proof-of-concept FDBS that uses Cyrano as its meta model. 3) The dissertation proposes a set of criteria for the evaluation of meta models and uses these criteria to demonstrate Cyrano's suitability as a meta model. 4) The dissertation presents an object oriented FDBS reference architecture suitable for use in describing and designing an FDBS.
- Design and Analysis of Adaptive Fault Tolerant QoS Control Algorithms for Query Processing in Wireless Sensor Networks. Speer, Ngoc Anh Phan (Virginia Tech, 2008-04-17). Data sensing and retrieval in WSNs have great applicability in military, environmental, medical, home and commercial applications. In query-based WSNs, a user would issue a query with quality of service (QoS) requirements in terms of reliability and timeliness, and expect a correct response to be returned within the deadline. Satisfying these QoS requirements requires that fault tolerance mechanisms through redundancy be used, which may cause the energy of the system to deplete quickly. This dissertation presents the design and validation of adaptive fault tolerant QoS control algorithms with the objective of achieving the desired QoS requirements and maximizing the system lifetime in query-based WSNs. We analyze the effect of redundancy on the mean time to failure (MTTF) of query-based cluster-structured WSNs and show that an optimal redundancy level exists such that the MTTF of the system is maximized. We develop a hop-by-hop data delivery (HHDD) mechanism and an Adaptive Fault Tolerant Quality of Service Control (AFTQC) algorithm in which we utilize "source" and "path" redundancy with the goal of satisfying application QoS requirements while maximizing the lifetime of WSNs. To deal with network dynamics, we investigate proactive and reactive methods to dynamically collect channel and delay conditions to determine the optimal redundancy level at runtime. AFTQC can adapt to network dynamics that cause changes to the node density, residual energy, sensor failure probability, and radio range due to energy consumption, node failures, and change of node connectivity. Further, AFTQC can deal with software faults, concurrent query processing with distinct QoS requirements, and data aggregation.
We compare our design with a baseline design without redundancy based on acknowledgement for data transmission and geographical routing for relaying packets to demonstrate the feasibility. We validate analytical results with extensive simulation studies. When given QoS requirements of queries in terms of reliability and timeliness, our AFTQC design allows optimal "source" and "path" redundancies to be identified and applied dynamically in response to network dynamics such that not only query QoS requirements are satisfied, as long as adequate resources are available, but also the lifetime of the system is prolonged.
- Design and Analysis of QoS-Aware Key Management and Intrusion Detection Protocols for Secure Mobile Group Communications in Wireless Networks. Cho, Jin-Hee (Virginia Tech, 2008-11-12). Many mobile applications in wireless networks such as military battlefield, emergency response, and mobile commerce are based on the notion of secure group communications. Unlike traditional security protocols which concern security properties only, in this dissertation research we design and analyze a class of QoS-aware protocols for secure group communications in wireless networks with the goal to satisfy not only security requirements in terms of secrecy, confidentiality, authentication, availability and data integrity, but also performance requirements in terms of latency, network traffic, response time, scalability and reconfigurability. We consider two elements in the dissertation research: design and analysis. The dissertation research has three major contributions. First, we develop three "threshold-based" periodic batch rekeying protocols to reduce the network communication cost caused by rekeying operations to deal with outsider attacks. Instead of individual rekeying, i.e., performing a rekeying operation right after each group membership change event, these protocols perform batch rekeying periodically. We demonstrate that an optimal rekey interval exists that would satisfy an imposed security requirement while minimizing the network communication cost. Second, we propose and analyze QoS-aware intrusion detection protocols for secure group communications in mobile ad hoc networks to deal with insider attacks. We consider a class of intrusion detection protocols including host-based and voting-based protocols for detecting and evicting compromised nodes and examine their effect on the mean time to security failure metric versus the response time metric.
Our analysis reveals that there exists an optimal intrusion detection interval under which the system lifetime metric can be best traded off for the response time performance metric, or vice versa. Furthermore, the intrusion detection interval can be dynamically adjusted based on the attacker behaviors to maximize the system lifetime while satisfying a system-imposed response time or network traffic requirement. Third, we propose and analyze a scalable and efficient region-based group key management protocol for managing mobile groups in mobile ad hoc networks. We take a region-based approach by which group members are broken into region-based subgroups, and leaders in subgroups securely communicate with each other to agree on a group key in response to membership change and member mobility events. We identify the optimal regional area size that minimizes the network communication cost while satisfying the application security requirements, allowing mobile groups to react to network partition/merge events for dynamic reconfigurability and survivability. We further investigate the effect of integrating QoS-aware intrusion detection with region-based group key management and identify combined optimal settings in terms of the optimal regional size and the optimal intrusion detection interval under which the security and performance properties of the system can be best optimized. We evaluate the merits of our proposed QoS-aware security protocols for mobile group communications through model-based mathematical analyses with extensive simulation validation. We perform thorough comparative analyses against baseline secure group communication protocols which do not consider security versus performance tradeoffs, including those based on individual rekeying, no intrusion detection, and/or no-region designs. The results obtained show that our proposed QoS-aware security protocols outperform these baseline algorithms.
- Developing distributed applications with distributed heterogeneous databases. Dixon, Eric Richard (Virginia Tech, 1993-05-05). This report identifies how Tuxedo fits into the scheme of distributed database processing. Tuxedo is an On-Line Transaction Processing (OLTP) system. Tuxedo was studied because it is the oldest and most widely used transaction processing system on UNIX. That means that it is established, extensively tested, and has the most tools available to extend its capabilities. The disadvantage of Tuxedo is that newer UNIX OLTP systems are often based on more advanced technology. For this reason, other OLTPs were examined to compare their additional capabilities with those offered by Tuxedo. As discussed in Sections I and II, Tuxedo is modeled according to X/Open's Distributed Transaction Processing (DTP) model. The DTP model includes three pieces: Application Programs (APs), Transaction Monitors (TMs), and Resource Managers (RMs). Tuxedo provides a TM in the model and uses the XA specification to communicate with RMs (e.g., Informix). Tuxedo's TX specification, which defines communications between APs and TMs, is also being considered by X/Open as the standard interface between APs and TMs; there is currently no standard interface between those two pieces. Tuxedo conforms to all of X/Open's current standards related to the model. Like the other major OLTPs for UNIX, Tuxedo is based on the client/server model. Tuxedo expands that support to include both synchronous and asynchronous service calls; Tuxedo calls that extension the enhanced client/server model. Tuxedo also expands its OLTP support to allow distributed transactions to include databases on IBM-compatible Personal Computers (PCs) and proprietary mainframe (Host) systems. Tuxedo calls this extension Enterprise Transaction Processing (ETP). The name enterprise comes from the fact that since Tuxedo supports database transactions spanning UNIX, PCs,
and Host computers, transactions can span the computer systems of entire businesses, or enterprises. Tuxedo is not as robust as the distributed database system model presented by Date. Tuxedo requires programmer participation in providing the capabilities that Date says the distributed database manager should provide. The coordinating process is the process which is coordinating a global transaction. According to Date's model, agents exist on remote sites participating in the transaction in order to handle the calls to the local resource manager. In Tuxedo, the programmer must provide that agent code in the form of services. Tuxedo does provide location transparency, but not in the form Date describes. Date describes location transparency as controlled by a global catalog. In Tuxedo, location transparency is provided by the location of servers as specified in the Tuxedo configuration file. Tuxedo also does not provide replication transparency as specified by Date. In Tuxedo, the programmer must write services which maintain replicated records. Date also describes five problems faced by distributed database managers. The first problem is query processing. Tuxedo provides capabilities to fetch records from databases, but does not provide the capabilities to do joins across distributed databases. The second problem is update propagation. Tuxedo does not provide for replication transparency, but it does provide enough capabilities for programmers to reliably maintain replicated records. The third problem is concurrency control, which is supported by Tuxedo. The fourth problem is the commit protocol; Tuxedo's commit protocol is the two-phase commit protocol. The fifth problem is the global catalog. Tuxedo does not have a global catalog. The other comparison presented in the paper was between Tuxedo and the other major UNIX OLTPs: Transarc's Encina, Top End, and CICS. Tuxedo is the oldest and has the largest market share.
This gives Tuxedo the advantage of being the most thoroughly tested and the most stable. Tuxedo also has the most tools available to extend its capabilities. The disadvantage Tuxedo has is that since it is the oldest, it is based on the oldest technology. Transarc's Encina is the most advanced UNIX OLTP. Encina is based on DCE and supports multithreading. However, Encina has been slow to market and has had stability problems because of its advanced features. Also, since Encina is based on DCE, its success is tied to the success of DCE. Top End is less advanced than Encina, but more advanced than Tuxedo. It is also much more stable than Encina. However, Top End is only now being ported from the NCR machines on which it was originally built. CICS is not yet commercially available. CICS is good for companies with CICS code to port to UNIX and CICS programmers who are already experts. The disadvantage of CICS is that companies which work with UNIX already and do not use CICS will find the interface less natural than Tuxedo, which originated under UNIX.
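Tuxedo's commit protocol is, as the report notes, two-phase commit. A minimal sketch of the coordinator/participant interaction follows; the class names are hypothetical and this reflects none of XA's actual C API, only the protocol's control flow.

```python
class Participant:
    """A resource manager in the XA sense, reduced to prepare/commit/rollback."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "active"
    def prepare(self):
        # Phase 1 vote: promise to commit, or refuse.
        self.state = "prepared" if self.can_commit else "aborted"
        return self.can_commit
    def commit(self):
        self.state = "committed"
    def rollback(self):
        self.state = "aborted"

def two_phase_commit(participants):
    """Phase 1: ask every RM to prepare. Phase 2: commit only if all voted
    yes; otherwise roll everyone back."""
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return True
    for p in participants:
        p.rollback()
    return False

rms = [Participant("informix"), Participant("other_rm")]
print(two_phase_commit(rms), [p.state for p in rms])
```

In the DTP model, Tuxedo's TM plays the coordinator role, issuing these prepare/commit calls to each RM over the XA interface.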
- Development of a measurement-based approach for monitoring the changes in an evolving quality management system. Caroli, Vivek (Virginia Tech, 1994-11-12). The concept of quality management is operationalized in an organization through a Quality Management System (QMS), a complex, coordinated set of activities and behaviors aimed at improving the quality of an organization's processes, goods, and services. Like all systems, a QMS must be planned, monitored, improved, and maintained over time to function at its best. For this, measurement is key. The standard of quality management performance developed by Triantis et al. (1991b) is the quality management system definition used in this thesis. The thesis subsequently makes three contributions. First, it provides a methodology for defining generic measures of QMS performance and evolution, and implements this methodology in creating more than 200 prototype measures for 10 of the 37 component "modules" of a QMS. Second, a methodology is presented for developing a tool to collect the very data called for by the measures. This methodology is implemented and a prototype questionnaire developed to collect measurement data for the Vendor/Contractor Relations (VCR) module of a QMS. Third, given the vast amount of data collected with the various questionnaires that needs to be manipulated in order to manage the QMS, it is important to be able to use automation. Therefore, it becomes necessary to logically organize the data. The entity-relationship (E/R) modeling technique is one approach that can be used to achieve this objective. This E/R approach is used to logically organize the data generated by the questionnaire for the VCR module. In so doing, one can assess the potential viability of this data modeling approach and begin laying the foundation for a database that will support the measurement requirements of a QMS.
- Domain knowledge specification using fact schema. Parthasarathy, S. (Virginia Tech, 1991-05-01). The advantages of integrating artificial intelligence (AI) technology with database management system (DBMS) technology are widely recognized, as indicated by the results from the survey of AI and database (DB) researchers. ...In our work, we have focused on the use of database systems to store large numbers of facts and rules for a rule-based AI system.
- Effects of driver characteristics and traffic composition on traffic flow. Golden, Gaylynn (Virginia Tech, 1994-05-15). This paper describes the development of simulation models for a variety of traffic flow scenarios. The major goal of the models was to evaluate the effects of driver characteristics and traffic composition on traffic flow. The five scenarios modeled and their respective objectives were as follows: 1. Vehicles switching lanes to increase speed. Objectives were throughput and number of lane switches. 2. Vehicles merging into an adjacent lane. Objectives were distance traveled before merging and number of collisions during lane switching. 3. Vehicles switching from the left or right lane into the center lane. Objectives were number of collisions and number of near misses during lane switching. 4. Vehicles passing on a two-lane bidirectional road. Objective was number of collisions during passing. 5. Vehicles switching from the center lane to the left or right lane to avoid an impassable obstacle. Objectives were number of collisions during lane switching and number of collisions with the obstacle. Various driver characteristics were implemented in the models. The concept of preoccupation/attentiveness was factored into the models through the use of varied reaction times. Other driver characteristics were incorporated in the models via the assignment of vehicle speed. The models provided for a wide variety of driver types. Examples are as follows: 1. Drivers in a hurry. 2. Tourists or drivers unfamiliar with the area. 3. Law-abiding drivers. 4. Aggressive and passive drivers. 5. Young, inexperienced drivers. 6. Tired truck drivers. The driver characteristics were varied via percentage allocations entered at run-time. The traffic composition for the models consisted of automobiles and multi-axle vehicles of fixed lengths. The percentages for each vehicle type were also entered at run-time.
The scope and level of detail for each model was delineated with assumptions. General assumptions made included the following: 1. An automobile is 10 feet long; a multi-axle vehicle is 30 feet long. 2. The width of a lane is such that only one vehicle can be accommodated at a time. 3. A vehicle is considered to be entirely in one lane or another. 4. A vehicle switches lanes instantaneously. 5. The reaction time of an attentive driver is normally distributed with a mean of 0.5; the reaction time of a preoccupied driver is normally distributed with a mean of 0.7. Three standard deviations are included to ensure complete population coverage. 6. A collision between two vehicles results in the termination of the vehicle causing the collision; the other vehicle continues. Implementation of these models was performed using the student version of the simulation language GPSS/H. The models were validated, but not verified against their real-world counterparts. Test results showed that select driver characteristics can affect traffic flow; however, no clear effect of traffic composition was shown.
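Assumption 5 above (normally distributed reaction times with means 0.5 and 0.7) can be sketched in a few lines. The actual models were written in GPSS/H; this is a Python illustration, and the standard deviation used here is our own assumption, chosen so that three standard deviations stay on one side of zero, since the abstract does not state one.

```python
import random

def reaction_time(preoccupied, rng):
    """Sample a driver's reaction time: normal with mean 0.5 for attentive
    drivers, 0.7 for preoccupied ones (per the model's assumption 5).
    Clamped at zero so no sample is negative."""
    mean = 0.7 if preoccupied else 0.5
    return max(0.0, rng.gauss(mean, mean / 6))  # sd = mean/6 is assumed

rng = random.Random(42)
samples = [reaction_time(False, rng) for _ in range(1000)]
avg = sum(samples) / len(samples)
print(round(avg, 1))
```

Varying the share of preoccupied drivers via a run-time percentage, as the models do, then shifts the reaction-time mix and, through it, the simulated flow.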
- The Effects of Open Source License Choice on Software Reuse. Brewer, John VIII (Virginia Tech, 2012-05-04). Previous research shows that software reuse can have a positive impact on software development economics, and that the adoption of a specific open source license can influence how a software product is received by users and programmers. This study attempts to bridge these two research areas by examining how the adoption of an open source license affects software reuse. Two reuse metrics were applied to 9,570 software packages contained in the Fedora Linux software repository. Each package was evaluated to determine how many external components it reuses, as well as how many times it is reused by other software packages. This data was divided into subsets according to license type and software category. The study found that, in general, (1) software released under a restrictive license reuses more external components than software released under a permissive license, and (2) software released under a permissive license is more likely to be reused than software released under a restrictive license. However, there are exceptions to these conclusions, as the effect of license choice on reuse varies by software category.
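The two reuse metrics described, how many external components a package reuses and how many times it is itself reused, amount to out-degree and in-degree counts over a dependency graph. The package names and license labels below are hypothetical, purely to show the counting:

```python
from collections import defaultdict

def reuse_metrics(depends_on):
    """Given package -> list of reused external components, compute both
    metrics: components each package reuses (out-degree) and times each
    component is reused by others (in-degree)."""
    reuses = {pkg: len(deps) for pkg, deps in depends_on.items()}
    reused_by = defaultdict(int)
    for deps in depends_on.values():
        for d in deps:
            reused_by[d] += 1
    return reuses, dict(reused_by)

# Hypothetical repository slice; license tags are illustrative only.
depends_on = {
    "app-gpl": ["libfoo", "libbar"],   # restrictive; reuses 2 components
    "tool-mit": ["libfoo"],            # permissive; reuses 1 component
    "libfoo": [],
    "libbar": [],
}
reuses, reused_by = reuse_metrics(depends_on)
print(reuses["app-gpl"], reused_by["libfoo"])
```

Splitting such counts by license type and software category, as the study does over 9,570 Fedora packages, yields the comparisons behind findings (1) and (2).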
- Formal Specification and Verification of Data-Centric Web Services. Moustafa, Iman Saleh (Virginia Tech, 2012-02-10). In this thesis, we develop and evaluate a formal model and contracting framework for data-centric Web services. The central component of our framework is a formal specification of a common Create-Read-Update-Delete (CRUD) data store. We show how this model can be used in the formal specification and verification of both basic and transactional Web service compositions. We demonstrate through both formal proofs and empirical evaluations that our proposed framework significantly decreases ambiguity about a service, enhances its reuse, and facilitates detection of errors in service-based implementations. Web Services are reusable software components that make use of standardized interfaces to enable loosely-coupled business-to-business and customer-to-business interactions over the Web. In such environments, service consumers depend heavily on the service interface specification to discover, invoke, and synthesize services over the Web. Data-centric Web services are services whose behavior is determined by their interactions with a repository of stored data. A major challenge in this domain is interpreting the data that must be marshaled between consumer and producer systems. While the Web Services Description Language (WSDL) is currently the de facto standard for Web services, it only specifies a service operation in terms of its syntactical inputs and outputs; it does not provide a means for specifying the underlying data model, nor does it specify how a service invocation affects the data. The lack of data specification potentially leads to erroneous use of the service by a consumer. In this work, we propose a formal contract for data-centric Web services. The goal is to formally and unambiguously specify the service behavior in terms of its underlying data model and data interactions.
We address the specification of a single service, a flow of services interacting with a single data store, and also the specification of distributed transactions involving multiple Web services interacting with different autonomous data stores. We use the proposed formal contract to decrease ambiguity about a service behavior, to fully verify a composition of services, and to guarantee correctness and data integrity properties within a transactional composition of services.
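The kind of CRUD contract the thesis formalizes can be hinted at with pre- and postconditions attached to a toy key-value store. This is a loose Python sketch of the contract idea, not the thesis's formal specification language, and the class and key names are hypothetical.

```python
class CrudStore:
    """Toy CRUD data store where each operation asserts its pre- and
    postcondition, the runtime analogue of a formal contract."""
    def __init__(self):
        self.rows = {}
    def create(self, key, value):
        assert key not in self.rows          # precondition: key is fresh
        self.rows[key] = value
        assert self.rows[key] == value       # postcondition: value stored
    def read(self, key):
        assert key in self.rows              # precondition: key exists
        return self.rows[key]
    def update(self, key, value):
        assert key in self.rows              # precondition: key exists
        self.rows[key] = value
        assert self.rows[key] == value       # postcondition: value replaced
    def delete(self, key):
        assert key in self.rows              # precondition: key exists
        del self.rows[key]
        assert key not in self.rows          # postcondition: key removed

store = CrudStore()
store.create("order-1", {"qty": 2})
store.update("order-1", {"qty": 3})
print(store.read("order-1"))
store.delete("order-1")
```

A consumer reading such a contract knows exactly how each invocation affects the data, which is the gap in WSDL that the thesis targets.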
- A Framework for the Study of Query Decomposition for Heterogeneous Distributed Database Management Systems. Triantis, Konstantinos P.; Egyhazy, Csaba J. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1987). This paper presents a framework for the study of the query decomposition translation for heterogeneous record-oriented database management systems. This framework is based on the applied database logic representation of relational, hierarchical and network databases. The input to the query decomposition translation is the query graph, which is derived from the complex-to-basic, external-to-conceptual and logical optimization translations. Once the query graph is obtained, the objective of the query decomposition translation is to break up a query expressed in terms of the actual or conceptual databases into its component parts or subqueries and find a strategy indicating the sequence of primitive or fundamental operations, and their corresponding processing sites in the network, necessary to answer the query. The query processing strategy is usually chosen so as to satisfy some performance criterion such as response time reduction; the choice of a query processing strategy is contingent on the successful estimation of intermediate results after each primitive operation. The prequery decomposition translation, the query decomposition translation and the size estimation issues are presented through an example based on the current implementation of the Distributed Access View Integration Database (DAVID) being built at NASA's Goddard Space Flight Center (GSFC).
- A graphical alternative to direct SQL-based querying. Beasley, Johnita (Virginia Tech, 1993-05-04). SQL provides a fairly straightforward means of querying database data. However, as with all command languages, SQL can get very complicated, even for experienced programmers. This complexity can be intimidating to the novice or intermediate user who needs to access data from a database with complex SQL statements, especially when users do not want to learn or even become familiar with a command-oriented query language like SQL.
- The impact of network characteristics on the selection of a deadlock detection algorithm for distributed databases. Daniel, Pamela Dorr Fuller (Virginia Tech, 1989-05-05). Much attention has been focused on the problem of deadlock detection in distributed databases, resulting in the publication of numerous algorithms to accomplish this function. The algorithms published to date differ greatly in many respects: timing, location, information collection, and basic approach. The emphasis of this research has been on theory and proof of correctness, rather than on practical application. Relatively few attempts have been made to implement the algorithms. The impact of the characteristics of the underlying database management system, transaction model, and communications network upon the effectiveness and performance of the proposed deadlock detection algorithms has largely been ignored. It is the intent of this study to examine more closely the interaction between a deadlock detection algorithm and one aspect of the environment in which it is implemented: namely, the communications network.
- Integrated Mobility and Service Management for Future All-IP Based Wireless NetworksHe, Weiping (Virginia Tech, 2009-03-20)Mobility management addresses the issues of how to track and locate a mobile node (MN) efficiently. Service management addresses the issues of how to efficiently deliver services to MNs. This dissertation aims to design and analyze integrated mobility and service management schemes for future all-IP based wireless systems. We propose and analyze per-user regional registration schemes extending from Mobile IP Regional Registration and Hierarchical Mobile IPv6 for integrated mobility and service management with the goal of minimizing the network signaling and packet delivery cost in future all-IP based wireless networks. If access routers in future all-IP based wireless networks are restricted to perform network-layer functions only, we investigate the design of intelligent routers, called dynamic mobility anchor points (DMAPs), to implement per-user regional management in IP wireless networks. These DMAPs are access routers (ARs) chosen by individual MNs to act as regional routers to reduce the signaling overhead for intra-regional movements. The DMAP domain size is based on a MN's mobility and service characteristics. A MN optimally determines when and where to launch a DMAP to minimize the network cost in serving the user's mobility and service management operations. We show that there exists an optimal DMAP domain size for each individual MN. We also demonstrate that the DMAP design can easily support failure recovery because of the flexibility of allowing a MN to choose any AR to be the DMAP for mobility and service management. If access routers are powerful and flexible in future all-IP based networks to perform network-layer and application-layer functions, we propose the use of per-user proxies that can run on access routers.
The user proxies can carry service context information such as cached data items and Web processing objects, and perform context-aware functions such as content adaptation for services engaged by the MN to help application executions. We investigate a proxy-based integrated mobility and service management architecture (IMSA) under which a client-side proxy is created on a per-user basis to serve as a gateway between a MN and all services engaged by the MN. Leveraging Mobile IP with route optimization, the proxy runs on an access router and cooperates with the home agent and foreign agent of the MN to maintain the location information of the MN to facilitate data delivery by services engaged by the MN. Further, the proxy optimally determines when to move with the MN so as to minimize the network cost associated with the user's mobility and service management operations. Finally we investigate a proxy-based integrated cache consistency and mobility management scheme called PICMM to support client-server query-based mobile applications. To improve query performance, the MN stores frequently used data in its cache. The MN's proxy receives invalidation reports or updated data objects from application servers, i.e., corresponding nodes (CNs), for cached data objects stored in the MN. If the MN is connected, the proxy will forward invalidation reports or fresh data objects to the MN. If the MN is disconnected, the proxy will store the invalidation reports or fresh data objects, and, once the MN is reconnected, the proxy will forward the latest cache invalidation report or data objects to the MN. We show that there is an optimal ``service area'' under which the overall cost due to query processing, cache consistency management and mobility management is minimized.
To further reduce network traffic, we develop a threshold-based hybrid cache consistency management policy such that whenever a data object is updated at the server, the server sends an invalidation report to the MN through the proxy to invalidate the cached data object only if the size of the data object exceeds the given threshold. Otherwise, the server sends a fresh copy of the data object through the proxy to the MN. We identify the best ``threshold'' value that would minimize the overall network cost. We develop mathematical models to analyze performance characteristics of DMAP, IMSA and PICMM developed in the dissertation research and demonstrate that they outperform existing schemes that do not consider integrated mobility and service management or that use static regional routers to serve all MNs in the system. The analytical results obtained are validated through extensive simulation. We conclude that integrated mobility and service management can greatly reduce the overall network cost for mobile multimedia and database applications, especially when the application's data service rate is high compared with the MN's mobility rate.
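The threshold-based hybrid policy described above reduces to a simple size test at the server: invalidate large objects, push fresh copies of small ones. A minimal sketch, assuming a byte-size threshold and hypothetical message and function names (the dissertation's actual protocol and optimal threshold derivation are not reproduced here):

```python
# Sketch of a threshold-based hybrid cache-consistency policy: on an
# update, the server sends only an invalidation report for objects
# larger than the threshold, but pushes the fresh copy itself for
# small objects. Names and message formats are illustrative.

def on_update(obj_id, fresh_data, threshold):
    """Return the message the server forwards to the MN via its proxy."""
    if len(fresh_data) > threshold:
        # Large object: invalidate only; the MN re-fetches on demand.
        return ("invalidate", obj_id)
    # Small object: cheaper to push the new copy immediately.
    return ("refresh", obj_id, fresh_data)

print(on_update("weather", b"x" * 10, threshold=100))
# ('refresh', 'weather', b'xxxxxxxxxx')
print(on_update("video", b"x" * 1000, threshold=100))
# ('invalidate', 'video')
```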
- Management Tools Associated with the Development of Computer Software ProductsEgyhazy, Csaba J. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1983)No abstract available.
- Microcomputer Based Database Management Systems in Support of Office AutomationEgyhazy, Csaba J. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1983)The evolutionary advancements in microprocessor technology as it relates to database management systems (DBMSs) are discussed. Practice and experience with five commercially available database management systems are reported, based mostly on data gathered from a series of interviews focusing on comparison among systems. Several prototype systems specifically designed to meet the needs of office information systems are identified, their conceptual framework ascertained and capabilities described. Finally, remarks on the limitations and future of microcomputer based DBMSs are made.