Browsing by Author "Allison, Donald C. S."
Now showing 1 - 20 of 35
- An algebraic model of software evolution
  Keller, Benjamin J. (Virginia Tech, 1990-09-05)
  A model of the software evolution process, called the Abstraction Refinement Model, is described which builds on the algebraic influence of the Laws of Programming and the transformational Draco Paradigm. The result is an algebraic structure consisting of the states of the software product (system descriptions) ordered by a relation of relative correctness with transformations defined between the system descriptions. This structure is interpreted as the software evolution space, with the intended semantics given by a model combining axiomatic semantics and the Lindenbaum algebra of a first-order logic. Using this interpretation, software evolution can be represented as a sequence of transformations on system descriptions. The primary contributions of the characterization of software evolution are to the understanding of maintenance and its relationship to development. The influence of development on maintenance is shown as the transfer of a "descriptive context" for the software system. This context is used as an information source during maintenance, and is progressively modified by maintenance activities. These activities are characterized by balanced forward and reverse transformations. The use of reverse transformations explains the role of reverse engineering in maintenance for information gathering and document reconstruction. Additionally, the form of maintenance affects the performance of the activity, with adaptive maintenance differing from corrective, perfective and preventive maintenance. These factors contribute to the descriptive nature and utility of the Abstraction Refinement Model in defining methodologies for maintenance.
- Algorithms and Orders for Finding Noncommutative Gröbner Bases
  Keller, Benjamin J. (Virginia Tech, 1997-03-17)
  The problem of choosing efficient algorithms and good admissible orders for computing Gröbner bases in noncommutative algebras is considered. Gröbner bases are an important tool that makes many problems in polynomial algebra computationally tractable. However, the computation of Gröbner bases is expensive, and in noncommutative algebras is not guaranteed to terminate. The algorithm, together with the order used to determine the leading term of each polynomial, is known to affect the cost of the computation, and these two choices are the focus of this thesis. A Gröbner basis is a set of polynomials computed, using Buchberger's algorithm, from another set of polynomials. The noncommutative form of Buchberger's algorithm repeatedly constructs a new polynomial from a triple, which is a pair of polynomials whose leading terms overlap and form a nontrivial common multiple. The algorithm leaves a number of details underspecified, and can be altered to improve its behavior. A significant improvement is the development of a dynamic dictionary matching approach that efficiently solves the pattern matching problems of noncommutative Gröbner basis computations. Three algorithmic alternatives are considered: the strategy for selecting triples (selection), the strategy for removing triples from consideration (triple elimination), and the approach to keeping the set interreduced (set reduction). Experiments show that the selection strategy is generally more significant than the other techniques, with the best strategy being the one that chooses the triple with the shortest common multiple. The best triple elimination strategy, ignoring resource constraints, is the Gebauer-Müller strategy. However, another strategy is defined that can perform as well as the Gebauer-Müller strategy in less space. The experiments also show that the admissible order used to determine the leading term of a polynomial is more significant than the algorithm. Experiments indicate that the choice of order is dependent on the input set of polynomials, but also suggest that the length lexicographic order is a good choice for many problems. A more practical approach to choosing an order may be to develop heuristics that attempt to find an order that minimizes the number of overlaps considered during the computation.
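To make the selection strategy concrete, here is a minimal Python sketch (not the thesis's implementation) of how triples arise from overlapping leading words in the noncommutative setting, and how a strategy that prefers the triple with the shortest common multiple can be expressed. The function names and example words are purely illustrative.

```python
from itertools import product

def overlaps(u, v):
    """Yield common multiples w = u + v[k:] formed when a proper suffix of u
    equals a proper prefix of v (words are strings over a fixed alphabet)."""
    for k in range(1, min(len(u), len(v))):
        if u[-k:] == v[:k]:
            yield u + v[k:]          # overlap of length k

def triples(leading_words):
    """All (common multiple, u, v) triples among the leading words."""
    result = []
    for u, v in product(leading_words, repeat=2):
        for w in overlaps(u, v):
            result.append((w, u, v))
    return result

def select_shortest(pending):
    """Selection strategy: pick the triple whose common multiple is shortest."""
    return min(pending, key=lambda t: len(t[0]))

if __name__ == "__main__":
    words = ["xyx", "yxz", "zz"]
    pending = triples(words)
    for w, u, v in sorted(pending, key=lambda t: len(t[0])):
        print(f"{u!r} and {v!r} overlap in common multiple {w!r}")
    print("next triple:", select_shortest(pending))
```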
- Analysis and Implementation of Algorithms for Noncommutative Algebra
  Struble, Craig Andrew (Virginia Tech, 2000-04-24)
  A fundamental task of algebraists is to classify algebraic structures. For example, the classification of finite groups has been widely studied and has benefited from the use of computational tools. Advances in computer power have allowed researchers to attack problems never possible before. In this dissertation, algorithms for noncommutative algebra, where ab is not necessarily equal to ba, are examined with practical implementations in mind. Different encodings of associative algebras and modules are also considered. To effectively analyze these algorithms and encodings, the encoding neutral analysis framework is introduced. This framework builds on the ideas used in the arithmetic complexity framework defined by Winograd. Results in this dissertation fall into three categories: analysis of algorithms, experimental results, and novel algorithms. Known algorithms for calculating the Jacobson radical and Wedderburn decomposition of associative algebras are reviewed and analyzed. The algorithms are compared experimentally and a recommendation for algorithms to use in computer algebra systems is given based on the results. A new algorithm for constructing the Drinfel'd double of finite dimensional Hopf algebras is presented. The performance of the algorithm is analyzed and experiments are performed to demonstrate its practicality. The performance of the algorithm is elaborated upon for the special case of group algebras and shown to be very efficient. The MeatAxe algorithm for determining whether a module contains a proper submodule is reviewed. Implementation issues for the MeatAxe in an encoding neutral environment are discussed. A new algorithm for constructing endomorphism rings of modules defined over path algebras is presented. This algorithm is shown to have better performance than previously used algorithms. Finally, a linear time algorithm is described that determines whether a quotient of a path algebra, with a known Gröbner basis, is finite or infinite dimensional. This algorithm is based on the Aho-Corasick pattern matching automata. The resulting automaton is used to efficiently determine the dimension of the algebra, enumerate a basis for the algebra, and reduce elements to normal forms.
- Analysis of Function Component Complexity for Hypercube Homotopy Algorithms
  Chakraborty, Amal; Allison, Donald C. S.; Ribbens, Calvin J.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1990)
  Probability-one homotopy algorithms are a class of methods for solving nonlinear systems of equations that are globally convergent from an arbitrary starting point with probability one. The essence of these homotopy algorithms is the construction of a homotopy map ρₐ and the subsequent tracking of a smooth curve γ in the zero set ρₐ⁻¹(0) of ρₐ. Tracking the zero curve γ requires repeated evaluation of the map ρₐ, its n x (n + 1) Jacobian matrix Dρₐ, and numerical linear algebra for calculating the kernel of Dρₐ. This paper analyzes parallel homotopy algorithms on a hypercube, considering the numerical linear algebra, several communication topologies and problem decomposition strategies, function component complexity, problem size, and the effect of different component complexity distributions. These parameters interact in complicated ways, but some general principles can be inferred based on empirical results.
- Analysis of Function Component Complexity for Hypercube Homotopy Algorithms
  Chakraborty, Amal; Allison, Donald C. S.; Ribbens, Calvin J.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991)
  Probability-one homotopy algorithms are a class of methods for solving nonlinear systems of equations that are globally convergent from an arbitrary starting point with probability one. The essence of these homotopy algorithms is the construction of a homotopy map ρₐ and the subsequent tracking of a smooth curve γ in the zero set ρₐ⁻¹(0) of ρₐ. Tracking the zero curve γ requires repeated evaluation of the map ρₐ, its n x (n + 1) Jacobian matrix Dρₐ, and numerical linear algebra for calculating the kernel of Dρₐ. This paper analyzes parallel homotopy algorithms on a hypercube, briefly reviewing the numerical linear algebra, several communication topologies and problem decomposition strategies, and concentrating on function component complexity, problem size, and the effect of different component complexity distributions. These parameters interact in complicated ways, but some general principles can be inferred based on empirical results. Implications for developing reliable and efficient parallel mathematical software packages for this problem area are also discussed.
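As an illustration of the zero-curve tracking that these reports parallelize, here is a minimal numpy sketch (not the hypercube implementation analyzed above) of a convex homotopy ρₐ(λ, x) = λF(x) + (1 − λ)(x − a): the tangent is taken from the kernel of the n x (n + 1) Jacobian, an Euler predictor is followed by least-norm Newton corrections, and the toy system, step size, and stopping rule are all assumptions made for the sake of the example.

```python
import numpy as np

def F(x):                      # toy nonlinear system F(x) = 0 to be solved
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def JF(x):                     # its Jacobian
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def rho(a, lam, x):            # homotopy map rho_a(lambda, x)
    return lam * F(x) + (1.0 - lam) * (x - a)

def Drho(a, lam, x):           # n x (n + 1) Jacobian [d/dlambda | d/dx]
    n = len(x)
    dlam = (F(x) - (x - a)).reshape(n, 1)
    dx = lam * JF(x) + (1.0 - lam) * np.eye(n)
    return np.hstack([dlam, dx])

def track(a, h=0.01, max_steps=2000):
    y = np.concatenate([[0.0], a])          # y = (lambda, x); the curve starts at lambda = 0, x = a
    t_prev = None
    for _ in range(max_steps):
        D = Drho(a, y[0], y[1:])
        t = np.linalg.svd(D)[2][-1]         # kernel of D: tangent to the zero curve
        if t_prev is None and t[0] < 0:
            t = -t                          # move toward increasing lambda initially
        elif t_prev is not None and t @ t_prev < 0:
            t = -t                          # keep a consistent orientation across steps
        y = y + h * t                       # Euler predictor
        for _ in range(5):                  # Newton corrector (least-norm steps back onto the curve)
            D = Drho(a, y[0], y[1:])
            y = y - np.linalg.pinv(D) @ rho(a, y[0], y[1:])
        t_prev = t
        if y[0] >= 1.0:
            x = y[1:]
            for _ in range(5):              # polish with plain Newton on F at lambda = 1
                x = x - np.linalg.solve(JF(x), F(x))
            return x
    raise RuntimeError("curve tracking did not reach lambda = 1")

if __name__ == "__main__":
    root = track(a=np.array([1.0, 1.0]))
    print("zero found:", root, "residual:", F(root))
```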
- Analysis of networks with dynamic topologies
  Moose, Robert Lewis (Virginia Polytechnic Institute and State University, 1987)
  Dynamic hierarchical networks represent an architectural strategy for employing adaptive behavior in applications sensitive to highly variable external demands or uncertain internal conditions. The characteristics of such architectures are described, and the significance of adaptive capability is discussed. The necessity for assessing cost/benefit tradeoffs leads to the use of queueing network models. The general model, a network of M/M/1 queues in a random environment, is introduced and then is simplified so that the links may be treated as isolated M/M/1 queues in a random environment. This treatment yields a formula for approximate mean network delay by combining matrix-geometric results (mean queue length and mean delay) for the individual links. Conditions under which the analytic model is considered valid are identified through comparison with a discrete event simulation model. Last, performance of the dynamic hierarchy is compared with that of the static hierarchy. This comparison establishes conditions for which the dynamic architecture enables performance equal or nearly equal to performance of the static architecture.
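For reference, the classical isolated M/M/1 quantities that the per-link approximation builds on are shown below; the thesis's random-environment model generalizes these via matrix-geometric methods, which are not reproduced here.

```latex
% Classical isolated M/M/1 link with arrival rate \lambda, service rate \mu,
% and utilization \rho = \lambda/\mu < 1: mean number in system and mean delay,
% the per-link quantities that are combined to approximate mean network delay.
L = \frac{\rho}{1 - \rho}, \qquad W = \frac{1}{\mu - \lambda}
```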
- Blending Methods for Composite Laminate Optimization
  Adams, David Bruce (Virginia Tech, 2002-08-16)
  Composite panel structure optimization is commonly decomposed into panel optimization subproblems, with specified local loads, resulting in manufacturing incompatibilities between adjacent panel designs. Using genetic algorithms to optimize local panel stacking sequences allows panel populations of stacking sequences to evolve in parallel and send migrants to adjacent panels, so as to blend the local panel designs globally. The blending process is accomplished using the edit distance between individuals of a population and the set of migrants from adjacent panels. The objective function evaluating the fitness of designs is modified according to the severity of mismatches detected between neighboring populations. This lays the groundwork for natural evolution to a blended global solution without leaving the paradigm of genetic algorithms. An additional method proposed here for constructing globally blended panel designs uses a parallel decomposition antithetical to that of earlier work. Rather than performing concurrent panel genetic optimizations, a single genetic optimization is conducted for the entire structure with the parallelism solely within the fitness evaluations. A guide based genetic algorithm approach is introduced to exclusively generate and evaluate valid globally blended designs, utilizing a simple master-slave parallel implementation, implicitly reducing the size of the problem design space and increasing the quality of discovered local optima.
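A minimal sketch of the edit-distance penalty idea follows; the penalty weight, the dummy structural fitness, and the ply-angle encoding are illustrative assumptions, not the dissertation's actual objective function.

```python
def edit_distance(a, b):
    """Levenshtein distance between two stacking sequences (tuples of ply angles)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def blended_fitness(design, migrants, structural_fitness, penalty=0.1):
    """Penalize a panel design by its mismatch against migrants from adjacent panels."""
    mismatch = sum(edit_distance(design, m) for m in migrants)
    return structural_fitness(design) - penalty * mismatch

if __name__ == "__main__":
    design = (45, -45, 0, 90, 0)
    migrants = [(45, -45, 0, 0), (45, -45, 90, 90, 0)]
    # A placeholder structural fitness (sequence length) stands in for a real analysis.
    print("blended fitness:", blended_fitness(design, migrants, structural_fitness=len))
```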
- Data coverage performance evaluation for real-time systems
  Brooks, Gail Dean (Virginia Tech, 1990-04-05)
  Throughout the life cycle of a real-time, distributed computer system, numerous situations arise which require access to detailed system performance information. One important type of performance information is known as data coverage. Data coverage is the process of verifying that all ranges of values for a particular variable in a piece of software have been exercised. Data regarding variable values can be very useful in determining the correctness of program execution. This project investigated the design, cost and usefulness of adding a data coverage performance evaluation capability within an existing Navy weapon system. The target system used is a real-time, loosely coupled distributed computer system. Experiments were created by writing programs in a high level language and executing the programs on the target system. Several of the programs contained "seeded" errors. Experiments were monitored and data was collected by an existing performance evaluation system. The data was collected in a real-time, non-interfering manner and evaluated off-line. The off-line evaluation was accomplished by modifying an existing software package. The package did not include any capabilities for producing a data coverage report. Therefore, it was necessary to enhance the software package such that a data coverage report was created. A description of the investigation and the various factors used for the evaluation of a data coverage capability, the design proposal, and the cost analysis are included in this report.
- Decomposing Rectilinear Figures into Rectangles
  Chadha, Ritu; Allison, Donald C. S. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988)
  We discuss the problem of decomposing rectilinear regions, with or without holes, into a minimum number of rectangles. There are two different problems considered here: decomposing a figure into non-overlapping parts, called partitioning, and decomposing a figure into possibly overlapping parts, called covering. A method is outlined and proved for solving the above two problems, and algorithms for the solutions of these problems are presented. The partitioning problem can be solved in time O(n^(5/2)), where n is the number of vertices of the figure, whereas the covering problem is exponential in its time complexity.
- Domain Decomposition Preconditioners for Hermite Collocation Problems
  Mateescu, Gabriel (Virginia Tech, 1998-12-14)
  Accelerating the convergence rate of Krylov subspace methods with parallelizable preconditioners is essential for obtaining effective iterative solvers for very large linear systems of equations. Substructuring provides a framework for constructing robust and parallel preconditioners for linear systems arising from the discretization of boundary value problems. Although collocation is a very general and effective discretization technique for many PDE problems, there has been relatively little work on preconditioners for collocation problems. This thesis proposes two preconditioning methods for solving linear systems of equations arising from Hermite bicubic collocation discretization of elliptic partial differential equations on square domains with mixed boundary conditions. The first method, called edge preconditioning, is based on a decomposition of the domain in parallel strips, and the second, called edge-vertex preconditioning, is based on a two-dimensional decomposition. The preconditioners are derived in terms of two special rectangular grids -- a coarse grid with diameter H and a hybrid coarse/fine grid -- which together with the fine grid of diameter h provide the framework for approximating the interface problem induced by substructuring. We show that the proposed methods are effective for nonsymmetric indefinite problems, both from the point of view of the cost per iteration and of the number of iterations. For an appropriate choice of H, the edge preconditioner requires O(N) arithmetic operations per iteration, while the edge-vertex preconditioner requires O(N^(4/3)) operations, where N is the number of unknowns. For the edge-vertex preconditioner, the number of iterations is almost constant when h and H decrease such that H/h is held constant, and it increases very slowly with H when h is held constant. For both the edge and edge-vertex preconditioners the number of iterations depends only weakly on h when H is constant. The edge-vertex preconditioner outperforms the edge preconditioner for small enough H. Numerical experiments illustrate the parallel efficiency of the preconditioners, which is similar to or even better than that provided by the well-known PETSc parallel software library for scientific computing.
- Efficient Numeric Computation of a Phase Diagram in Biased Diffusion of Two Species
  Parks, Michael Lawrence (Virginia Tech, 2000-05-09)
  A lattice gas with equal numbers of oppositely charged particles, diffusing under the influence of a uniform electric field and an excluded volume condition, undergoes an order-disorder phase transition, controlled by the particle density and the field strength. This transition may be continuous (second order) or discontinuous (first order). Results from previous discrete simulations are shown, and a theoretical continuum model is developed. As this is a nonequilibrium system, there is no associated free energy to determine the location of a first order transition. Instead, the model equations for this system are evolved in time numerically, and the locus of this transition is determined via the presence of a stable state with coexisting regions of order and disorder. The Crank-Nicolson, nonlinear Gauss-Seidel, and GMRES algorithms used to solve the model equations are discussed. Performance enhancements and limits on convergence are considered.
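The thesis applies these solvers to a coupled, nonlinear driven-diffusion system; as a much simpler illustration of the Crank-Nicolson half-implicit time step alone, here is a sketch for linear 1D diffusion with Dirichlet boundaries. The grid, time step, and initial data are arbitrary choices for the example.

```python
import numpy as np

def crank_nicolson_step(u, D, dt, dx):
    """One Crank-Nicolson step for u_t = D u_xx on a 1D grid with u = 0 at both ends."""
    n = len(u)
    r = D * dt / (2.0 * dx**2)
    # Second-difference matrix with homogeneous Dirichlet boundaries.
    L = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) +
         np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * L          # implicit half of the step
    B = np.eye(n) + r * L          # explicit half of the step
    return np.linalg.solve(A, B @ u)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)[1:-1]      # interior grid points
    u = np.exp(-200.0 * (x - 0.5)**2)         # initial density bump
    for _ in range(100):
        u = crank_nicolson_step(u, D=1.0, dt=1e-4, dx=x[1] - x[0])
    print("mass after 100 steps:", u.sum() * (x[1] - x[0]))
```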
- Enhancing and Reconstructing Digitized Handwriting
  Swain, David James (Virginia Tech, 1997-08-11)
  This thesis involves restoration, reconstruction, and enhancement of a digitized library of hand-written documents. Imaging systems that perform this digitization often degrade the quality of the original documents. Many techniques exist for reconstructing, restoring, and enhancing digital images; however, many require a priori knowledge of the imaging system. In this study, only partial a priori knowledge is available, and therefore unknown parameters must be estimated before restoration, reconstruction, or enhancement is possible. The imaging system used to digitize the documents library has degraded the images in several ways. First, it has introduced a ringing that is apparent around each stroke. Second, the system has eliminated strokes of narrow widths. To restore these images, the imaging system is modeled by estimating the point spread function from sample impulse responses, and the image noise is estimated in an attempt to apply standard linear restoration techniques. The applicability of these techniques is investigated in the first part of this thesis. Then nonlinear filters, structural techniques, and enhancement techniques are applied to obtain substantial improvements in image quality.
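Once a point spread function has been estimated, one standard linear restoration technique is frequency-domain Wiener deconvolution. The sketch below is a generic illustration with a synthetic image, an assumed 3x3 blur kernel, and an assumed noise-to-signal constant; it is not the processing pipeline used in the thesis, which found nonlinear and structural methods necessary as well.

```python
import numpy as np

def wiener_restore(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with a constant noise-to-signal ratio k."""
    H = np.fft.fft2(psf, s=blurred.shape)          # transfer function of the estimated PSF
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H)**2 + k) * G    # Wiener estimate of the original image
    return np.real(np.fft.ifft2(F_hat))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = np.zeros((64, 64))
    image[20:44, 30:34] = 1.0                                # a synthetic "stroke"
    psf = np.outer([0.25, 0.5, 0.25], [0.25, 0.5, 0.25])     # assumed 3x3 blur kernel
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf, s=image.shape)))
    blurred += 0.01 * rng.standard_normal(image.shape)       # additive noise
    restored = wiener_restore(blurred, psf)
    print("mean restoration error:", np.abs(restored - image).mean())
```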
- Exploring the powers of stacks and queues via graph layouts
  Pemmaraju, Sriram V. (Virginia Tech, 1992)
  In this dissertation we employ stack and queue layouts of graphs to explore the relative power of stacks and queues. Stack layouts and queue layouts of graphs can be examined from several points of view. A stack or a queue layout of a graph can be thought of as an embedding of the graph in a plane satisfying certain constraints, or as a graph linearization that optimizes certain cost measures, or as a scheme to process the edges of the graph using the fewest number of stacks or queues. All three points of view permeate this research, though the third point of view dominates. Specific problems in stack and queue layouts of graphs have their origin in the areas of VLSI, fault-tolerant computing, scheduling parallel processes, sorting with a network of stacks and queues, and matrix computations. We first present two tools that are useful in the combinatorial and algorithmic analysis of stack and queue layouts as well as in determining bounds on the stacknumber and the queuenumber for a variety of graphs. The first tool is a formulation of a queue layout of a graph as a covering of its adjacency matrix with staircases. Not only does this formulation serve as a tool for analyzing stack and queue layouts, it also leads to efficient algorithms for several problems related to sequences, graph theory, and computational geometry. The connection between queue layouts and matrix covers also forms the basis of a new scheme for performing matrix computations on a data driven network. Our analysis reveals that this scheme uses less hardware and is faster than existing schemes. The second tool is obtained by considering separated and mingled layouts of graphs. This tool allows us to obtain lower bounds on the stacknumber and the queuenumber of a graph by partitioning the graph into subgraphs and simply concentrating on the interaction of the subgraphs. These tools are used to obtain results in three areas. The first area is stack and queue layouts of directed acyclic graphs (dags). This area is motivated by problems of scheduling parallel processes. We establish the stacknumber and the queuenumber of classes of dags such as trees, unicyclic graphs, outerplanar graphs, and planar graphs. We then present linear time algorithms to recognize 1-stack dags and leveled-planar dags. In contrast, we show that the problem of recognizing 9-stack dags and the problem of recognizing 4-queue dags are both NP-complete. The second area is stack and queue layouts of partially ordered sets (posets). We establish upper bounds on the queuenumber of a poset in terms of other measures such as length, width, and jumpnumber. We also present lower bounds on the stacknumber and on the queuenumber of certain classes of posets. We conclude by showing that the problem of recognizing a 4-queue poset is NP-complete. The third area is queue layouts of planar graphs. While it has been shown that the stacknumber of the family of planar graphs is 4, the queuenumber of planar graphs is unknown. We conjecture that a family of planar graphs—the stellated triangles—has unbounded queuenumber; using separated and mingled layouts, we demonstrate significant progress towards that result.
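The defining condition behind these layouts is that no two edges assigned to the same queue may nest with respect to the vertex order (the staircase cover of the adjacency matrix expresses the same constraint). A small validity check, with an illustrative four-vertex example, might look like the following sketch; it is not code from the dissertation.

```python
def nested(e, f):
    """True if edge e strictly nests inside edge f (endpoints are positions in the vertex order)."""
    (a, b), (c, d) = sorted(e), sorted(f)
    return c < a and b < d

def is_queue_layout(order, assignment):
    """Check a queue layout: `order` maps vertex -> position, `assignment` maps queue -> edge list.
    The layout is valid iff no two edges in the same queue nest with respect to the order."""
    for edges in assignment.values():
        pos = [(order[u], order[v]) for u, v in edges]
        for i in range(len(pos)):
            for j in range(len(pos)):
                if i != j and nested(pos[i], pos[j]):
                    return False
    return True

if __name__ == "__main__":
    order = {"a": 0, "b": 1, "c": 2, "d": 3}
    # Edges (a, d) and (b, c) nest, so they must go in different queues.
    bad = {0: [("a", "d"), ("b", "c")]}
    good = {0: [("a", "d"), ("a", "b")], 1: [("b", "c")]}
    print(is_queue_layout(order, bad), is_queue_layout(order, good))   # False True
```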
- Fast geometric algorithms
  Noga, Mark T. (Virginia Polytechnic Institute and State University, 1984)
  This thesis addresses a number of important problems which fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L₁ hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L₁ diameter of a set of points, the metric traveling salesman problem, and finding the superrange of starshaped and monotone polygons. The main theme of all our work has been to develop a set of very fast state-of-the-art algorithms which supersede any rivals in terms of speed and ease of implementation. In some cases we have refined existing algorithms; for others we have developed new techniques which add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry.
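As one example from this topic list, a standard O(n log n) convex hull routine (Andrew's monotone chain) is sketched below; it is a textbook method, not necessarily one of the refined or adaptive variants developed in the thesis.

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                              # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):                    # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]             # drop duplicated endpoints

if __name__ == "__main__":
    pts = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)]
    print(convex_hull(pts))                    # the four corners of the square
```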
- Generalized hill climbing algorithms for discrete optimization problems
  Johnson, Alan W. (Virginia Tech, 1996-10-01)
  Generalized hill climbing (GHC) algorithms are introduced as a tool to address difficult discrete optimization problems. Particular formulations of GHC algorithms include simulated annealing (SA), local search, and threshold accepting (TA), among others. A proof of convergence of GHC algorithms is presented that relaxes the sufficient conditions for the most general proof of convergence for stochastic search algorithms in the literature (Anily and Federgruen [1987]). Proofs of convergence for SA are based on the concept that deteriorating (hill climbing) transitions between neighboring solutions are accepted by comparing a deterministic function of both the solution change cost and a temperature parameter to a uniform (0,1) random variable. GHC algorithms represent a more general model, whereby deteriorating moves are accepted according to a general random variable. Computational results are reported that illustrate relationships between the GHC algorithm's finite-time performance on three problems and the general random variable formulations used. The dissertation concludes with suggestions for further research.
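A minimal sketch of the GHC acceptance rule follows: a deteriorating move is accepted when its cost increase does not exceed a draw from an iteration-dependent random variable R_k. Drawing R_k from an exponential distribution with a shrinking mean recovers SA-like behavior, and a constant R_k recovers threshold accepting. The bit-string test problem, the cooling schedule, and the threshold value are illustrative assumptions.

```python
import random

def ghc(x0, neighbor, cost, hill_climbing_rv, iterations=10_000):
    """Generalized hill climbing: accept a deteriorating move when the cost increase
    does not exceed a draw from the (iteration-dependent) random variable R_k."""
    x, best = x0, x0
    for k in range(iterations):
        y = neighbor(x)
        if cost(y) - cost(x) <= hill_climbing_rv(k):
            x = y
        if cost(x) < cost(best):
            best = x
    return best

def flip_one_bit(x):
    i = random.randrange(len(x))
    return x[:i] + (1 - x[i],) + x[i + 1:]

def cost(x):
    """Toy objective: Hamming distance from the alternating pattern 0, 1, 0, 1, ..."""
    return sum(b != (i % 2) for i, b in enumerate(x))

if __name__ == "__main__":
    random.seed(1)
    x0 = tuple(random.randint(0, 1) for _ in range(40))
    # SA-like: R_k ~ Exponential with mean T_k, where T_k shrinks geometrically.
    sa_like = lambda k: random.expovariate(1.0 / max(2.0 * 0.999**k, 1e-9))
    # Threshold accepting: constant R_k (here below the smallest cost increase,
    # so it degenerates to plain local search on this integer-valued objective).
    ta_like = lambda k: 0.5
    print("SA-like final cost:", cost(ghc(x0, flip_one_bit, cost, sa_like)))
    print("TA-like final cost:", cost(ghc(x0, flip_one_bit, cost, ta_like)))
```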
- Granularity Issues for Solving Polynomial Systems via Globally Convergent Algorithms on a Hypercube
  Allison, Donald C. S.; Chakraborty, Amal; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988-05-01)
  Polynomial systems of equations frequently arise in many applications such as solid modeling, robotics, computer vision, chemistry, chemical engineering, and mechanical engineering. Locally convergent iterative methods such as quasi-Newton methods may diverge or fail to find all meaningful solutions of a polynomial system. Recently a homotopy algorithm has been proposed for polynomial systems that is guaranteed globally convergent (always converges from an arbitrary starting point) with probability one, finds all solutions to the polynomial system, and has a large amount of inherent parallelism. There are several ways the homotopy algorithms can be decomposed to run on a hypercube. The granularity of a decomposition has a profound effect on the performance of the algorithm. The results of decompositions with two different granularities are presented. The experiments were conducted on an iPSC-16 hypercube using actual industrial problems.
- The Granularity of Parallel Homotopy Algorithms for Polynomial Systems of Equations
  Allison, Donald C. S.; Harimoto, S.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1988-05-01)
  Polynomial systems consist of n polynomial functions in n variables, with real or complex coefficients. Finding zeros of such systems is challenging because there may be a large number of solutions, and Newton-type methods can rarely be guaranteed to find the complete set of solutions. There are homotopy algorithms for polynomial systems of equations that are globally convergent from an arbitrary starting point with probability one, are guaranteed to find all the solutions, and are robust, accurate, and reasonably efficient. There is inherent parallelism at several levels in these algorithms. Several parallel homotopy algorithms with different granularities are studied on several different parallel machines, using actual industrial problems from chemical engineering and solid modeling.
- High Dimensional Homotopy Curve Tracking on a Shared Memory Multiprocessor
  Allison, Donald C. S.; Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T. (Department of Computer Science, Virginia Polytechnic Institute & State University, 1991)
  Results are reported for a series of experiments involving numerical curve tracking on a shared memory parallel computer. Several algorithms exist for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then the tracking of some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. A parallel version of HOMPACK is implemented on a shared memory parallel computer with various levels and degrees of parallelism (e.g., linear algebra, function and Jacobian matrix evaluation), and a detailed study is presented for each of these levels with respect to the speedup in execution time obtained with the parallelism, the time spent implementing the parallel code and the extra memory allocated by the parallel algorithm.
- Homotopy algorithms for H²/H∞ control analysis and synthesis
  Ge, Yuzhen (Virginia Tech, 1993-12-02)
  The problem of finding a reduced order model, optimal in the H² sense, to a given system model is a fundamental one in control system analysis and design. The addition of a H∞ constraint to the H² optimal model reduction problem results in a more practical yet computationally more difficult problem. Without the global convergence of homotopy methods, both the H² optimal and the combined H²/H∞ model reduction problems are very difficult. For both problems homotopy algorithms based on several formulations—input normal form; Ly, Bryson, and Cannon's 2 x 2 block parametrization; a new nonminimal parametrization—are developed and compared here. For the H² optimal model order reduction problem, these numerical algorithms are also compared with that based on Hyland and Bernstein's optimal projection equations. Both the input normal form and Ly form are very efficient compared to the over-parametrization formulation and the optimal projection equations approach, since they utilize the minimal number of possible degrees of freedom. However, they can fail to exist or be very ill conditioned. The conditions under which the input normal form and the Ly form become ill conditioned are examined. The over-parametrization formulation solves the ill conditioning issue, and usually is more efficient than the approach based on solving the optimal projection equations for the H² optimal model reduction problem. However, the over-parametrization formulation introduces a very high order singularity at the solution, and it is doubtful whether this singularity can be overcome by using interpolation or other existing methods. Although there are numerous algorithms for solving Riccati equations, there still remains a need for algorithms which can operate efficiently on large problems and on parallel machines and which can be generalized easily to solve variants of Riccati equations. This thesis gives a new homotopy-based algorithm for solving Riccati equations on a shared memory parallel computer. The central part of the algorithm is the computation of the kernel of the Jacobian matrix, which is essential for the corrector iterations along the homotopy zero curve. Using a Schur decomposition the tensor product structure of various matrices can be efficiently exploited. The algorithm allows for efficient parallelization on shared memory machines. The linear-quadratic-Gaussian (LQG) theory has engendered a systematic approach to synthesize high performance controllers for nominal models of complex, multi-input multi-output systems and hence it is a breakthrough in modern control theory. Homotopy algorithms for both full and reduced-order LQG controller design problems with an H∞ constraint on disturbance attenuation are developed. The H∞ constraint is enforced by replacing the covariance Lyapunov equation by a Riccati equation whose solution gives an upper bound on H² performance. The numerical algorithm, based on homotopy theory, solves the necessary conditions for a minimum of the upper bound on H² performance. The algorithms are based on two minimal parameter formulations: Ly, Bryson, and Cannon's 2 x 2 block parametrization and the input normal Riccati form parametrization. An over-parametrization formulation is also proposed. Numerical experiments suggest that the combination of a globally convergent homotopy method with a minimal parameter formulation applied to the upper bound minimization gives excellent results for mixed-norm synthesis.
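For reference, the standard continuous-time algebraic Riccati equation that such solvers target is shown below; the thesis's homotopy deforms an easily solved start system into variants of this equation while following the zero curve, and the exact forms used to enforce the H∞ bound differ from this generic statement.

```latex
% Standard continuous-time algebraic Riccati equation in the unknown X,
% with state matrix A, input matrix B, state weight Q, and input weight R:
A^{T}X + XA - XBR^{-1}B^{T}X + Q = 0
```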
- Key Management Techniques for Dynamic Secure Multicasting
  Koneni, Madhu (Virginia Tech, 2003-05-08)
  Most of the Internet applications today require multicasting. For example, software updates, multimedia content distribution, interactive gaming and stock data distribution require multicast services. All of these applications require privacy and authenticity of the participants. Most of the multicasting groups are dynamic and some of them are large in number. Only those users who belong to the multicasting group should receive the information and be able to decrypt it. New users joining the group should receive information immediately but should not understand the information that was released prior to their joining. Similarly, if users leave the group, they should not receive any further information and should not be able to decrypt it. Keys need to be distributed to the users belonging to the current session and hence some kind of key management is required. Existing schemes for secure multicasting are limited to small and static groups. To allow large and dynamic groups to use the services of multicasting, some protocols have been developed: Multicast Trees, Spanning Tree, Centralized Tree-Based Key Management, Flat-key Management and Distributed Key Management. Some of these schemes are better than others with respect to the speed, memory consumption, and amount of communication needed to distribute the keys. All these schemes are limited in performance with respect to the speed, memory consumption, and amount of communication needed in distributing the keys. In this thesis, a number of public and private key algorithms and key management techniques for secure and dynamic multicasting are studied and analyzed. The thesis is focused on the secure lock method developed by Chiou and Chen, using the Chinese Remainder Theorem. The protocol is implemented for a small group of users and its performance is studied. While the secure lock method works well for a small group of users, its performance degrades as the group grows in size. A protocol is proposed for a large and dynamic group, based on the idea of the Chinese Remainder Theorem. A performance study is carried out by comparing our proposed protocol with the existing multicasting protocols. The analysis shows that the proposed protocol works well for large and dynamic groups and gives significantly better performance.
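A toy sketch of the Chiou-Chen secure lock idea follows: a single lock value, built with the Chinese Remainder Theorem, encodes the session key for every current member. The per-member "encryption" here is just modular addition of a shared secret, purely for illustration; a real deployment would use proper ciphers, and the moduli must be pairwise coprime and larger than the session key.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def crt(residues, moduli):
    """Chinese Remainder Theorem: the unique x mod prod(moduli) with x = r_i (mod m_i)."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        g, inv, _ = extended_gcd(M, m)         # inv = M^{-1} mod m
        assert g == 1, "moduli must be pairwise coprime"
        x = (x + (r - x) * inv % m * M) % (M * m)
        M *= m
    return x

def make_lock(session_key, members):
    """Server side: one lock value encodes the session key for every current member."""
    residues = [(session_key + secret) % m for m, secret in members]   # toy per-member "encryption"
    return crt(residues, [m for m, _ in members])

def open_lock(lock, modulus, secret):
    """Member side: reduce the lock by the member's modulus, then strip the secret."""
    return (lock % modulus - secret) % modulus

if __name__ == "__main__":
    members = [(101, 17), (103, 55), (107, 90)]   # (pairwise-coprime modulus, member secret)
    key = 42                                      # session key, smaller than every modulus
    lock = make_lock(key, members)
    print([open_lock(lock, m, s) == key for m, s in members])   # [True, True, True]
```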