Browsing by Author "Mili, Lamine M."
- Active, Regenerative Control of Civil Structures
  Scruggs, Jeffrey (Virginia Tech, 1999-05-10)
  An analysis is presented on the use of a proof-mass actuator as a regenerative force actuator for the mitigation of earthquake disturbances in civil structures. A proof-mass actuator is a machine which accelerates a mass along a linear path. Such actuators can facilitate two-way power flow. In regenerative force actuation, a bi-directional power-electronic drive is used to facilitate power flow both to and from the proof-mass actuator power supply. With proper control system design, this makes it possible to suppress a disturbance on a structure using mostly energy extracted from the disturbance itself, rather than from an external power source. In this study, three main objectives are accomplished. First, a new performance measure, called the "required energy capacity," is proposed as an assessment of the minimum size of the electric power supply necessary to facilitate the power flow required of the closed-loop system for a given disturbance. The relationship between the required energy capacity and the linear control system design, which is based on positive position feedback concepts, is developed. The dependency of the required energy capacity on hybrid realizations of the control law is discussed, and hybrid designs are found which minimize this quantity for specific disturbance characteristics. As the second objective, system identification and robust estimation methods are used to develop a stochastic approach to the performance assessment of structural control systems, which evaluates the average worst-case performance for all earthquakes "similar" to an actual data record. This technique is used to evaluate the required energy capacity for a control system design. In the third objective, a way is found to design a battery capacity which takes into account the velocity rating of the proof-mass actuator. Upon sizing this battery, two nonlinear controllers are proposed which automatically regulate the power flow in the closed-loop system to accommodate a power supply with a finite energy capacity, regardless of the disturbance size. Both controllers are based on a linear control system design. One includes a nonlinearity which limits power flow out of the battery supply. The other includes a nonlinearity which limits the magnitude of the proof-mass velocity. The latter of these is shown to yield superior performance.
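The "required energy capacity" above is a sizing measure for the actuator's power supply. The sketch below gives one illustrative reading of that idea, not the thesis' exact definition: given the time series of power drawn from the supply over a disturbance, the supply must cover the largest cumulative energy ever drawn from it.

```python
import numpy as np

def required_energy_capacity(p_supply, dt):
    """Illustrative proxy for a required energy capacity: p_supply[k] is the
    power drawn from (positive) or returned to (negative) the supply at step k,
    and dt is the sample period. The supply must at least cover the largest
    cumulative energy ever drawn from it during the disturbance."""
    energy = np.cumsum(p_supply) * dt          # net energy drawn so far
    return float(max(energy.max(), 0.0))

# Example with a synthetic power profile (kW) sampled every 10 ms
profile = np.concatenate([np.full(200, 3.0), np.full(300, -1.5)])
capacity_kj = required_energy_capacity(profile, dt=0.01)
```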
- Adaptation and Installation of a Robust State Estimation Package in the EEF Utility
  Chapman, Michael Addison (Virginia Tech, 1999-02-08)
  Robust estimation methods have been successfully applied to the problem of power system state estimation in a real-time environment. The Schweppe-type GM-estimator with the Huber psi-function (SHGM) has been fully installed in conjunction with a topology processor in the EEF utility, headquartered in Fribourg, Switzerland. Some basic concepts of maximum likelihood estimation and robust analysis are reviewed, and applied to the development of the SHGM-estimator. The algorithms used by the topology processor and state estimator are presented, and the superior performance of the SHGM-estimator over the classic weighted least squares estimator is demonstrated on the EEF network. The measurement configuration of the EEF network has been evaluated, and suggestions for its reinforcement have been proposed.
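The core of an SHGM-estimator is an iteratively reweighted least-squares (IRLS) loop in which standardized residuals are passed through the Huber psi-function and high-leverage measurements are downweighted. The sketch below shows that loop for a generic linearized measurement model; the threshold c, the leverage weights, and the flat start are illustrative assumptions, not the package installed at EEF.

```python
import numpy as np

def huber_psi(r, c=1.5):
    """Huber psi-function: identity inside [-c, c], clipped outside."""
    return np.clip(r, -c, c)

def shgm_estimate(H, z, sigma, w_leverage, c=1.5, n_iter=20):
    """IRLS solution of a Schweppe-type Huber GM-estimator for z = H x + e.
    sigma are measurement standard deviations; w_leverage are per-measurement
    downweights for high-leverage points (e.g., from projection statistics)."""
    x = np.linalg.lstsq(H, z, rcond=None)[0]              # simple starting point
    for _ in range(n_iter):
        r = z - H @ x                                     # residuals
        rs = r / (sigma * w_leverage)                     # standardized residuals
        q = np.where(np.abs(rs) > 1e-12, huber_psi(rs, c) / rs, 1.0)
        W = np.diag(q / sigma**2)                         # IRLS weights
        x = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)     # reweighted normal equations
    return x
```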
- Adaptive Asymmetric Slot Allocation for Heterogeneous Traffic in WCDMA/TDD Systems
  Park, JinSoo (Virginia Tech, 2004-07-28)
  Although 3rd- and 4th-generation wireless systems aim to deliver multimedia services at high speed, full-fledged multimedia services remain difficult to achieve because of the insufficient capacity of these systems. Many technical challenges must be addressed before true multimedia services can be realized. One of those challenges is how to allocate resources to traffic efficiently as wireless systems evolve. The review of the literature shows that strategic manipulation of traffic can lead to an efficient use of resources in both wire-line and wireless networks. This observation draws attention to the role of link-layer protocols, whose job is to orchestrate the transmission of packets efficiently using the given resources. Therefore, the Media Access Control (MAC) layer plays a very important role in this context. In this research, we investigate technical challenges involving resource control and management in the design of MAC protocols based on the characteristics of traffic, and provide some strategies to solve those challenges. The first and foremost matter in wireless MAC protocol research is to choose the type of multiple-access scheme. Each scheme has advantages and disadvantages. We choose Wideband Code Division Multiple Access/Time Division Duplexing (WCDMA/TDD) systems since they are known to be efficient for bursty traffic. Most existing MAC protocols developed for WCDMA/TDD systems focus on the performance of a unidirectional link, in particular the uplink, assuming that the number of slots for each link is fixed a priori; this ignores the dynamic aspect of TDD systems. We believe that adaptive dynamic slot allocation can bring further benefits in terms of efficient resource management. Meanwhile, the adaptive slot allocation issue has been dealt with from a completely different angle: related research focuses on adaptive slot allocation to minimize inter-cell interference in multi-cell environments. We believe that these two issues need to be handled together in order to enhance the performance of MAC protocols, and thus embark upon a study of adaptive dynamic slot allocation for the MAC protocol. This research starts from the examination of key factors that affect the adaptive allocation strategy. Through the review of the literature, we conclude that traffic characterization can be an essential component for this research to achieve efficient resource control and management, so we identify appropriate traffic characteristics and metrics. The volume and burstiness of traffic are chosen as the characteristics for our adaptive dynamic slot allocation. Based on this examination, we propose four major adaptive dynamic slot allocation strategies: (i) a strategy based on the estimation of the burstiness of traffic, (ii) a strategy based on the estimation of the volume and burstiness of traffic, (iii) a strategy based on the parameter estimation of a distribution of traffic, and (iv) a strategy based on the exploitation of physical layer information. The first method estimates the burstiness in both links and assigns the number of slots for each link according to a ratio of these two estimates.
  The second method estimates the burstiness and volume of traffic in both links and assigns the number of slots for each link according to a ratio of weighted volumes in each link, where the weights are driven by the estimated burstiness in each link. For the estimation of burstiness, we propose a new burstiness measure that is based on a ratio between the peak and median volume of traffic. This burstiness measure requires the determination of an observation window, within which the median and the peak are measured. We propose a dynamic method for the selection of the observation window, making use of statistical characteristics of traffic: the Autocorrelation Function (ACF) and Partial ACF (PACF). For the third method, we develop several estimators to estimate the parameters of a traffic distribution and suggest two new slot allocation methods based on the estimated parameters. The last method exploits physical layer information as another way of allocating slots to enhance the performance of the system. The performance of our proposed strategies is evaluated in various scenarios. Major simulations are categorized as: simulation on data traffic, simulation on combined voice and data traffic, and simulation on real trace data. The performance of each strategy is evaluated in terms of throughput and packet drop ratio. In addition, we consider the frequency of slot changes to assess the performance in terms of control overhead. We expect that this research work will add to the state of knowledge in the field of link-layer protocol research for WCDMA/TDD systems.
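A minimal sketch of the burstiness-driven allocation idea in strategies (i)-(ii) above: burstiness is taken as a peak-to-median ratio over an observation window, and the TDD frame is split between uplink and downlink in proportion to the per-direction estimates. The 15-slot frame and the example windows are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

def burstiness(volumes):
    """Peak-to-median burstiness of per-frame traffic volumes inside one
    observation window (a simplified form of the measure described above)."""
    med = np.median(volumes)
    return np.max(volumes) / med if med > 0 else float('inf')

def allocate_tdd_slots(up_window, down_window, total_slots=15, min_slots=1):
    """Split a TDD frame between uplink and downlink in proportion to the
    estimated burstiness of each direction."""
    b_up, b_dn = burstiness(up_window), burstiness(down_window)
    up = int(round(total_slots * b_up / (b_up + b_dn)))
    up = min(max(up, min_slots), total_slots - min_slots)
    return up, total_slots - up

# Bursty uplink traffic receives the larger share of the frame
up_slots, down_slots = allocate_tdd_slots(
    up_window=[2, 1, 30, 2, 1, 25, 3],
    down_window=[10, 11, 9, 12, 10, 11, 10])
```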
- An Adaptive-Importance-Sampling-Enhanced Bayesian Approach for Topology Estimation in an Unbalanced Power Distribution System
  Xu, Yijun; Valinejad, Jaber; Korkali, Mert; Mili, Lamine M.; Wang, Yajun; Chen, Xiao; Zheng, Zongsheng (IEEE, 2021-10-20)
  The reliable operation of a power distribution system relies on a good prior knowledge of its topology and its system state. Although crucial, due to the lack of direct monitoring devices on the switch statuses, the topology information is often unavailable or outdated for the distribution system operators for real-time applications. Apart from the limited observability of the power distribution system, other challenges are the nonlinearity of the model, the complicated, unbalanced structure of the distribution system, and the scale of the system. To overcome the above challenges, this paper proposes a Bayesian-inference framework that allows us to simultaneously estimate the topology and the state of a three-phase, unbalanced power distribution system. Specifically, by using the very limited number of measurements available that are associated with the forecast load data, we efficiently recover the full Bayesian posterior distributions of the system topology under both normal and outage operation conditions. This is performed through an adaptive importance sampling procedure that greatly alleviates the computational burden of the traditional Monte-Carlo (MC)-sampling-based approach while maintaining a good estimation accuracy. The simulations conducted on the IEEE 123-bus test system and an unbalanced 1282-bus system reveal the excellent performances of the proposed method.
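For readers unfamiliar with adaptive importance sampling (AIS) in this Bayesian setting, the skeleton below shows a generic self-normalized AIS loop over a discrete set of candidate topologies. The likelihood function, prior, candidate set, and the simple proposal-adaptation rule are placeholders standing in for the paper's model-specific machinery, not the authors' algorithm.

```python
import numpy as np

def topology_posterior_ais(n_candidates, loglik, prior,
                           n_rounds=5, n_samples=200, seed=0):
    """Self-normalized adaptive importance sampling over candidate topologies.
    loglik(k) returns the log-likelihood of the available measurements under
    candidate topology k; prior is the prior pmf over the candidates."""
    rng = np.random.default_rng(seed)
    prior = np.asarray(prior, dtype=float)
    proposal = np.full(n_candidates, 1.0 / n_candidates)    # start uniform
    post = proposal.copy()
    for _ in range(n_rounds):
        idx = rng.choice(n_candidates, size=n_samples, p=proposal)
        logw = (np.array([loglik(k) for k in idx])
                + np.log(prior[idx]) - np.log(proposal[idx]))
        w = np.exp(logw - logw.max())
        w /= w.sum()                                        # self-normalized weights
        post = np.bincount(idx, weights=w, minlength=n_candidates)
        proposal = 0.5 * proposal + 0.5 * post              # adapt toward the posterior
        proposal /= proposal.sum()
    return post                                             # approximate posterior pmf
```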
- Aggregator-Assisted Residential Participation in Demand Response Program
  Hasan, Mehedi (Virginia Tech, 2012-05-09)
  The demand for electricity at a particular location can vary significantly based on season, ambient temperature, time of day, etc. High demand can result in a very high wholesale price of electricity. The reason for this is the very short operating duration of peaking power plants, which require large capital investments to establish. Those power plants remain idle for most of the year except for some peak-demand periods during hot summer days. This process is inherently inefficient, but it is necessary to meet the uninterrupted power supply criterion. With the advent of new technologies, demand response can be a preferable alternative, where peak reduction can be obtained during the short durations of peak demand by controlling loads. Some controllable loads have thermal inertia, and some loads are deferrable for a short duration without making any significant impact on users' lifestyle and comfort. Demand response can help to attain supply-demand balance without completely depending on expensive peaking power plants. In this research work, an incentive-based model is considered to determine the potential of peak demand reduction due to the participation of residential customers in a demand response program. Electric water heating and air-conditioning are the two largest residential loads. In this work, hot water preheating and air-conditioning pre-cooling techniques are investigated with the help of developed mathematical models to determine the demand response potential of those loads. The developed water heater model is validated by comparing the results of two test-case simulations with the expected outcomes. The possibility of additional energy loss associated with water preheating is also investigated using the developed energy loss model. The preheating temperature set-point is mathematically determined to obtain maximum demand reduction while keeping thermal loss to a minimal level. Case studies are performed for 15 summer days to investigate the demand response potential of water preheating. Similarly, the demand response potential associated with pre-cooling operation of air-conditioning is also investigated with the help of the developed mathematical model. The required temperature set-point modification is determined mathematically and validated with the help of known outdoor temperature profiles. Case studies are performed for 15 summer days to demonstrate the effectiveness of this procedure. On the other hand, the total load and demand response potential of a single house is usually too small to participate in an incentive-based demand response program. Thus, the scope of combining several houses together under a single platform is also investigated in this work. Monte Carlo procedure-based simulations are performed to gain insight into the best-case and worst-case demand response outcomes of a cluster of houses. In the case of electric water heater control, the aggregate demand response potential of 25 houses is determined. Similarly, in the case of air-conditioning control (pre-cooling), approximate values of the maximum, minimum, and mean demand reduction amounts are determined for a cluster of 25 houses. The expected increase in indoor temperature of a house is calculated. Afterwards, an air-conditioning demand scheduling algorithm is developed to keep aggregate air-conditioning power demand to a minimal level during a demand response event.
  Simulation results are provided to demonstrate the effectiveness of the proposed algorithm.
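The Monte Carlo aggregation step described above lends itself to a very small sketch: draw per-house demand reductions for a cluster, sum them per trial, and read off the worst, mean, and best aggregate outcomes. The per-house reduction distribution used here is an invented placeholder; the thesis derives it from the water-heater and air-conditioner models.

```python
import numpy as np

def aggregate_dr_monte_carlo(n_houses=25, n_trials=1000, seed=1):
    """Monte Carlo estimate of worst/mean/best aggregate demand reduction (kW)
    for a cluster of houses during a demand response event. Per-house
    reductions are drawn from an assumed distribution purely for illustration."""
    rng = np.random.default_rng(seed)
    # Assume each appliance has a 70% chance of being active at the event start,
    # and an active appliance sheds roughly 0.5-4.5 kW.
    active = rng.random((n_trials, n_houses)) < 0.7
    reduction = active * rng.uniform(0.5, 4.5, size=(n_trials, n_houses))
    cluster = reduction.sum(axis=1)
    return cluster.min(), cluster.mean(), cluster.max()

worst_kw, mean_kw, best_kw = aggregate_dr_monte_carlo()
```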
- An Analysis of the Financial Incentives Impact on the Utility Demand-Side Management Programs
  Prastawa, Andhika (Virginia Tech, 1998-07-10)
  Many utilities implement financial incentive plans in promoting their Demand-Side Management (DSM) programs. The plans are intended to reduce the customer investment cost for a high-efficiency equipment option, so as to make the investment more attractive. Despite their potential to increase customer participation, financial incentives can cause a considerable increase in program cost to the utility. An analysis of the financial incentive impact on utility DSM programs is conducted in this thesis. The analysis combines customer participation modeling with a cost-benefit analysis of a DSM program. A model of customer participation based on a discrete choice formulation is presented; the model uses logistic probability functions. The benefits and costs of DSM programs are explored to develop the analysis methodology. Two typical energy conservation options of DSM programs are taken as case studies to demonstrate the analysis. The analysis is also conducted to examine the effect of financial incentives on the performance of DSM programs under a fluctuating marginal energy cost. The results of this research show that financial incentives can induce customer participation, thus providing an increase in benefits and costs. However, this research also reveals that, in certain circumstances, a financial incentive may result in a decrease in net benefit due to a significant increase in cost. These findings imply that utilities must carefully evaluate the financial incentive plans in their DSM programs before the programs are implemented.
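The discrete choice model above reduces, in its simplest binary form, to a logistic probability of participation as a function of the incentive and other cost attributes. The sketch below shows that form; the coefficient values and the attributes chosen are placeholders, not the thesis' estimated model.

```python
import numpy as np

def participation_probability(incentive, payback_years,
                              beta0=-2.0, beta1=1.2, beta2=-0.6):
    """Binary logit probability that a customer adopts the high-efficiency
    option. The coefficients are illustrative placeholders; in practice they
    would be estimated from program participation data."""
    utility = beta0 + beta1 * incentive + beta2 * payback_years
    return 1.0 / (1.0 + np.exp(-utility))

# A larger incentive raises participation, and hence both program benefits and costs
p_low = participation_probability(incentive=0.5, payback_years=4.0)
p_high = participation_probability(incentive=2.0, payback_years=4.0)
```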
- Analysis of time varying load for minimum loss distribution reconfiguration
  Khan, Asif H. (Virginia Tech, 1992-04-05)
  A reconfiguration algorithm for electrical distribution systems to reduce system losses is presented. The algorithm determines the switching patterns as a function of time. Either seasonal or daily time studies may be performed. Both manual and automatic switches are used to reconfigure the system for seasonal studies, whereas only automatic switches are considered for daily studies. An algorithm for load estimation is developed. The load estimation algorithm provides load information for each time point to be analyzed. The load estimation algorithm can incorporate any or all of the following: spot loads, circuit measurements, and customer time-varying diversified load characteristics. Voltage dependency of loads is considered at the circuit level. It is shown that switching at the system peak can reduce losses but may cause a marginal increase in system peak. Voltage and current constraints are incorporated in the reconfiguration algorithm. Database tables and data structures used in the algorithm are described. Example problems are provided to illustrate results.
- Anomaly Detection in Data-Driven Coherency Identification Using Cumulant Tensor
  Sun, Bo; Xu, Yijun; Wang, Qinling; Lu, Shuai; Yu, Ruizhi; Gu, Wei; Mili, Lamine M. (IEEE, 2023-12-04)
  As a model reduction tool, coherency identification has been extensively investigated by power researchers using various model-driven and data-driven approaches. Model-driven approaches typically lose their accuracy due to linear assumptions and parameter uncertainties, while data-driven approaches inevitably suffer from bad data issues. To overcome these weaknesses, we propose a data-driven cumulant tensor-based approach that can identify coherent generators and detect anomalies simultaneously. More specifically, it converts the angular velocities of generators into a fourth-order cumulant tensor that can be decomposed to reflect the coherent generators. Also, using co-kurtosis in the cumulant tensor, anomalies can be detected as well. The simulations reveal its excellent performance.
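The fourth-order joint cumulant (co-kurtosis) tensor mentioned above has a standard closed form for zero-mean data, and computing it from generator angular-velocity records is a one-liner with einsum. The sketch below stops at building the tensor; the decomposition used for grouping and the anomaly test are the paper's contribution and are not reproduced here.

```python
import numpy as np

def cokurtosis_cumulant_tensor(omega):
    """Fourth-order joint cumulant tensor of generator angular velocities.
    omega has shape (n_gen, T). For zero-mean data,
    cum4[i,j,k,l] = E[x_i x_j x_k x_l] - C_ij C_kl - C_ik C_jl - C_il C_jk."""
    x = omega - omega.mean(axis=1, keepdims=True)
    T = x.shape[1]
    C = x @ x.T / T                                        # covariance matrix
    M4 = np.einsum('it,jt,kt,lt->ijkl', x, x, x, x) / T    # fourth-order moment tensor
    return (M4
            - np.einsum('ij,kl->ijkl', C, C)
            - np.einsum('ik,jl->ijkl', C, C)
            - np.einsum('il,jk->ijkl', C, C))
```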
- APECS: A Polychrony based End-to-End Embedded System Design and Code Synthesis
  Anderson, Matthew Eric (Virginia Tech, 2015-05-19)
  The development of high integrity embedded systems remains an arduous and error-prone task, despite the efforts by researchers in inventing tools and techniques for design automation. Much of the problem arises from the fact that the semantics of the modeling languages for the various tools are often distinct, and the semantic gaps are often filled manually through the engineer's understanding of one model or an abstraction. This provides an opportunity for bugs to creep in, over and above the standard software engineering errors germane to such complex system engineering. Since embedded systems applications such as avionics, automotive, or industrial automation are safety critical, it is very important to invent tools and methodologies for safe and reliable system design. Most tools and techniques deal with either the design of the embedded platform (hardware, networking, firmware, etc.) or the software stack, but separately. The problem of the semantic gap between these two, as well as between the models of computation used to capture semantics, must be solved in order to design safer embedded systems. In this dissertation we propose a methodology for the end-to-end modeling and analysis of safety-critical embedded systems. Our approach consists of formal platform modeling and analysis; formal application modeling; and 'correct-by-construction' code synthesis, with the aim of bridging semantic gaps between the various abstractions and models required for the end-to-end system design. While the platform modeling language AADL has formal semantics and analysis tools for real-time and performance verification, application behavior modeling in AADL is weak and part of an annex. In our work, we create the APECS (AADL and Polychrony based Embedded Computing Synthesis) methodology to allow an embedded system design to be specified all the way from the platform architecture and platform components, real-time behavior, and non-functional properties to the application software model. Our main contribution is to integrate a polychronous application software modeling language and synthesis algorithms so that the embedded software running on the target platform can be synthesized with the required constraints being met. We believe that a polychronous approach is particularly well suited for a multiprocessor/multi-controller distributed platform where different components often operate at independent rates and concurrently. Further, the use of a formal polychronous language will allow for formal validation of the software prior to code generation. We present a prototype framework that implements this approach, which we refer to as the AADL and Polychrony based Embedded Computing System (APECS). Our prototype utilizes an extended version of Ocarina to provide code generation for the AADL model. Our polychronous modeling language is MRICDF. Our prototype extends Ocarina to support software specification in MRICDF and generate multi-threaded software. Additionally, we implement an automated translation from Simulink to MRICDF, allowing designers to benefit from its formal semantics and exploit engineers' familiarity with Simulink tools and legacy models. We present case studies utilizing APECS to implement safety critical systems both natively in MRICDF and in Simulink through automated translation.
- Application of Bifurcation Theory to Subsynchronous Resonance in Power Systems
  Harb, Ahmad M. (Virginia Tech, 1996-12-16)
  A bifurcation analysis is used to investigate the complex dynamics of two heavily loaded single-machine-infinite-busbar power systems modeling the characteristics of the BOARDMAN generator with respect to the rest of the North-Western American Power System and the CHOLLA#4 generator with respect to the SOWARO station. In the BOARDMAN system, we show that there are three Hopf bifurcations at practical compensation values, while in the CHOLLA#4 system, we show that there is only one Hopf bifurcation. The results show that as the compensation level increases, the operating condition loses stability with a complex conjugate pair of eigenvalues of the Jacobian matrix crossing transversely from the left- to the right-half of the complex plane, signifying a Hopf bifurcation. As a result, the power system oscillates subsynchronously with a small limit-cycle attractor. As the compensation level increases, the limit cycle grows and then loses stability via a secondary Hopf bifurcation, resulting in the creation of a two-period quasiperiodic subsynchronous oscillation, a two-torus attractor. On further increases of the compensation level, the quasiperiodic attractor collides with its basin boundary, resulting in the destruction of the attractor and its basin boundary in a bluesky catastrophe. Consequently, there are no bounded motions. When a damper winding is placed along the q-axis, the d-axis, or both axes of the BOARDMAN system and machine saturation is considered in the CHOLLA#4 system, the study shows that there is only one Hopf bifurcation and it occurs at a much lower level of compensation, indicating that the damper windings and the machine saturation destabilize the system by inducing subsynchronous resonance. Finally, we investigate the effect of linear and nonlinear controllers on mitigating subsynchronous resonance in the CHOLLA#4 system. The study shows that the linear controller increases the compensation level at which subsynchronous resonance occurs, while the nonlinear controller does not affect the location and type of the Hopf bifurcation but reduces the amplitude of the limit cycle born as a result of the Hopf bifurcation.
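The Hopf condition described above (a complex-conjugate eigenvalue pair of the equilibrium Jacobian crossing the imaginary axis as the series-compensation level is swept) is easy to check numerically. The sketch below assumes the user supplies a function returning the state Jacobian at the equilibrium for a given compensation level; the machine and network model itself is not reproduced.

```python
import numpy as np

def find_hopf_crossings(jacobian_at, compensation_levels):
    """Flag Hopf bifurcations along a sweep of the compensation level mu.
    jacobian_at(mu) must return the state Jacobian evaluated at the
    equilibrium for that mu (model-specific and assumed given)."""
    crossings = []
    prev_rhp_pairs = None
    for mu in compensation_levels:
        eig = np.linalg.eigvals(jacobian_at(mu))
        # count complex eigenvalue pairs that sit in the right half-plane
        rhp_pairs = int(np.sum((eig.real > 0) & (eig.imag > 1e-9)))
        if prev_rhp_pairs is not None and rhp_pairs > prev_rhp_pairs:
            crossings.append(mu)                 # a conjugate pair just crossed
        prev_rhp_pairs = rhp_pairs
    return crossings
```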
- The application of phasor measurements for adaptive protection and control
  Huang, Chiung-Yi (Virginia Tech, 1991-06-05)
  This thesis describes an adaptive protection scheme that collects voltage and current phasors during the post-fault period, tracks the power swing phenomena, identifies the onset of instability, and then issues a stabilizing command. In this work, the protection system is intended to maintain reliability, ensure secure operation, and prevent total collapse of the power system. The work is based upon methods of clustering for meter placement in a bulk power system and selecting pilot points for installing phasor measurement units (PMUs) to measure the bus voltage phasors and associated branch current phasors. According to the network laws, a fast state estimation can be calculated from these measurements. Because the on-line assessment of transient stability has to provide a quick and approximate result, the direct method, which determines stability without explicit integration techniques, is applicable in this study. The results of real-time system stability prediction by digital computer simulation under stable and unstable operating conditions are presented.
- Application of Wavelets to Filtering and Analysis of Self-Similar Signals
  Wirsing, Karlton (Virginia Tech, 2014-03-21)
  Digital Signal Processing has been dominated by the Fourier transform since the Fast Fourier Transform (FFT) was developed in 1965 by Cooley and Tukey. In the 1980s, a new transform called the wavelet transform was developed, even though the first wavelet goes back to 1910. With the Fourier transform, all information about localized changes in signal features is spread out across the entire signal space, making local features global in scope. Wavelets are able to retain localized information about the signal by applying a function of limited duration, also called a wavelet, to the signal. As with the Fourier transform, the discrete wavelet transform has an inverse transform, which allows us to make changes to a signal in the wavelet domain and then transform it back into the time domain. In this thesis, we have investigated the filtering properties of this technique and analyzed its performance under various settings. Another popular application of the wavelet transform is data compression, such as described in the JPEG 2000 standard and the compressed digital storage of fingerprints developed by the FBI. Previous work on filtering has focused on the discrete wavelet transform. Here, we extended that method to the stationary wavelet transform and found that it gives a performance boost of as much as 9 dB over that of the discrete wavelet transform. We also found that the SNR of noise filtering decreases as the frequency of the base signal increases up to the Nyquist limit, for both the discrete and stationary wavelet transforms. Besides filtering the signal, the discrete wavelet transform can also be used to estimate the standard deviation of the white noise present in the signal. We extended the developed estimator for the discrete wavelet transform to the stationary wavelet transform. As with filtering, it is found that the quality of the estimate decreases as the frequency of the base signal increases. Many interesting signals are self-similar, which means that one of their properties is invariant on many different scales. One popular example is strict self-similarity, where an exact copy of a signal is replicated on many scales, but the most common property is statistical self-similarity, where a random segment of a signal is replicated on many different scales. In this work, we investigated wavelet-based methods to detect statistical self-similarities in a signal and their performance on various types of self-similar signals. Specifically, we found that the quality of the estimate depends on the type of units of the signal being investigated for low Hurst exponents and on the type of edge padding used for high Hurst exponents.
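A compact example of the two operations discussed above (wavelet-domain denoising and wavelet-based estimation of the white-noise standard deviation) using the discrete wavelet transform in PyWavelets. The MAD-based estimate sigma ~ median(|d1|)/0.6745 and the universal soft threshold are the standard textbook choices; the thesis extends the same ideas to the stationary wavelet transform, which is not shown here.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet='db4', level=4):
    """DWT denoising with a MAD-based noise estimate and universal soft threshold.
    Returns the filtered signal and the estimated white-noise standard deviation."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise std from finest details
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))        # universal threshold
    shrunk = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet), sigma

# Example: denoise a noisy sinusoid and recover the noise level
t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.default_rng(0).standard_normal(1024)
clean, sigma_hat = wavelet_denoise(noisy)
```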
- Applications of phasor measurements to the real-time monitoring of a power system
  Barber, David Edward (Virginia Tech, 1994-03-05)
  This thesis discusses applications of phasor measurement units to power system monitoring and synchronous generator modeling. Adjustments to a previously developed PMU placement algorithm are described which observe generator and tie-line flows explicitly and reduce the number of PMUs required for a system, while still observing the major dynamic components of the system. This adjusted methodology leaves some buses unobserved. A method for estimating the state of the unobserved region is developed based on using constant-admittance or constant-current load models. These models are accurate in a small neighborhood around the operating point at which they were calculated. To determine the maximum error expected for any given system estimate, an equation relating the maximum error in the voltages to the maximum change in load power is derived. Once the issue of power system monitoring has been presented, the application of PMUs to synchronous generator modeling is explored. This thesis deals with the on-line identification of the generator transient model using a recursive version of the generalized least squares algorithm. Simulations have been performed to demonstrate the validity of these methods and the difficulties associated with them.
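The recursive identification mentioned above follows the familiar recursive least-squares pattern: update a parameter vector and its inverse-correlation matrix each time a new regressor/output pair arrives. The class below is a plain RLS skeleton with a forgetting factor; the thesis uses a recursive generalized least squares variant, and the construction of regressors from PMU phasors is model-specific and omitted.

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain recursive least squares with exponential forgetting."""
    def __init__(self, n_params, lam=0.98, delta=1e3):
        self.theta = np.zeros(n_params)        # parameter estimate
        self.P = delta * np.eye(n_params)      # inverse correlation matrix
        self.lam = lam                         # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector, y: measured output at this sample."""
        phi = np.asarray(phi, dtype=float)
        gain = self.P @ phi / (self.lam + phi @ self.P @ phi)
        self.theta = self.theta + gain * (y - phi @ self.theta)   # innovation update
        self.P = (self.P - np.outer(gain, phi @ self.P)) / self.lam
        return self.theta
```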
- Assessment of direct methods in power system transient stability analysis for on-line applications
  Llamas, Armando (Virginia Polytechnic Institute and State University, 1992)
  The advent of synchronized phasor measurements allows the problem of real-time prediction of instability and control to be considered. The use of direct methods for these on-line applications is assessed. The classical representation of a power system allows the use of two reference frames: center of angle and one machine as reference. Formulae allowing transition between the two reference frames are derived. It is shown that the transient energy in both formulations is the same, and that line resistances do not dampen system oscillations. Examples illustrating the mathematical characterization of the region of attraction, exit point, closest u.e.p. and controlling u.e.p. methods are presented. Half-dimensional systems (reduced-order systems) are discussed. The general expression for the gradient system which accounts for transfer conductances is derived without making use of the infinite bus assumption. Examples illustrating the following items are presented: a) the effect of the linear ray approximation on the potential energy (inability to accurately locate the u.e.p.'s); b) a comparison of Kakimoto's and Athay's approaches for PEBS crossing detection; c) the BCU method; and d) the one-parameter transversality condition. It is illustrated that if the assumption of the one-parameter transversality condition is not satisfied, the PEBS and BCU methods may give incorrect results for multi-swing stability. A procedure to determine if the u.e.p. found by the BCU method lies on the stability boundary of the original system is given. This procedure improves the BCU method for off-line applications when there is time for a hybrid approach (direct and conventional), but it does not improve it for on-line applications due to the following: a) it is time-consuming, and b) if it finds that the u.e.p. does not belong to the stability boundary, it provides no information concerning the stability/instability of the system.
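For reference, the center-of-angle (COA) variables and the change to the machine-n reference frame used in such classical-model studies take the standard form below, assuming inertia constants M_i and rotor angles delta_i; this is the commonly quoted form rather than a transcription of the dissertation's derivation.

```latex
% Center-of-angle (COA) variables for an n-machine classical model
\[
  M_T = \sum_{i=1}^{n} M_i, \qquad
  \delta_0 = \frac{1}{M_T}\sum_{i=1}^{n} M_i\,\delta_i, \qquad
  \theta_i = \delta_i - \delta_0, \qquad
  \tilde{\omega}_i = \omega_i - \omega_0 .
\]
% Machine-n reference frame: angles relative to machine n
\[
  \delta_{in} = \delta_i - \delta_n = \theta_i - \theta_n ,
\]
% so angles in one frame follow from the other by a common shift, consistent
% with the transient energy being the same in both formulations.
```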
- Automated Detection of Surface Defects on Barked Hardwood Logs and Stems Using 3-D Laser Scanned Data
  Thomas, Liya (Virginia Tech, 2006-09-08)
  This dissertation presents an automated detection algorithm that identifies severe external defects on the surfaces of barked hardwood logs and stems. The defects detected are at least 0.5 inch in height and at least 3 inches in diameter, which are severe, medium to large in size, and have external surface rises. Hundreds of real log defect samples were measured, photographed, and categorized to summarize the main defect features and to build a defect knowledge base. Three-dimensional laser-scanned range data capture the external log shapes and portray bark pattern, defective knobs, and depressions. The log data are extremely noisy, have missing data, and include severe outliers induced by loose bark that dangles from the log trunk. Because the circle model is nonlinear and presents both additive and non-additive errors, a new robust generalized M-estimator has been developed that is different from the ones proposed in the statistical literature for linear regression. Circle fitting is performed by standardizing the residuals via scale estimates calculated by means of projection statistics and incorporated in the Huber objective function to bound the influence of the outliers in the estimates. The projection statistics are based on 2-D radial-vector coordinates instead of the row vectors of the Jacobian matrix as proposed in the statistical literature dealing with linear regression. This approach proves effective in that it makes the GM-estimator influence-bounded and, thereby, robust against outliers. Severe defects are identified through the analysis of 3-D log data using decision rules obtained from analyzing the knowledge base. Contour curves are generated from radial distances, which are determined by robust 2-D circle fitting to the log-data cross sections. The algorithm detected 63 out of a total of 68 severe defects. There were 10 non-defective regions falsely identified as defects. When these are expressed as areas, the algorithm locates 97.6% of the defect area and falsely identifies 1.5% of the total clear area as defective.
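The robust 2-D circle fit above can be illustrated with an iteratively reweighted Gauss-Newton loop that applies Huber weights to the radial residuals. The dissertation additionally standardizes the residuals with scale estimates from projection statistics computed on radial-vector coordinates; that refinement is replaced here by a simple MAD scale, so this is only a sketch of the general idea.

```python
import numpy as np

def robust_circle_fit(x, y, c=1.5, n_iter=30):
    """Fit a circle (center a, b and radius R) to noisy cross-section points
    using Huber-weighted Gauss-Newton on the radial residuals."""
    a, b = x.mean(), y.mean()
    R = np.mean(np.hypot(x - a, y - b))
    for _ in range(n_iter):
        d = np.hypot(x - a, y - b)
        r = d - R                                                 # radial residuals
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-9   # robust MAD scale
        w = np.minimum(1.0, c * s / (np.abs(r) + 1e-12))          # Huber weights
        # Jacobian of the residual with respect to (a, b, R)
        J = np.column_stack([-(x - a) / d, -(y - b) / d, -np.ones_like(d)])
        sw = np.sqrt(w)
        step, *_ = np.linalg.lstsq(sw[:, None] * J, -sw * r, rcond=None)
        a, b, R = a + step[0], b + step[1], R + step[2]
    return a, b, R
```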
- A Bayesian Approach for Estimating Uncertainty in Stochastic Economic Dispatch Considering Wind Power Penetration
  Hu, Zhixiong; Xu, Yijun; Korkali, Mert; Chen, Xiao; Mili, Lamine M.; Valinejad, Jaber (IEEE, 2020-08-10)
  The increasing penetration of renewable energy resources in power systems, represented as random processes, converts the traditional deterministic economic dispatch problem into a stochastic one. To estimate the uncertainty in the stochastic economic dispatch (SED) problem for the purpose of forecasting, the conventional Monte-Carlo (MC) method is prohibitively time-consuming for practical applications. To overcome this problem, we propose a novel Gaussian-process-emulator (GPE)-based approach to quantify the uncertainty in SED considering wind power penetration. Facing high-dimensional real-world data representing the correlated uncertainties from wind generation, a manifold-learning-based Isomap algorithm is proposed to efficiently represent the low-dimensional hidden probabilistic structure of the data. In this low-dimensional latent space, with Latin hypercube sampling (LHS) as the computer experimental design, a GPE is used, for the first time, to serve as a nonparametric, surrogate model for the original complicated SED model. This reduced-order representation allows us to evaluate the economic dispatch solver at sampled values with a negligible computational cost while maintaining a desirable accuracy. Simulation results conducted on the IEEE 118-bus test system reveal the impressive performance of the proposed method.
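The surrogate-modeling pipeline above (Isomap for dimension reduction, Latin hypercube sampling for the training design, and a Gaussian-process emulator in place of the expensive dispatch solver) can be assembled from standard scientific-Python components. The sketch below is a generic arrangement under assumed inputs (a `wind_scenarios` array, a user-supplied `sed_solver`, a two-dimensional latent space, 30 design points); it is not the authors' implementation.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.manifold import Isomap
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def build_sed_emulator(wind_scenarios, sed_solver, latent_dim=2, n_design=30, seed=0):
    """Train a Gaussian-process emulator of the SED cost on a low-dimensional
    Isomap embedding of correlated wind scenarios, then predict cheaply for
    every scenario. sed_solver(w) must return the dispatch cost for scenario w."""
    iso = Isomap(n_components=latent_dim, n_neighbors=10)
    z = iso.fit_transform(wind_scenarios)                     # latent coordinates
    lo, hi = z.min(axis=0), z.max(axis=0)
    design = qmc.scale(qmc.LatinHypercube(d=latent_dim, seed=seed).random(n_design), lo, hi)
    # Evaluate the expensive solver only at the scenarios nearest the design points
    idx = [int(np.argmin(np.linalg.norm(z - p, axis=1))) for p in design]
    y = np.array([sed_solver(wind_scenarios[i]) for i in idx])
    gpe = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gpe.fit(z[idx], y)
    cost_mean, cost_std = gpe.predict(z, return_std=True)     # cheap predictions everywhere
    return gpe, cost_mean, cost_std
```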
- Characterization and Modeling of Solar Flare Effects in the Ionosphere Observed by HF Instruments
  Chakraborty, Shibaji (Virginia Tech, 2021-06-08)
  The ionosphere is the conducting part of the upper atmosphere that plays a significant role in trans-ionospheric high frequency (HF, 3-30 MHz) radiowave propagation. Solar activities, such as solar flares, radiation storms, and coronal mass ejections (CMEs), alter the state of the ionosphere, a phenomenon known as a Sudden Ionospheric Disturbance (SID), which can severely disrupt HF radio communication links by enhancing radiowave absorption and altering signal frequency and phase. The Super Dual Auroral Radar Network (SuperDARN) is an international network of low-power HF coherent scatter radars distributed across the globe to probe the ionosphere and its relation to solar activities. In this study, we used SuperDARN HF radar measurements with coordinated spacecraft and riometer observations to investigate statistical characteristics and the driving mechanisms of various manifestations of solar flare-driven SIDs in HF observations. We begin in Chapter 2 with a statistical characterization of various effects of solar flares on SuperDARN observations. Simultaneous observations from GOES spacecraft and SuperDARN radars confirmed that flare-driven HF absorption depends on solar zenith angle, operating frequency, and the intensity of the solar flare. The study found that a flare-driven SID also affects the SuperDARN backscatter signal frequency, which produces a sudden rise in the Doppler velocity observation, referred to as the "Doppler flash", which occurs before the HF absorption effect. In Chapter 3, we further investigate the HF absorption effect during successive solar flares and those co-occurring with other geomagnetic disturbances during the 2017 solar storm. We found that successive solar flares can extend the ionospheric relaxation time and that the variation of HF absorption with latitude differs depending on the type of disturbance. In Chapter 4, we looked into an inertial property of the ionosphere, sluggishness, its variations with solar flare intensity, and made some inferences about D-region ion chemistry using a simulation study. Specifically, we found that solar flares alter the D-region chemistry by enhancing the electron detachment rate due to a sudden rise in molecular vibrational and rotational energy under the influence of enhanced solar radiation. In Chapter 5, we describe a model framework that reproduces the HF absorption observed by riometers. This chapter compares different model formulations for estimating HF absorption and discusses different driving influences of HF absorption. In Chapter 6, we investigate different driving mechanisms of the Doppler flash observed by SuperDARN radars. We note two particular findings: (i) the Doppler flash is predominantly driven by a change in the F-region refractive index, and (ii) a combination of solar flare-driven enhancement in photoionization and changes in the zonal electric field and/or ionospheric conductivity reduces the upward ion drift, which lowers the F-region HF radiowave reflection height. Collectively, these research findings provide a statistical characterization of various solar flare effects on the ionosphere seen in HF observations, and insights into their driving mechanisms and impacts on ionospheric dynamics.
- A Combined Koopman-Subgraph Method for a Secure Power System Islanding
  Jlassi, Zahra; Ben Kilani, Khadija; Elleuch, Mohamed; Mili, Lamine M. (2020-03)
  This paper proposes a new methodology for power system islanding based on Koopman modal coherency (KMC) combined with subgraph theory. The initial system partitioning is determined from the Koopman bus-angle coherency matrix. Then, the subgraph technique is used to modulate the partitions, yielding viable islands. Balancing subgraphs are constructed from the clusters' pairwise neighboring areas. Overload subgraphs are swapped to connected over-generation subgraphs. Stable islands satisfy nonlinear dynamic coherency and minimal power mismatch. The search algorithm substantiates the required bands specified in the frequency operating standards. Under severe disturbances, over-frequency generation shedding and under-frequency load shedding schemes are implemented. The proposed islanding methodology is demonstrated on a realistic 151-bus, 24-machine power system. Results show that the proposed scheme can reduce failure regions in faulted power systems.
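A rough illustration of the coherency-grouping step that precedes the subgraph processing above: approximate the Koopman operator from bus-angle measurements with a plain dynamic mode decomposition and cluster buses on the shape of the most oscillatory mode. This stands in for the paper's KMC matrix construction and is only a sketch.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def koopman_coherency_groups(theta, n_groups):
    """Group buses whose angle trajectories respond coherently.
    theta has shape (n_buses, n_samples); a one-step linear propagator
    (plain DMD) approximates the Koopman operator."""
    X1, X2 = theta[:, :-1], theta[:, 1:]
    A = X2 @ np.linalg.pinv(X1)                       # one-step propagator
    lam, modes = np.linalg.eig(A)
    dominant = np.argmax(np.abs(lam.imag))            # most oscillatory mode
    shape = modes[:, dominant]
    features = np.column_stack([shape.real, shape.imag])
    _, labels = kmeans2(features, n_groups, minit='++')
    return labels                                      # cluster label per bus
```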
- Communication Infrastructure for the Smart Grid: A Co-Simulation Based Study on Techniques to Improve the Power Transmission System Functions with Efficient Data Networks
  Lin, Hua (Virginia Tech, 2012-09-27)
  The vision of the smart grid is predicated upon pervasive use of modern digital communication techniques in today's power system. As wide-area measurement and control techniques are being developed and deployed for a more resilient power system, the role of communication networks is becoming prominent. Advanced communication infrastructure provides much wider system observability and enables globally optimal control schemes. Wide-area measurement and monitoring with Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs) is a growing trend in this context. However, the large amount of data collected by PMUs or IEDs needs to be transferred over the data network to control centers where real-time state estimation, protection, and control decisions are made. The volume and frequency of such data transfers, and their real-time delivery requirements, mandate that sufficient bandwidth and proper delay characteristics be ensured for correct operation. Power system dynamics are influenced by the underlying communication infrastructure. Therefore, extensive integration of the power system and the communication infrastructure mandates that the two systems be studied as a single distributed cyber-physical system. This dissertation proposes a global event-driven co-simulation framework, termed GECO, for the interconnected power system and communication network. GECO can be used as a design pattern for hybrid system simulation with continuous/discrete sub-components. An implementation of GECO is achieved by integrating two software packages, PSLF and NS2, into the framework. Besides this, the dissertation proposes and studies a set of power system applications which can only be properly evaluated on a co-simulation framework like GECO, namely communication-based distance relay protection, all-PMU state estimation, and PMU-based out-of-step protection. All of them take advantage of the interplay between the power grid and the communication infrastructure. The GECO experiments described in this dissertation not only show the efficacy of the GECO framework, but also provide experience on how to go about using GECO in smart grid planning activities.
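The core of a global event-driven co-simulation like GECO is a single ordered event queue that interleaves continuous power-system integration steps with discrete message-delivery events from the network simulator. The skeleton below conveys that scheduling pattern only; the callbacks, the fixed 10 ms step, and the message format are placeholders, not the PSLF/NS2 integration itself.

```python
import heapq

def co_simulate(power_step, deliver, messages, t_end, dt_power=0.01):
    """Minimal global event-driven loop: power_step(t) advances the grid model
    one integration step; deliver(msg, t) hands a delivered packet to its
    receiver. messages is a list of (delivery_time, msg) pairs produced by a
    network simulator."""
    queue = [(0.0, 0, 'power', None)]
    for seq, (t, msg) in enumerate(messages, start=1):
        queue.append((t, seq, 'net', msg))
    heapq.heapify(queue)
    seq = len(messages) + 1
    while queue:
        t, _, kind, payload = heapq.heappop(queue)
        if t > t_end:
            break
        if kind == 'power':
            power_step(t)                                    # continuous-dynamics step
            heapq.heappush(queue, (t + dt_power, seq, 'power', None))
            seq += 1
        else:
            deliver(payload, t)                              # discrete communication event
```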
- Computational social science in smart power systems: Reliability, resilience, and restoration
  Valinejad, Jaber; Mili, Lamine M.; Yu, Xinghuo; van der Wal, C. Natalie; Xu, Yijun (Institution of Engineering and Technology (IET), 2023-06-07)
  Smart grids are typically modelled as cyber-physical power systems, with limited consideration given to the social aspects. Specifically, traditional power system studies tend to overlook the behaviour of stakeholders, such as end-users. However, the impact of end-users and their behaviour on power system operation and response to disturbances is significant, particularly with respect to demand response and distributed energy resources. Therefore, it is essential to plan and operate smart grids by taking into account both the technical and social aspects, given the crucial role of active and passive end-users, as well as the intermittency of renewable energy sources. In order to optimize system efficiency, reliability, and resilience, it is important to consider the level of cooperation, flexibility, and other social features of various stakeholders, including consumers, prosumers, and microgrids. This article aims to address the gaps and challenges associated with modelling social behaviour in power systems, as well as the human-centred approach for future development and validation of socio-technical power system models. As the cyber-physical-social system of energy emerges as an important topic, it is imperative to adopt a human-centred approach in this domain. Considering the significance of computational social science for power system applications, this article proposes a list of research topics that must be addressed to improve the reliability and resilience of power systems in terms of both operation and planning. Solving these problems could have far-reaching implications for power systems, energy markets, community usage, and energy strategies.