Search: Research Repository » Thesis » Computer engineering
Search results
- Title
- A Novel In-Situ Method for Inhibiting Surface Roughening during the Thermal Oxide Desorption Etching of Silicon and Gallium Arsenide.
- Creator
-
Pun, Arthur Fong-Yuen, Zheng, Jim P., Gielisse, Peter J., Perry, Reginald J., Foo, Simon Y., Xin, Yan, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
A method inhibiting surface roughening of silicon and gallium arsenide wafers during the thermal desorption of their native oxide layers is proposed and tested experimentally, with silicon results indicating a 75% reduction in surface roughness from an averaged value of 2.20 nm to 0.56 nm, and gallium arsenide results indicating a 76% reduction from an averaged surface roughness of 1.6 nm to 0.4 nm. This method does not significantly alter the semiconductor crystalline surface, thus retaining suitability for subsequent epitaxial growth, as demonstrated experimentally. The method is readily implementable in currently utilized deposition systems, subject to the requirements of material growth, substrate heating, and a non-oxidizing environment, either an inert atmosphere or reduced pressure. The technique involves depositing a thin sacrificial film directly onto the native oxide surface at lower temperatures; the film thickness depends on both the native oxide thickness and the oxide stoichiometry initially present within the oxide layer, but has been found experimentally to be on the order of 0.5 nm to 4 nm for a 2 nm to 4 nm air-formed native oxide layer. Upon heating this structure to high temperatures, the native oxide preferentially reacts with the sacrificial deposited film instead of the bulk wafer, resulting in chemical reduction to volatile components, which evaporate at these temperatures. This method is developed for silicon and gallium arsenide and examined experimentally utilizing atomic force microscopy and reflection high-energy electron diffraction. Different native oxide preparation techniques are theorized to yield varying chemical stoichiometries, with experimental results elucidating these differences. Further, a modified tri-layer implementation, in which the deposited film is re-oxidized, is tested for applicability as a novel wafer passivation technique.
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-0474
- Format
- Thesis
- Title
- Study of Correlations Between Microwave Transmissions and Atmospheric Effects.
- Creator
-
Stringer, Andrew James, Foo, Simon Y., Yu, Ming, Harvey, Bruce A., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Understanding the effects of atmospheric conditions on microwave propagation and performance is critical to the design and placement of microwave antennas for modern communication systems. Weather data acquisition in the state of Florida is underdeveloped, and the published effects of weather on microwave communications are limited to general models based on large regional climate models. The goal of this research is to correlate atmospheric conditions and microwave transmission via the existing Florida Department of Transportation (FDOT) Road Weather Information System (RWIS) network, new Environmental Sensor Station (ESS) sites, and Harris Corporation's network management software, Netboss. Powerful Netboss scripting tools are used to record the received signal level (RSL) output of the microwave radios in the FDOT microwave infrastructure for signal analysis. This RSL data is analyzed and correlated with the acquired ESS weather data to determine basic atmospheric effects on microwave propagation. Methods for analysis of the correlated data include existing atmospheric attenuation models, such as the Global (Crane) and International Telecommunication Union (ITU) models, and empirical methods such as the Fast Fourier Transform (FFT), Short-Time Fourier Transform (STFT), Discrete Wavelet Transform (DWT) with wavelet decomposition, and correlation analysis of each method used. The data is treated as a discrete non-stationary signal. Results do not show a clear correlation between RSL and weather parameters for several of the test methods. Testing the correlation and cross-correlation of the raw data yielded weak correlation. Simulation of rain attenuation via the ITU model displayed weak, insignificant results for the sets of RSL data. The FFT and STFT both incorporate too much noise and distortion to accurately compute a correlation.
Wavelet decomposition shows a strong correlation for several weather parameters and a weak correlation for others, a result that agrees with the trends found in the RSL and weather parameters. Further analysis points to multipath fading and atmospheric ducting. During the early hours of the morning, reflections from moist surfaces such as tree foliage, other terrestrial objects, water vapor, and dew cause transmitted signals to reach the receive antenna out of phase, producing attenuation or gain, while atmospheric ducting causes a gain in the RSL that is visible in the acquired data. It is concluded that weather conditions such as water vapor, mist, and rising fog have an effect on microwave propagation.
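As an illustration of the wavelet-decomposition correlation approach the abstract describes (this is not code from the thesis; the one-level Haar transform, the synthetic "humidity" signal, and all parameter values are invented for illustration), a minimal sketch might compare raw correlation against correlation of the smoothed wavelet approximations:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: returns (approximation, detail) coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:
        x = x[:-1]                       # drop the odd trailing sample
    s = 1.0 / np.sqrt(2.0)
    approx = s * (x[0::2] + x[1::2])     # low-pass: slow trend
    detail = s * (x[0::2] - x[1::2])     # high-pass: fast fluctuation
    return approx, detail

def wavelet_correlation(rsl, weather, levels=3):
    """Correlate the smoothed trends of two signals after `levels`
    stages of Haar decomposition (keeping only approximations)."""
    a, b = np.asarray(rsl, float), np.asarray(weather, float)
    for _ in range(levels):
        a, _ = haar_dwt(a)
        b, _ = haar_dwt(b)
    return np.corrcoef(a, b)[0, 1]

# Synthetic example: RSL dips when "humidity" rises, plus receiver noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 512)
humidity = np.sin(t)
rsl = -0.8 * np.sin(t) + 0.5 * rng.standard_normal(512)

r_raw = np.corrcoef(rsl, humidity)[0, 1]
r_wav = wavelet_correlation(rsl, humidity, levels=3)
```

Because the approximation coefficients average out the high-frequency noise, the correlation after decomposition is stronger than the raw-data correlation, mirroring the thesis's finding that raw cross-correlation is weak while wavelet decomposition reveals the underlying relationship.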
- Date Issued
- 2010
- Identifier
- FSU_migr_etd-0396
- Format
- Thesis
- Title
- Evaluation and DSP Based Implementation of PWM Approaches for Single-Phase DC-AC Converters.
- Creator
-
Zhou, Lining, Chang, Jie J., Zheng, Jim P., Roberts, Rodney G., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Switching-mode single-phase DC-AC converters are widely used in critical applications such as uninterruptible power supply systems and AC motor drives. Among the various control techniques, Pulse Width Modulation (PWM) is the most effective and the one most commonly used to regulate the magnitude and frequency of the converter's output voltage. With the recent revolution in Digital Signal Processing (DSP) technology, converter control is moving toward DSP-based real-time digital control systems. Digital control offers low cost with increased flexibility and accuracy. In this thesis, three open-loop PWM control schemes are evaluated and compared in both the time domain and the frequency domain. Theoretical analysis and spectrum evaluation have been completed, and digital simulation is conducted for each of the control schemes to verify the theoretical analysis. An experimental implementation based on a TMS320F2812 DSP is presented, and finally the system's experimental results are demonstrated.
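To make the time-domain/frequency-domain evaluation concrete, here is a hedged sketch of one common open-loop scheme, bipolar sinusoidal PWM, together with its spectrum; it is not the thesis's implementation, and the carrier frequency, modulation index, and sample rate are invented for illustration:

```python
import numpy as np

fs = 200_000          # simulation sample rate (Hz)
f_ref = 50            # reference (output) frequency (Hz)
f_car = 2_000         # triangular carrier frequency (Hz)
m = 0.8               # amplitude modulation index
t = np.arange(0, 0.2, 1 / fs)            # ten fundamental cycles

ref = m * np.sin(2 * np.pi * f_ref * t)  # modulating (reference) wave
# Triangle carrier in [-1, 1], built by folding a sawtooth.
car = 2 * np.abs(2 * ((t * f_car) % 1) - 1) - 1
pwm = np.where(ref >= car, 1.0, -1.0)    # bipolar SPWM switching waveform

# Spectrum: for bipolar SPWM the fundamental magnitude tracks m,
# and the dominant harmonics cluster around the carrier frequency.
spec = np.abs(np.fft.rfft(pwm)) * 2 / len(pwm)
freqs = np.fft.rfftfreq(len(pwm), 1 / fs)
fund = spec[np.argmin(np.abs(freqs - f_ref))]
```

The recovered fundamental amplitude `fund` is approximately the modulation index `m`, which is the basic property a spectrum evaluation of an open-loop PWM scheme verifies.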
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-0519
- Format
- Thesis
- Title
- A Framework for Implementing Independent Component Analysis Algorithms.
- Creator
-
Ejaz, Masood, Foo, Simon Y., Meyer-Baese, Anke, Liu, Xiuwen, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Independent Component Analysis (ICA) is a statistical and computational technique for revealing hidden factors that underlie sets of random variables, measurements, or signals. ICA defines a generative model for the observed multivariate data, which is generally given as a large database of samples. In the model, the data samples are assumed to be linear or non-linear mixtures of some unknown latent variables (time-dependent or time-independent), and the mixing system is also unknown. The latent variables, if time-independent, are assumed to have a non-Gaussian distribution; for variables that have a particular time structure, the non-Gaussian condition can be relaxed. The latent variables are also assumed to be mutually independent. These variables are called the independent components of the observed data and can be found, up to some degree of accuracy, using different algorithms based on ICA techniques. Several ICA algorithms, based on different approaches, are widely used for all sorts of applications. These algorithms include, but are not limited to, the popular FastICA, FOBI (Fourth-Order Blind Identification) and JADE (Joint Approximate Diagonalization of Eigen-Matrices), Maximum Likelihood and Infomax, kernel-based algorithms, and SOBI (Second-Order Blind Identification). All of the algorithms except SOBI are used for time-independent data. The main purpose of this research is to create a framework for using different ICA algorithms; in other words, to analyze the statistical properties of the data to estimate which ICA algorithm will be best suited for, or will converge for, that specific type of data. The data to be analyzed can come from any application or source, although for this research we generated a large number of different datasets with random mixtures of different numbers of random variables following a number of different distributions.
The idea is to build a system that takes the data and yields characteristics or specifications of the data that correlate maximally with some specific type of ICA algorithm or algorithms. Four different ICA algorithms were used for this research: FastICA, based on optimization of the negentropy of the datasets; Infomax, based on the maximum likelihood of the datasets; Joint Approximate Diagonalization of Eigen-Matrices (JADE), based on the fourth-order cumulant tensor of the input data; and Kernel ICA, based on optimization of the canonical correlation of the mapped values of the input datasets in the kernel space. We used hundreds of datasets to study the errors generated by all of the methods and the correlation between the datasets and the methods, and found that for some specific parameters of the ICA algorithms, one can estimate, with high probability, the relationship between the statistics of the datasets and the approach to be used to find the independent components. These easily computed statistics can predict with high accuracy the ICA method or methods to be used for a specific dataset without actually running all of the ICA methods, saving considerable time and processing resources and thus increasing the efficiency of the researcher.
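For readers unfamiliar with the algorithms being compared, the following self-contained sketch of deflationary FastICA (one of the four methods named above) shows the generative-model setup: mix two non-Gaussian sources through an unknown matrix, then recover them up to permutation, sign, and scale. The implementation, sources, and mixing matrix are illustrative inventions, not the thesis's datasets or code:

```python
import numpy as np

def fastica(X, max_iter=200, tol=1e-8, seed=0):
    """Minimal deflationary FastICA with the tanh nonlinearity.
    X: (n_samples, n_mixtures) observed mixtures."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=0)
    # Whiten via eigendecomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ E @ np.diag(d ** -0.5) @ E.T
    n = Z.shape[1]
    W = np.zeros((n, n))
    for i in range(n):                             # one component at a time
        w = rng.standard_normal(n)
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            g = np.tanh(Z @ w)
            # Fixed-point update: w <- E[z g(w'z)] - E[g'(w'z)] w
            w_new = (Z * g[:, None]).mean(0) - (1 - g ** 2).mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)     # deflate found components
            w_new /= np.linalg.norm(w_new)
            done = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if done:
                break
        W[i] = w
    return Z @ W.T                                 # estimated components

# Demo: unmix a uniform and a Laplacian source from two mixtures.
rng = np.random.default_rng(1)
S = np.column_stack([rng.uniform(-1, 1, 2000), rng.laplace(0, 1, 2000)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])            # "unknown" mixing matrix
S_hat = fastica(S @ A.T)

# ICA recovers sources only up to permutation, sign, and scale,
# so match estimates to sources by absolute correlation.
C = np.abs(np.corrcoef(S.T, S_hat.T)[:2, 2:])
```

Each row of `C` should contain one entry close to 1, showing that each true source is recovered by some estimated component.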
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-0584
- Format
- Thesis
- Title
- College of Engineering Microwave Noise Temperature Measurement Uncertainty Analysis Utilizing Monte Carlo Simulations.
- Creator
-
Smith, Ronald Joseph, Weatherspoon, Mark H., Arora, Rajendra K., Roberts, Rodney G., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The ability to quantify device performance characteristics is a concern shared by developers, manufacturers, and consumers alike. From a subatomic perspective, the fluctuations found in repeated measurements can be attributed to the random nature of the charge carriers, the electrons. This limitation is present in any receiver noise measurement set-up. The uncertainty of a noise measurement should be reported with the measurement, but assessing it can be problematic. The receiver system noise equation, which describes a measurement system, possesses non-linear parameter dependencies; because of this, an intuitive or quantitative assessment of the measurement uncertainties would be very difficult, if not impossible, to obtain. This research analyzes the measurement uncertainty inherent to a receiver noise measurement set-up utilizing Monte Carlo simulations. The algorithm used to assess the uncertainty incorporates a random number generator, a non-linear least squares fitting routine, and an uncertainty extraction routine. The random number generation depends on the behavior of the noise sources; consequently, it produces either a normal or a uniform distribution of data. Normalizing the generator allows the spread to be centered about a desired mean with a desired variance. The variance is a function of the underlying uncertainties associated with the test equipment employed; these values are given in the equipment specification sheets. The spread of real measurement data taken in a testing environment arises from perfectly uncorrelated, partially correlated, and perfectly correlated noise sources. The extreme cases (perfectly uncorrelated and perfectly correlated) are utilized to determine the effect of the erratic behavior of the charge carriers at the extremes. To simulate correlated noise sources, the random numbers are generated with the same random number generator.
For the uncorrelated noise sources, the random numbers are generated by separate random processes. Once the random numbers are created, they are used to generate a spread of simulated noise parameter data. Due to the non-linear dependencies of these noise parameters, the effects of the random deviates on measurement uncertainty cannot be predicted. An over-determined system of equations allows the receiver parameters of interest to be solved for; such a system can be created because one of the underlying noise parameters has multiple states, and over-determining the system allows for statistical smoothing of the data points. As mentioned previously, the noise parameters have non-linear dependencies, and the system noise equation cannot easily be transformed into a linear form; consequently, a non-linear fitting routine is employed. The number of solutions the routine could find for one over-determined set of equations is endless; therefore, the acceptable solutions are confined to values close to the true values, "true values" being a set of values actually measured in the testing environment. This confinement simply entails setting the fitting routine's initial guess to the true value. Once a set of values is found for the receiver parameters, the process is repeated N times (N being the number of simulated data points desired). For each receiver parameter, there are N values that deviate about some mean value. The spread in values is a function of the underlying random process, but the behavior cannot be predicted due to the non-linear dependencies. The only assumption that can be made is that the spread should exhibit a Gaussian distribution, since all of the random data (except ambient temperature) is created from this normal distribution. The overall uncertainty in the noise temperature for several devices is determined and compared with the value estimated for a simulated system.
Several frequencies are selected for the analysis. The results show good agreement for calculations performed within either 1 or 2 standard deviations of the mean value for the hot and ambient loads. The estimated uncertainty for the simulated receiver system explains why the cold load noise temperature measurement uncertainty diverges from the values found for the other DUTs.
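The core Monte Carlo idea, propagating equipment-sheet uncertainties through a non-linear noise relation, can be sketched with the standard Y-factor equation. This is a simplified stand-in for the thesis's full receiver system noise equation and fitting routine; the load temperatures, uncertainty magnitudes, and sample count below are invented for illustration:

```python
import numpy as np

def trx_from_y(y, t_hot, t_cold):
    """Receiver noise temperature from the Y-factor relation
    Y = (T_hot + T_rx) / (T_cold + T_rx), solved for T_rx."""
    return (t_hot - y * t_cold) / (y - 1.0)

rng = np.random.default_rng(42)
N = 100_000                         # number of Monte Carlo trials
t_hot, t_cold = 9460.0, 296.0       # hot load and ambient load (K), illustrative
t_rx_true = 500.0
y_true = (t_hot + t_rx_true) / (t_cold + t_rx_true)

# Perturb each measured quantity with a spec-sheet-style uncertainty.
y_meas = y_true * (1 + 0.01 * rng.standard_normal(N))   # 1% power-ratio error
t_hot_meas = t_hot + 50.0 * rng.standard_normal(N)      # hot-load uncertainty
t_cold_meas = t_cold + 1.0 * rng.standard_normal(N)     # ambient uncertainty

# Propagate through the non-linear relation and read off the spread.
t_rx = trx_from_y(y_meas, t_hot_meas, t_cold_meas)
mean, std = t_rx.mean(), t_rx.std()
```

Because `trx_from_y` is non-linear in Y, the output spread cannot be read directly off the input variances, which is exactly why a Monte Carlo treatment is used; `std` is the simulated noise-temperature uncertainty.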
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-0348
- Format
- Thesis
- Title
- A Comparison of Wi-Fi and WiMAX with Case Studies.
- Creator
-
Wu, Ming-Chieh, Harvey, Bruce A., Yu, Ming, Foo, Simon Y., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Currently over 50% of the world's population checks e-mail every day; collecting information from the Internet has become routine. In the early 21st century, wireless communication has become a hot topic in IT (Information Technology) and CT (Communication Technology), as evidenced by the growth of wireless technologies such as 3G, Wi-Fi, and WiMAX. 3G is a cellular technology developed in conjunction with the cellular phone network; Wi-Fi is a wireless local area network technology; WiMAX is designed for the wireless metropolitan area network. Today, people want not only fixed wireless access to the Internet but mobile wireless access as well. They want a ubiquitous connection, even in a train, a cab, or the subway. This demand is producing increasing competition between the leading wireless technologies. 3G, Wi-Fi, and WiMAX all appear to have the potential to meet the demand, but each still has issues that need to be addressed. The future direction of wireless Internet access is uncertain, including whether these three technologies will operate cooperatively or competitively. This thesis predicts that future direction through analysis of the 3G, Wi-Fi, and WiMAX technologies and the evaluation of three wireless access case studies. It begins with an introduction to the history of the Internet and continues with a discussion of the technical aspects of 3G, Wi-Fi, and WiMAX. After the technology introduction, three current implementations of wireless Internet access are evaluated as case studies to verify the capabilities of Wi-Fi and WiMAX and to discuss the feasibility of building a city-wide wireless network. Finally, a reasonable prediction of the future implementation of a city-wide wireless Internet structure is presented.
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-0701
- Format
- Thesis
- Title
- A Stochastic Approach to Digital Control Design and Implementation in Power Electronics.
- Creator
-
Zhang, Da, Li, Hui, Collins, Emmanuel G., Foo, Simon Y., Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
This dissertation uses the theory of stochastic arithmetic as a solution for the FPGA implementation of complex control algorithms in power electronics applications. Compared with traditional digital implementation, the stochastic approach simplifies the computation involved and saves digital resources. The implementation of stochastic arithmetic is also compatible with modern VLSI design and manufacturing technology and enhances the capability of FPGA devices. New anti-windup PI controllers are proposed and implemented in an FPGA device using stochastic arithmetic. The developed designs enhance the computational capability of the FPGA and offer several advantages: large dynamic range, easy digital design, minimization of the scale of digital circuits, reconfigurability, and direct hardware implementation, while maintaining the high control performance of traditional anti-windup techniques. A stochastic neural network (NN) structure is also proposed for FPGA implementation. NNs are highly parallel algorithms that usually occupy enormous digital resources and are therefore difficult to implement on low-cost digital hardware devices with limited digital resources. Stochastic arithmetic simplifies the computation of NNs and significantly reduces the number of logic gates required for the proposed NN estimator. In this work, the proposed stochastic anti-windup PI controller and stochastic neural network theory are applied to design and implement the field-oriented control of an induction motor drive. The controller is implemented on a single field-programmable gate array (FPGA) device with integrated neural network algorithms. The newly proposed stochastic PI controllers are also developed as motor speed controllers with an anti-windup function. An alternative stochastic NN structure is proposed for an FPGA implementation of a feed-forward NN to estimate the feedback signals in an induction motor drive.
Compared with the conventional digital control of motor drives, the proposed stochastic-based algorithm has many advantages. It simplifies the arithmetic computations on the FPGA and allows the neural network algorithms and classical control algorithms to be implemented together easily on a single FPGA. The control and estimation performance has been verified successfully using a hardware-in-the-loop test setup. Beyond motor drive applications, the proposed stochastic neural network structure is also applied to neural-network-based wind speed sensorless control for wind-turbine-driven systems. The proposed stochastic neural network wind speed estimator considers the optimized usage of FPGA resources and the trade-off between accuracy and the number of digital logic elements employed. Compared with the traditional approach, the proposed estimator uses minimal digital logic resources and enables large parallel neural network structures to be implemented in low-cost FPGA devices with high fault-tolerance capability. The neural network wind speed estimator has been verified successfully on a wind turbine test bed installed at CAPS (Center for Advanced Power Systems). Given that a low-cost, high-performance implementation can be achieved, it is believed that such stochastic control ICs can be extended to many other industrial applications involving complex algorithms.
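The resource savings come from a core property of stochastic arithmetic: a number in [0, 1] is encoded as the 1-probability of a random bitstream, so multiplication reduces to a single AND gate per bit. A minimal software model of this encoding (illustrative only; the thesis's hardware uses LFSR-style generators, not NumPy):

```python
import numpy as np

def to_stream(p, n, rng):
    """Encode p in [0, 1] as an n-bit stochastic bitstream:
    each bit is independently 1 with probability p."""
    return (rng.random(n) < p).astype(np.uint8)

def stochastic_multiply(a, b, n=100_000, seed=0):
    """Multiply two unipolar stochastic numbers. In hardware this is
    one AND gate per bit applied to two independent streams; the
    result is the fraction of 1s in the AND-ed stream."""
    rng = np.random.default_rng(seed)
    sa = to_stream(a, n, rng)
    sb = to_stream(b, n, rng)
    return float(np.mean(sa & sb))

est = stochastic_multiply(0.5, 0.8)   # expected value: 0.5 * 0.8 = 0.40
```

Accuracy grows with stream length `n`, which is exactly the accuracy-versus-logic-resources trade-off the estimator design weighs.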
- Date Issued
- 2006
- Identifier
- FSU_migr_etd-0543
- Format
- Thesis
- Title
- A Design Methodology for the Implementation of Fuzzy Logic Traffic Controller Using Field Programmable Gate Array.
- Creator
-
Ambre, Mandar Shriram, Kwan, Bing, Meyer-Baese, Uwe, Foo, Simon, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
In this thesis, an approach is proposed for the design and implementation of fuzzy traffic controllers using Field Programmable Gate Arrays (FPGAs). The focus of this study is to develop an effective traffic signaling strategy to be implemented at a typical intersection with four approaches. Adaptive traffic control using fuzzy principles has been demonstrated and reported in the literature. Here a high-level design approach is suggested, involving VHDL-based logic synthesis and the use of state diagrams with a VHDL back end for graphical design description. The operations of the fuzzifier and the defuzzifier of the fuzzy controller are described in VHDL, and the fuzzy rule base for the controller is described using the state diagrams. Specifically, the fuzzy inference based on the fuzzy rules is implemented in MATLAB code, and the output of the MATLAB program is stored in a ROM for use by the VHDL code. Once the VHDL code is obtained, the hardware is implemented on the UP1 Education board. After the design was tested on the UP1 board, a printed circuit board was designed for the system using Protel Design Explorer; the input to the circuit board comes from traffic sensors in the field, and its output is given to the traffic controller.
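To illustrate the fuzzify/infer/defuzzify pipeline that the VHDL and MATLAB pieces above divide between them, here is a toy two-rule Mamdani controller for green-time extension. The membership functions, universes, and rule base are invented for illustration and are not the thesis's rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b
    (a == b or b == c gives a shoulder function)."""
    left = (x - a) / (b - a) if b > a else 1.0
    right = (c - x) / (c - b) if c > b else 1.0
    return max(min(left, right), 0.0)

def green_extension(queue, rate):
    """Toy Mamdani controller: queue length (cars) and arrival rate
    (cars/min) -> green-time extension (s), centroid-defuzzified."""
    # Fuzzification of the crisp sensor inputs.
    q_short = tri(queue, 0, 0, 10)
    q_long = tri(queue, 5, 15, 15)
    r_low = tri(rate, 0, 0, 6)
    r_high = tri(rate, 3, 9, 9)

    # Min inference: rule firing strengths.
    w_short = min(q_short, r_low)    # IF queue short AND rate low  -> SHORT
    w_long = min(q_long, r_high)     # IF queue long  AND rate high -> LONG

    # Clip the output sets and defuzzify by centroid on a discrete universe.
    ext = np.linspace(0, 30, 301)
    mu = np.maximum(
        np.minimum([tri(e, 0, 0, 15) for e in ext], w_short),
        np.minimum([tri(e, 15, 30, 30) for e in ext], w_long))
    return float(np.sum(mu * ext) / (np.sum(mu) + 1e-12))
```

In the thesis's partitioning, the fuzzifier and defuzzifier stages would live in VHDL while the inference result is precomputed in MATLAB and stored in ROM; this sketch simply runs all three stages in software.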
- Date Issued
- 2004
- Identifier
- FSU_migr_etd-0028
- Format
- Thesis
- Title
- Modeling Potentials in Dinoflagellate Noctiluca Miliaris.
- Creator
-
Aarons, Richard, Weatherspoon, Mark H., Andrei, Petru, Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Noctiluca Miliaris is a single-cell, multi-membrane organism that has bioluminescent capabilities. This luminescent ability is closely associated with the flash-triggering potential produced by an excitable membrane under active conditions, such as the movement of ions through channels against the concentration gradient. External energy in the form of an electrical or mechanical stimulus is required for this type of ionic movement, which can result in all-or-none spikes in the transmembrane potential if a certain threshold voltage is exceeded. Further examination of this strongly nonlinear relationship between the transmembrane voltage and the rate of ion flow due to an applied stimulating source provides valuable insight into the action potential that leads to the luminescence, and it also allows for the development of models of the vacuolar potential of Noctiluca Miliaris due to an applied current. An electric circuit model based upon a two-membrane, spherical cell consists of the series combination of a parallel R-C circuit representing the non-excitable, passive outer membrane and a parallel R-C-variable source resistor circuit representing the excitable inner membrane. The variable source resistor is a series combination of a dependent voltage source with a variable resistor: the dependent voltage source models the ionic gradient, and the variable resistor models the voltage- and time-dependent conductance of the ion channel of the active membrane. The variable resistor is also known as the active resistance, and a model of this resistance is primarily governed by the active membrane voltage. With the values of each circuit element extracted from experimental measurements, the transmembrane potential is simulated using rectangular current pulses as the stimulating source for both sub-threshold and threshold conditions. A spherical cell model with an external electric field incident on it is evaluated by solving Poisson's equation.
This model requires knowledge of the cell radius, membrane thickness, and the conductivity and permittivity of the bath, membrane, and vacuole. With knowledge of these values from experimental measurements, the transmembrane potential is calculated for a stimulating source of known external electric field intensity under sub-threshold conditions.
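A stripped-down version of the sub-threshold case of the circuit model described above can be simulated directly: two parallel R-C membranes in series carry the same stimulating current, and the total potential is the sum of the two membrane voltages. The element values, pulse amplitude, and time step below are invented for illustration, not the thesis's extracted measurements, and the active (variable-resistance) branch is omitted since sub-threshold behavior is passive:

```python
import numpy as np

def simulate(i_stim, dt, r_out, c_out, r_in, c_in):
    """Forward-Euler response of two series parallel-RC membranes
    (passive outer + passive inner) to a stimulating current waveform."""
    v_out = np.zeros(len(i_stim))
    v_in = np.zeros(len(i_stim))
    for k in range(1, len(i_stim)):
        # Each membrane obeys C dV/dt = I - V/R with the same series current.
        v_out[k] = v_out[k-1] + dt * (i_stim[k-1] - v_out[k-1] / r_out) / c_out
        v_in[k] = v_in[k-1] + dt * (i_stim[k-1] - v_in[k-1] / r_in) / c_in
    return v_out + v_in            # total (vacuolar) potential

dt = 1e-5                          # time step (s)
t = np.arange(0, 0.4, dt)
pulse = np.where((t > 0.02) & (t < 0.12), 2e-9, 0.0)   # 2 nA, 100 ms pulse
# Purely illustrative element values (not from the thesis).
v = simulate(pulse, dt, r_out=5e7, c_out=1e-9, r_in=2e8, c_in=5e-10)
```

The response rises with the two membrane time constants during the pulse and decays back toward zero afterward, the characteristic sub-threshold behavior the model is built to reproduce.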
- Date Issued
- 2010
- Identifier
- FSU_migr_etd-0101
- Format
- Thesis
- Title
- Analysis and Implementation of Grid-Connected Solar PV with Harmonic Compensation.
- Creator
-
Cao, Jianwu, Edrington, Chris S., Foo, Simon Y., DeBrunner, Linda, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
A grid-connected photovoltaic (PV) system with harmonic compensation functionality is introduced in this thesis, and a test bed is built to validate the practicality of the proposed scheme. Increasing interest and investment in renewable energy are giving rise to the rapid development of high-penetration solar energy. There are multiple ways to interface PV arrays with the power grid; the topology developed in the thesis, a multi-string, two-stage PV module with a centralized inverter, is better suited to medium-power applications. However, the output of solar arrays varies with solar irradiation and weather conditions. Therefore, a maximum power point tracking algorithm is implemented in the DC/DC converter to enable the PV arrays to operate at the maximum power point; the incremental conductance algorithm is employed to control the boost converter. The central inverter is then controlled by a decoupled current control algorithm and interfaced with the utility grid via the distribution network. The current control of the inverter is independent of the maximum power point control of the DC/DC converter. Finally, system performance and transient responses are analyzed under disturbance conditions, and system stability is evaluated when the solar irradiation changes or a system fault occurs. The system is simulated in MATLAB. The increasing use of static power converters and switched-mode power supplies injects harmonic current into the power system; the PV system can advantageously be controlled to compensate the harmonic current as well as supply active power. The harmonic current is extracted using a time-domain current detection method, which is much easier to implement and requires no transformation, in contrast to the instantaneous power theory method. The system simulation is accomplished and validated using PSCAD/EMTDC, and an experimental test bed is also established to verify the proposed algorithm.
Eventually, the total harmonic distortion (THD) of the grid current after compensation is analyzed and compared with the standard of IEEE 519-1992.
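The incremental conductance rule mentioned in the abstract compares the incremental conductance dI/dV with the instantaneous conductance -I/V to decide which way to move the boost converter's duty cycle. A minimal sketch of that rule; the step size and duty-cycle sign convention are illustrative assumptions, not the thesis implementation:

```python
# Hypothetical sketch of the incremental-conductance MPPT rule:
# at the maximum power point dP/dV = 0, equivalently dI/dV = -I/V.
def inc_cond_step(v, i, v_prev, i_prev, d, step=0.005):
    """Return an updated boost-converter duty cycle d.

    Assumed convention: raising the duty cycle lowers the PV-side
    operating voltage (typical for a boost input stage).
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return d                      # at MPP, hold duty cycle
        return d - step if di > 0 else d + step
    g = di / dv                           # incremental conductance dI/dV
    if g == -i / v:                       # dP/dV == 0: at MPP
        return d
    # dP/dV > 0 -> raise operating voltage (lower duty), else lower it
    return d - step if g > -i / v else d + step
```

In a real controller the equality test would use a tolerance band, and the step size is usually a tuned trade-off between tracking speed and steady-state ripple.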
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-0091
- Format
- Thesis
- Title
- Expansion and Implementation of the Wave Variable Method in Multiple Degree-of-Freedom Systems.
- Creator
-
Alise, Marc T., Roberts, Rodney G., Moore, Carl, Foo, Simon, Repperger, Daniel, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Adding force feedback to a teleoperation system can greatly improve a user's ability to complete tasks. However, operating in the presence of time delay can cause serious problems for bilateral teleoperation systems: even a small delay will generally degrade the system's performance and can cause instability. An important approach that guarantees stability for any fixed time delay is the wave variable method. This thesis presents recent material on teleoperation systems using wave variables. In particular, we describe a wave variable scheme based on a family of scaling matrices for a multiple degree-of-freedom bilateral teleoperation system, and derive a larger and more complete family of scaling matrices that guarantees the system remains stable for a fixed time delay. The validity of the complete family of scaling matrices is verified through simulations and experiments. A multiple degree-of-freedom bilateral teleoperation system using the new wave variable method is simulated with a SIMULINK model. The new derivation was also implemented in hardware on two different systems: an Immersion joystick driven by a C++ program, and a PHANToM Omni haptic device with a virtual environment. Finally, an experiment was constructed using the PHANToM Omni haptic device as both the master and slave of the teleoperation system. Using Matlab and SIMULINK, we added time delay to the communication channel and implemented the wave variable method with the complete family of scaling matrices. Human subjects were used to determine the best set of parameters for the system.
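The wave variable transformation referenced above encodes velocity and force into two waves whose difference of squares equals twice the transmitted power, which is what makes the delayed channel passive. A scalar (single degree-of-freedom) sketch, with b the wave impedance; the values are illustrative, not the thesis code:

```python
import math

def to_waves(xdot, force, b=1.0):
    """Encode velocity xdot and force into forward/backward waves (u, v)."""
    s = math.sqrt(2.0 * b)
    return (b * xdot + force) / s, (b * xdot - force) / s

def from_waves(u, v, b=1.0):
    """Decode waves back into (velocity, force)."""
    s = math.sqrt(2.0 * b)
    return (u + v) / s, (u - v) * s / 2.0
```

The identity (u^2 - v^2)/2 = xdot * force is the passivity argument in miniature: power entering the channel as u leaves as v, so a fixed delay cannot create energy. The multi-DOF scheme in the thesis replaces the scalar b with scaling matrices.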
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-0167
- Format
- Thesis
- Title
- Mechanism and Robot Design: Compliance Synthesis and Optimal Fault Tolerant Manipulator Design.
- Creator
-
Yu, Hyun Geun, Roberts, Rodney G., III, Carl D. Crane, Park, Young-Bin, Meyer-Baese, Anke, Foo, Simon Y., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
In this research, two important concepts concerning parallel robots are investigated: compliance and fault tolerance. First, we address the issue of synthesizing a suitable compliance. This is an important problem since a well-designed compliance/stiffness mechanism can provide proper force regulation and compensate for the inevitable inaccuracy of traditional control systems. Mathematically, the compliance/stiffness of a robotic mechanism is usually modeled by a 6 by 6 symmetric positive definite matrix at an equilibrium point using screw theory. The synthesis of unloaded spatial stiffness has attracted some attention recently, and several techniques have been developed to systematically synthesize compliance mechanisms with a given symmetric positive definite spatial stiffness matrix. However, when an external wrench is exerted on the mechanism and the mechanism moves away from its unloaded equilibrium, the modeled compliance/stiffness matrix becomes non-symmetric. In this study, the non-symmetric stiffness matrix for a robotic mechanism is derived and converted into a particularly simple form using matrix algebra. Based on this canonical form of the stiffness matrix, two novel procedures are presented for the first time for synthesizing a desired non-symmetric stiffness matrix for a planar structure when an external load puts the system in a loaded equilibrium. The second part of the dissertation focuses on the problem of designing nominal manipulator Jacobians that are optimally fault tolerant to one or more joint failures. In this work, optimality is defined in terms of the worst-case relative manipulability index. While this approach is applicable to both serial and parallel mechanisms, it is especially applicable to parallel mechanisms with a limited workspace. It is shown that a previously derived inequality for the worst-case relative manipulability index is generally not achieved for fully spatial manipulators, and that the concept of optimal fault tolerance to multiple failures is more subtle than previously indicated. The final goal of this work is to identify the class of eight degree-of-freedom Gough-Stewart platforms that are optimally fault tolerant to up to two locked-joint failures. Configurations of serial and parallel robots that achieve optimal fault tolerance for a given Jacobian are presented as results of this study.
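The worst-case relative manipulability index used in the second part compares the manipulability remaining after a joint is locked (its Jacobian column removed) with the nominal manipulability. A numerical sketch using Yoshikawa's measure w(A) = sqrt(det(A A^T)); the example Jacobians are illustrative, not taken from the dissertation:

```python
import numpy as np

def relative_manipulability(J):
    """Worst-case relative manipulability over single locked-joint failures.

    r_i = w(J_i) / w(J), where J_i is J with column i deleted and
    w(A) = sqrt(det(A A^T)). Assumes the nominal Jacobian J is full rank.
    """
    w = np.sqrt(np.linalg.det(J @ J.T))
    r = []
    for i in range(J.shape[1]):
        Ji = np.delete(J, i, axis=1)          # joint i locked
        r.append(np.sqrt(max(np.linalg.det(Ji @ Ji.T), 0.0)) / w)
    return min(r)
```

For an m x (m+1) Jacobian, r_i equals |n_i|, the magnitude of component i of the unit null vector, which is why configurations with equal-magnitude null-vector components are the optimally fault tolerant ones.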
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-0797
- Format
- Thesis
- Title
- Morphological Image Segmentation for Co-Aligned Multiple Images Using Watersheds Transformation.
- Creator
-
Yu, Hyun Geun, Roberts, Rodney G., Foo, Simon Y., Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Image segmentation is one of the most important categories of image processing. Its purpose is to divide an original image into homogeneous regions, and it can be applied as a pre-processing stage for other image processing methods. Several approaches to image segmentation exist. The watersheds transformation is studied in this thesis as a particular region-based approach to the segmentation of an image. The complete transformation incorporates pre-processing and post-processing stages that deal with embedded problems such as edge ambiguity and the output of a large number of regions: the Multiscale Morphological Gradient (MMG) serves as the pre-processing stage and the Region Adjacency Graph (RAG) as the post-processing stage. The RAG incorporates dissimilarity criteria to merge adjacent homogeneous regions. In this thesis, the proposed system is applied to a set of co-aligned images comprising a pair of intensity and range images. It is expected that hidden edges within the intensity image can be detected by observing range data, or vice versa. It is also expected that the contribution of the range image in region merging can compensate for the dominance of shadows within the intensity image, regardless of the original intensity of the object.
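The RAG post-processing step can be illustrated with a toy union-find merge that joins adjacent regions whose dissimilarity falls below a threshold. The criterion here (difference of mean intensities against a threshold tau) is a simplified stand-in for the thesis's combined intensity/range criteria:

```python
def merge_regions(means, adjacency, tau=10.0):
    """Merge adjacent regions whose mean intensities differ by less than tau.

    means:     per-region mean intensity (index = region label)
    adjacency: (a, b) pairs of adjacent region labels from the RAG
    Returns a merged label for each region (union-find representative).
    """
    parent = list(range(len(means)))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path compression
            a = parent[a]
        return a

    for a, b in adjacency:
        if abs(means[a] - means[b]) < tau:    # dissimilarity criterion
            parent[find(a)] = find(b)
    return [find(i) for i in range(len(means))]
```

A production implementation would update the merged region's statistics after each union and revisit edges; this greedy single pass only shows the RAG merging idea.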
- Date Issued
- 2004
- Identifier
- FSU_migr_etd-0785
- Format
- Thesis
- Title
- Modeling and Application of Effective Channel Utilization in Wireless Networks.
- Creator
-
Ng, Jonathan, Yu, Ming (Professor of scientific computing), Zhang, Zhenghao, Harvey, Bruce A., Andrei, Petru, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
- Abstract/Description
-
Radio spectrum is a naturally scarce resource in wireless networks and a major investment in network deployment, so improving the channel utilization (CU) of the spectrum is a challenging topic in recent research. In a network environment, the utilization of a channel is measured by the effective CU (ECU), i.e., the fraction of total operation time during which the node is transmitting or the medium is sensed busy. However, existing work does not provide a valid model for ECU. We investigate the relationship between ECU and the interference from other wireless transmission nodes in a network, as well as from potentially malicious interfering sources. By examining the transmission-time and co-transmission-time ratios between two or more interferers, we propose a new model based on the channel occupation time of all nodes in a network. The model finds its mathematical foundation in set theory: by eliminating overlapping transmission time intervals instead of simply adding the transmission times of all interferers together, it obtains the expected total interference time by properly combining the transmission time of each node with the time when two or more nodes transmit simultaneously. By dividing the interferers into groups according to the received interference power at the node of interest, less significant interfering signals can be ignored to reduce complexity when investigating real scenarios. The model leads to a new detection method for jamming attacks in wireless networks based on a criterion that combines ECU and CU. In the experiments, we find a strong connection between ECU and the received interference power and time. In many cases, strong and frequent interference is accompanied by a decline in ECU, although the descending slope may be steep or flat. When the decrease in ECU is not significant, CU instead shows a sharp drop. Therefore, the two metrics ECU and CU, when properly combined, prove to be an effective measure for judging strong interference. In addition, relating our approach to other jamming detection methods in the literature, we build a mathematical connection between the new jamming detection conditions and the Packet Delivery Ratio (PDR), which has been proven effective by previous researchers; the correlation between the new criteria and PDR supports the validity of the former by relating it to a tested mechanism. Both the ECU model and the jamming detection method are thoroughly verified with OPNET through simulation scenarios, which are documented with configuration data and collected statistics. In particular, the radio jamming detection experiments simulate a dynamic radio channel allocation (RCA) module with a user-friendly graphical interface, through which the interference, the jamming state, and the channel switching process can be monitored. Because ECU treats one node's transmission as interference for all other nodes sharing the same channel, the model can further be applied to other problems such as global performance optimization based on the total ECU of all nodes; this is planned as our next step. We would also like to compare the method's effectiveness with other jamming detection methods through more extensive experiments.
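The overlap-elimination idea at the heart of the model, counting each instant of channel occupation exactly once no matter how many interferers transmit simultaneously, is an interval-union computation. A minimal sketch; the (start, end) tuple layout is a hypothetical data representation:

```python
def busy_time(intervals):
    """Total time the channel is occupied by at least one transmitter.

    intervals: iterable of (start, end) transmission intervals.
    Overlapping intervals are merged so co-transmission time is not
    double counted.
    """
    total, cur_s, cur_e = 0.0, None, None
    for s, e in sorted(intervals):
        if cur_e is None or s > cur_e:        # disjoint: close previous run
            if cur_e is not None:
                total += cur_e - cur_s
            cur_s, cur_e = s, e
        else:                                 # overlap: extend current run
            cur_e = max(cur_e, e)
    if cur_e is not None:
        total += cur_e - cur_s
    return total
```

ECU over a horizon T would then be busy_time(intervals) / T, which is exactly the "union instead of sum" correction the abstract describes.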
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Ng_fsu_0071E_14083
- Format
- Thesis
- Title
- Identification of the Inertial Parameters of Manipulator Payloads.
- Creator
-
Reyes, Ryan-David, Department of Electrical and Computer Engineering
- Abstract/Description
-
Momentum-based motion planning allows small and lightweight manipulators to lift loads that exceed their rated load capacity. One such planner, Sampling Based Model Predictive Optimization (SBMPO), developed at the Center for Intelligent Systems, Control, and Robotics (CISCOR), uses dynamic and kinematic models to produce trajectories that take advantage of momentum. However, the inertial parameters of the payload must be known before the trajectory can be generated. This research utilizes a method based on least squares techniques for determining the inertial parameters of a manipulator payload, applied specifically to a two degree-of-freedom manipulator. A set of exciting trajectories in task space, i.e., trajectories that sufficiently excite the manipulator dynamics, is commanded to the system. Inverse kinematics are then used to determine the desired angle, angular velocity, and angular acceleration for the manipulator joints. Using the sampled torque, joint position, velocity, and acceleration data, the least squares technique produces an estimate of the inertial parameters of the payload. This work focuses on determining which trajectories produce sufficient excitation so that an adequate estimate can be obtained.
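The least-squares step rests on the fact that manipulator dynamics are linear in the inertial parameters: each sample contributes a regressor block Y(q, qdot, qddot) with tau = Y @ theta. Stacking all samples and solving in the least-squares sense can be sketched as follows (the synthetic data and names are illustrative, not the thesis code):

```python
import numpy as np

def estimate_params(Y_blocks, tau_blocks):
    """Estimate inertial parameters theta from sampled dynamics.

    Y_blocks:   per-sample regressor matrices Y(q, qdot, qddot)
    tau_blocks: per-sample measured joint torques
    Stacks everything and solves tau = Y @ theta by least squares.
    """
    Y = np.vstack(Y_blocks)
    tau = np.concatenate(tau_blocks)
    theta, *_ = np.linalg.lstsq(Y, tau, rcond=None)
    return theta
```

Whether the estimate is well conditioned depends on the excitation: if the commanded trajectories do not sufficiently excite the dynamics, the stacked Y is nearly rank deficient, which is exactly the trajectory-selection question the thesis studies.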
- Date Issued
- 2014
- Identifier
- FSU_migr_uhm-0418
- Format
- Thesis
- Title
- Combined Electrical and Thermal Models for Integrated Cryogenic Systems of Multiple Superconducting Power Devices.
- Creator
-
Satyanarayana, Sharath R. (Sharath Raghav), Pamidi, Sastry V., Foo, Simon Y., Bernadin, Shonda, Florida State University, College of Engineering, Department of Electrical and Computer Engineering
- Abstract/Description
-
High Temperature Superconducting (HTS) technology is a potential option for applications that require high power densities in lightweight and compact solutions for transportation systems such as electric aircraft and all-electric Navy ships. Several individual HTS power devices have been successfully demonstrated for these systems. However, the real benefit lies in providing system-level design flexibility and operational advantages with an integrated cryogenic system. A centralized cryogenic cooling technology is being explored to serve multiple HTS devices in a closed-loop system. This provides high efficiency and permits directing the cooling power to where it is needed depending on the mission at hand, which provides operational flexibility. Design optimization, risk mitigation, and the operational characteristics under various conditions need to be studied to increase the confidence level in HTS technology. Development of simpler and cost-efficient cryogenic systems is essential to make HTS systems attractive. Detailed electrical and cryogenic thermal models of the devices are also necessary to understand the risks in HTS power systems and to devise mitigation techniques for all the potential failure modes. As the thermal and electrical characteristics of HTS devices are intertwined, coupled thermal and electrical models are necessary to perform system-level studies. To enable versatile and fast models, the thermal network method is introduced for cryogenic systems. The effectiveness of the modeling technique was demonstrated using case studies of multiple HTS devices in a closed-loop cryogenic helium circulation system connected in different configurations to assess the relative merits of each configuration. Studies of transient behavior of HTS systems are also important to understand the response of a large HTS system after one of the cryogenic cooling components fails. These studies are essential to understand the risks and the design or operational options available to mitigate some of them. The thermal network models developed in this study are also useful for studying the temperature evolution along the whole system as a function of time after a component fails, and for exploring design options to extend the operating time of a device such as an HTS cable after the failure of the cryogenic system.
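The thermal network method treats the cryogenic system like an electrical circuit: thermal conductances link nodes, heat loads act as current sources, and a cold head held at fixed temperature acts as a voltage source. A steady-state sketch; the two-node example and all numerical values are hypothetical, not from the thesis:

```python
import numpy as np

def solve_network(G, q, fixed):
    """Steady-state temperatures of a thermal network.

    G:     G[i][j] = thermal conductance (W/K) between nodes i and j
    q:     heat load (W) injected at each node
    fixed: {node index: temperature (K)} for fixed-temperature nodes
    Solves sum_j G_ij (T_i - T_j) = q_i for each free node.
    """
    n = len(q)
    A = np.zeros((n, n))
    b = np.array(q, dtype=float)
    for i in range(n):
        if i in fixed:
            A[i, i] = 1.0                 # pin node: T_i = fixed value
            b[i] = fixed[i]
            continue
        for j in range(n):
            if j != i and G[i][j]:
                A[i, i] += G[i][j]
                A[i, j] -= G[i][j]
    return np.linalg.solve(A, b)
```

A transient study of the kind described would add a heat capacity per node and integrate C dT/dt = q - G(T) over time; the steady-state solve above is the building block.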
- Date Issued
- 2018
- Identifier
- 2018_Su_Satyanarayana_fsu_0071N_14787
- Format
- Thesis
- Title
- Learning and Motion Planning for Gait-Based Legged Robots.
- Creator
-
Harper, Mario Yuuji, Erlebacher, Gordon, Collins, E., Beaumont, Paul M., Clark, Jonathan E., Shanbhag, Sachin, Meyer-Bäse, Anke, Florida State University, College of Arts and Sciences, Department of Scientific Computing
- Abstract/Description
-
Animals have demonstrated the capacity to traverse many complex unstructured terrains at high speeds by utilizing effective locomotion regimes. Motion in difficult and uncertain environments has seen only partial success on traditional wheeled or track-based robots, and is limited to slow deliberative maneuvers on legged robots focused on maintaining continuous stability through proper foothold selection. While legged robots have demonstrated successful navigation across many complex surfaces, motion planning algorithms currently fail to consider the unique mobility characteristics that honor the natural self-stabilizing dynamics of gait-based locomotion such as running and climbing. This dissertation outlines some of the specific motion planning challenges faced when attempting to plan for legged systems with dynamic gaits, with specific instances demonstrated on four robots: the dynamic running platforms XRL, LLAMA, and Minitaur, and the dynamic climbing platform TAILS. Using a unique implementation of Sampling Based Model Predictive Optimization (SBMPO) designed expressly for dynamic legged robots, we demonstrate the ability to learn kinodynamic models, to motion plan through obstacles on varied terrains, and to navigate on vertical walls. This research has pioneered a technique that allows dynamic legged robots to navigate while honoring the natural dynamics of robot gait. Further, this document describes the methods and algorithms that enabled Florida State University to be the first in the world to demonstrate motion planning on a dynamic climbing robot. This work is demonstrated in simulation and verified through hardware experiments on canonical motion planning scenarios, in controlled laboratory settings, and in unstructured terrains. Finally, this work has opened the field of dynamic legged robot intelligence for future researchers by enabling fundamental navigation and planning, efficient real-time algorithms for onboard computing, and the development of techniques to account for complex constrained motions unique to individual robots and terrains.
- Date Issued
- 2018
- Identifier
- 2018_Fall_Harper_fsu_0071E_14735
- Format
- Thesis
- Title
- Inferences in Shape Spaces with Applications to Image Analysis and Computer Vision.
- Creator
-
Joshi, Shantanu H., Srivastava, Anuj, Meyer-Baese, Anke, Klassen, Eric, Roberts, Rodney, Foo, Simon Y., Fisher, John W., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Shapes of boundaries can play an important role in characterizing objects in images. Shape analysis involves choosing mathematical representations of shapes, deriving tools for quantifying shape differences, and characterizing imaged objects according to the shapes of their boundaries. We describe an approach for statistical analysis of shapes of closed curves using ideas from differential geometry. In this thesis, we initially focus on characterizing shapes of continuous curves, both open and closed, in R^2, and then propose extensions to more general elastic curves in R^n. Under appropriate constraints that remove shape-preserving transformations, these curves form infinite-dimensional, non-linear spaces, called shape spaces. We impose a Riemannian structure on the shape space and construct geodesic paths under different metrics. Geodesic paths are used to accomplish a variety of tasks, including the definition of a metric to compare shapes, the computation of intrinsic statistics for a set of shapes, and the definition of intrinsic probability models on shape spaces. Riemannian metrics allow for the development of tools for computing intrinsic statistics for a set of shapes and clustering them hierarchically for efficient retrieval. Pursuing this idea, we also present algorithms to compute simple shape statistics (means and covariances) and derive probability models on shape spaces using local principal component analysis (PCA), called tangent PCA (TPCA). These concepts are demonstrated using a number of applications: (i) unsupervised clustering of imaged objects according to their shapes, (ii) developing statistical shape models of human silhouettes in infrared surveillance images, (iii) interpolation of endo- and epicardial boundaries in echocardiographic image sequences, and (iv) using shape statistics to test phylogenetic hypotheses. Finally, we present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using the differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate prior shape knowledge in the form of gradient fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition and classification. This Bayesian shape extraction approach is found to yield a significant improvement in the detection of objects in the presence of occlusions or obscurations.
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-3697
- Format
- Thesis
- Title
- SAS Yaw Motion Compensation Using Along-Track Phase Filtering.
- Creator
-
Joshi, Shantanu H., Gross, Frank B., Arora, Krishna R., Roberts, Rodney R., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
In order to image or map targets on the ocean floor, a synthetic aperture sonar platform is moved underwater over the ocean floor. The platform pings, i.e., transmits acoustic signals, which reflect off the target back to the receiver. A target image is generated after applying a focusing or beamforming algorithm to the processed received signal. However, the moving platform, while pinging, undergoes motions such as yaw, sway, and surge, which produce distortions in the final target image. The main objective of this thesis is to geometrically model yaw motion and apply a motion compensation scheme to correct for the image distortion it causes. The compensation scheme uses phase filtering of the received signals to improve target image quality. The results obtained demonstrate the effectiveness of the method in compensating for the target image distortion due to yaw motion.
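The phase-filtering idea can be sketched as multiplying the received samples by the conjugate of the phase error that the measured yaw would induce. The small-angle geometry, carrier frequency, and sound speed below are illustrative assumptions, not the thesis's model:

```python
import numpy as np

def compensate_yaw(rx, f_c, yaw, x, c=1500.0):
    """Remove the two-way phase error caused by platform yaw.

    Hypothetical geometry: a receiver element at along-track offset x
    on a platform yawed by angle yaw (rad) sees an extra round-trip
    path of roughly 2 * x * sin(yaw), i.e. a phase error of
    2*pi*f_c * 2*x*sin(yaw) / c at carrier f_c and sound speed c.
    Multiplying by the conjugate phasor undoes it.
    """
    err = 4.0 * np.pi * f_c * x * np.sin(yaw) / c
    return rx * np.exp(-1j * err)
```

In practice the yaw angle varies ping to ping, so the correction is applied per ping (and per element) before the beamforming/focusing step.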
- Date Issued
- 2002
- Identifier
- FSU_migr_etd-3691
- Format
- Thesis
- Title
- Luminous Intensity Measurements for LED Related Traffic Signals and Signs.
- Creator
-
Jiang, Zhaoning, Zheng, Jim P., Tung, Leonard J., Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The proper intensity and chromaticity of traffic signals and signs play a key role in the safe management of the traffic environment, and the light-emitting diode (LED) has become the most important light-emitting device for traffic signals and signs. This thesis describes an experimental measurement system that measures the luminous intensity of several types of LED-based traffic signals and signs. Although chromaticity measurement is mentioned, the thesis focuses on luminous intensity measurement. While there are many different types of traffic signals, this thesis concentrates on the current measurement procedure for the 12-inch traffic signal and on improving that procedure. The measurement procedure for other types of LED-related signals and future development are also discussed.
- Date Issued
- 2004
- Identifier
- FSU_migr_etd-3516
- Format
- Thesis
- Title
- Improving the Wireless Link Reliability of a Flight Termination System.
- Creator
-
McCabe, Garrett, Brooks, Geoffrey, Harvey, Bruce, Kwan, Bing, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
A proposed alternative wireless communication technique is developed and compared to the current radio frequency command link in a Flight Termination System. The command link in a Flight Termination System requires a reliable signal in a radio frequency environment that is becoming overcrowded and is susceptible to various sources of interference. The proposed wireless command link implements direct-sequence spread spectrum modulation (DS/SS) to provide additional interference rejection. The...
Show moreA proposed alternative wireless communication technique is developed and compared to the current radio frequency command link in a Flight Termination System. The command link in a Flight Termination System requires a reliable signal in a radio frequency environment that is becoming overcrowded and is susceptible to various sources of interference. The proposed wireless command link implements direct-sequence spread spectrum modulation (DS/SS) to provide additional interference rejection. The DS/SS modulation is paired with minimum shift keying (MSK) which is a spectrally efficient constant envelope digital modulation technique. MSK also benefits from bit error rates that are competitive with other common modulation schemes such as binary phase shift keying in an additive white Gaussian noise wireless channel. The proposed MSK-SS modulation scheme is simulated alongside the current digital modulation scheme, continuous phase frequency shift keying (CPFSK), against narrowband, co-channel and multipath sources of interference to measure its effectiveness of rejecting interference. The MSK-SS system is able to provide bit error rates less than ã10ã^(-6) at 11 dB E_b/N_0, while the CPFSK system requires at least 14 dB E_b/N_0. When subjected to narrowband interference, the MSK-SS scheme benefits from 18 dB of interference rejection to CPFSK. Co-channel interference rejection of MSK-SS is observed to be 12 dB greater than CPFSK. The MSK-SS system is also able to suppress the effects of multipath interference while CPFSK error rates are extremely affected both destructively and constructively. The proposed MSK-SS modulation scheme is able to greatly improve error rates while offering interference rejection with a 99% power bandwidth less than 200 KHz and a minimal impact on the overall acquisition time. 
This proposed MSK-SS solution could enhance the reliability of the safety critical wireless communication link for a future generation of Flight Termination Systems.
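The interference-rejection mechanism of the spreading step can be sketched in a few lines. The 32-chip balanced sequence, the constant-offset interferer model, and the amplitudes below are illustrative assumptions, not parameters from the thesis:

```python
import random

random.seed(0)

# Hypothetical 32-chip balanced spreading sequence (not the thesis's actual code).
N_CHIPS = 32
pn = [1] * 16 + [-1] * 16
random.shuffle(pn)

def spread(bits):
    """Replace each +/-1 data bit with the bit times the chip sequence."""
    return [b * c for b in bits for c in pn]

def despread(chips):
    """Correlate each chip block against the PN sequence; decide by sign."""
    decisions = []
    for i in range(0, len(chips), N_CHIPS):
        corr = sum(x * c for x, c in zip(chips[i:i + N_CHIPS], pn))
        decisions.append(1 if corr >= 0 else -1)
    return decisions

bits = [random.choice((-1, 1)) for _ in range(100)]
tx = spread(bits)

# A strong tone at the carrier frequency appears as a constant offset at
# baseband; its amplitude here is 3x the signal's, yet the despreading
# correlator averages it out while the signal correlates coherently.
rx = [x + 3.0 for x in tx]
assert despread(rx) == bits
```

The correlator accumulates the signal over all 32 chips while the interferer's contribution cancels against the balanced sequence, which is the processing-gain effect the abstract attributes to DS/SS.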
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5398
- Format
- Thesis
- Title
- Analysis and Design of Optimally Fault Tolerant Robots.
- Creator
-
Siddiqui, Salman A., Roberts, Rodney G., Moore, Carl A., Foo, Simon Y., Tung, Leonard J., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Robot manipulators can be used to navigate and perform tasks in unstructured and hazardous environments where human safety is a primary concern. For example, they are used for nuclear waste disposal, space exploration, the nuclear power industry, and military surveillance. A number of such robot manipulators are in use, but these robots must be able to complete their critical tasks despite the failures they may encounter while working in such environments. One of the most common failures of field robots is an actuator failure. This type of failure affects the joints of the robot, inducing failures such as locked-joint failures and free-swinging joint failures. To be fault tolerant, a robotic system has to rely on the incorporation of redundancy, which takes several forms: sensor redundancy, analytical redundancy, and kinematic redundancy. This work focuses on using kinematic redundancy to deal with the issue of multiple locked-joint failures in robotic systems. The goal of this work was to analyze and design fault-tolerant manipulators. The robots designed are able to finish their required task in spite of a failure in one or more of their joints. In order to design optimally fault tolerant manipulators, it is necessary to quantify fault tolerance. The approach taken here was to define fault tolerance in terms of a suitable objective function based on the robot's manipulator Jacobian. In the case of the relative manipulability index, local fault tolerance is characterized by the null space of the manipulator Jacobian. Since the null space can be used to identify locally fault tolerant manipulator configurations, one goal of this work was to develop procedures for designing fault tolerant manipulators based on obtaining a suitable null space for the manipulator Jacobian.
In this work, optimally fault tolerant serial manipulators are designed that are tolerant to two simultaneous locked-joint failures. Furthermore, the symmetry of the manipulators is studied using positional and orientational Jacobians, and examples are presented for the condition number and the dynamic manipulability index to study the behavior of different fault tolerance measures. Lastly, a methodology for designing an optimally fault tolerant 4-DOF spherical wrist type mechanism is presented. It is shown that the orientational Jacobian must have a certain form for the manipulator to have the best possible relative manipulability index value. An optimal configuration along with the corresponding DH parameters is presented. Furthermore, it is pointed out that isotropic configurations of a 4-DOF spherical wrist type mechanism are fault tolerant and optimal in the sense that they have the largest possible manipulability index prior to a failure. An example of an orientational Jacobian is presented for a 6-DOF spherical wrist that is equally fault tolerant for any two joint failures.
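The relative manipulability index for a locked-joint failure can be illustrated on a planar 3R arm. The link lengths and joint angles below are arbitrary choices for illustration, not a design from this work; the final assertion is the Cauchy-Binet identity that makes the index well behaved:

```python
from math import sin, cos, sqrt

# Illustrative planar 3R arm with unit links and an arbitrary configuration.
l1 = l2 = l3 = 1.0
t1, t2, t3 = 0.3, 0.7, -0.4  # joint angles in radians

s1, c1 = sin(t1), cos(t1)
s12, c12 = sin(t1 + t2), cos(t1 + t2)
s123, c123 = sin(t1 + t2 + t3), cos(t1 + t2 + t3)

# 2x3 positional Jacobian: column i is joint i's contribution to tip velocity.
J = [
    [-l1*s1 - l2*s12 - l3*s123, -l2*s12 - l3*s123, -l3*s123],
    [ l1*c1 + l2*c12 + l3*c123,  l2*c12 + l3*c123,  l3*c123],
]

def manipulability(J):
    """sqrt(det(J J^T)) for a 2xN Jacobian."""
    a = sum(x * x for x in J[0])
    b = sum(x * y for x, y in zip(J[0], J[1]))
    d = sum(y * y for y in J[1])
    return sqrt(a * d - b * b)

m = manipulability(J)

# Locking joint i deletes column i; the determinant of the reduced 2x2
# Jacobian is the post-failure manipulability m_i, and r_i = m_i / m.
r = []
for i in range(3):
    cols = [j for j in range(3) if j != i]
    det = J[0][cols[0]] * J[1][cols[1]] - J[0][cols[1]] * J[1][cols[0]]
    r.append(abs(det) / m)

# Cauchy-Binet: det(J J^T) is the sum of the squared 2x2 minors, so the
# squared relative manipulability indices always sum to one.
assert abs(sum(x * x for x in r) - 1.0) < 1e-9
```

A configuration in which the r_i are equal is equally prepared for any single locked-joint failure, which is the intuition behind optimizing these indices.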
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5433
- Format
- Thesis
- Title
- Case Study of Islanded Microgrid Control.
- Creator
-
Mukherjee, Abhisek, Harvey, Bruce, Edrington, Chris, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
This paper investigates the stability issues in an islanded microgrid. A microgrid, once disconnected from the main grid, has to depend entirely on its Distributed Generators (DGs), which are mostly intermittent renewable sources (e.g., PV, wind turbines). This makes it necessary to achieve proper sharing of power, as it is not possible to supply the entire microgrid from a single source. Frequency and angle droop, along with supplementary and adaptive control methods, are analyzed and compared to identify the better method for accurate load sharing. However, the conventional droop methods, which are designed for inductive microgrids, allow an error in reactive power sharing when applied in a resistive microgrid. Therefore, a secondary control is proposed for improving the accuracy of reactive power sharing. The droop method alone is not enough in situations of severe power outages, such as the loss of a DG unit. Use of an Energy Storage System (e.g., a battery) is proposed to serve both as a storage unit for the intermittent sources and to prevent voltage collapse by supplying the required voltage to the load bus. In addition, an advanced load shedding scheme is proposed to sustain the important loads in times of extreme power crisis. Voltage unbalance caused by harmonic distortion due to the presence of unbalanced/non-linear loads may result in voltage collapse. A selective harmonic compensation method along with a local droop controller illustrates an effective way of restoring voltage balance, even with harmonically polluted loads connected to the network. In addition, the role of a programmable resistance with shunt harmonic impedance (PR-SHI) in harmonic compensation is investigated. This method is shown to reduce the harmonic current and achieve accurate sharing of the harmonic compensation effort among the DG units.
Lastly, a scenario of excess generation, very uncommon in the conventional grid, is discussed. Charging the battery unit and generating heat energy using smart loads is proposed as the most effective way of utilizing the excess generated power. This thesis brings together different control techniques used for the stability of a microgrid, analyzing and comparing them in order to find the best fit for each of the possible cases in an islanded microgrid. Finally, it recommends solutions for each case.
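The load-sharing arithmetic behind frequency droop can be worked in a few lines. The nominal frequency, droop gains, and load below are illustrative numbers, not values from this study:

```python
# Hypothetical P-f droop: each DG obeys f = f_nom - m_i * P_i.
f_nom = 60.0            # Hz nominal frequency
droop = [0.002, 0.001]  # Hz/kW droop gains for two DG units (illustrative)
p_load = 90.0           # kW total islanded load

# At steady state every unit settles on one common frequency f, so
# P_i = (f_nom - f) / m_i, and the power balance sum(P_i) = P_load fixes f:
f = f_nom - p_load / sum(1.0 / m for m in droop)
p = [(f_nom - f) / m for m in droop]

assert abs(sum(p) - p_load) < 1e-9
# Sharing is inversely proportional to the droop gain: the stiffer unit
# (smaller m) picks up proportionally more of the load.
assert abs(p[0] / p[1] - droop[1] / droop[0]) < 1e-9
```

The frequency deviation f_nom - f is what a secondary (supplementary) controller then trims back to zero without disturbing the sharing ratio.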
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5412
- Format
- Thesis
- Title
- Optimization of Microstructure of Buckypaper-Based Proton Exchange Membrane Fuel Cell by Using Electrochemical Impedance Spectroscopy.
- Creator
-
Hagen, Mark, Zheng, Jim P., Zhu, Wei, Andrei, Petru, Foo, Simon, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The microstructure of the catalyst layer in proton exchange membrane fuel cells (PEMFCs) greatly influences the utilization of the catalyst (Pt) and the overall cell performance. Research focused on two key components in optimizing the microstructure of PEMFCs: the platinum loading and the Nafion loading. An electrochemical impedance spectroscopy (EIS) experiment was performed in order to study the effects of varying these two components and determine the optimal amounts required to maximize the cell performance of PEMFCs that are based on a double-layered buckypaper supported Pt (Pt/DLBP) catalyst [1].
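How EIS separates resistive contributions can be sketched with a simplified Randles cell; the component values below are made-up illustrations, not fitted parameters from this experiment:

```python
import cmath
from math import pi

# Simplified Randles equivalent circuit (illustrative values only):
R_s = 0.05   # ohm, series (membrane/contact) resistance
R_ct = 0.50  # ohm, charge-transfer resistance of the catalyst layer
C_dl = 0.02  # F, double-layer capacitance

def z(freq_hz):
    """Impedance of R_s in series with the parallel pair (R_ct || C_dl)."""
    jw = 1j * 2 * pi * freq_hz
    return R_s + 1.0 / (jw * C_dl + 1.0 / R_ct)

# On a Nyquist plot the high-frequency intercept reads off R_s and the
# low-frequency intercept reads off R_s + R_ct, which is how a sweep over
# frequency isolates the two loss mechanisms.
assert abs(z(1e6).real - R_s) < 1e-3
assert abs(z(1e-3).real - (R_s + R_ct)) < 1e-3
```

Varying Pt or Nafion loading shifts fitted parameters like R_ct, which is the sense in which EIS guides the microstructure optimization.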
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5358
- Format
- Thesis
- Title
- A Study of Despread-Respread Multitarget Adaptive Algorithms in an AWGN Channel.
- Creator
-
Connor, Jeffrey D., Gross, Frank B., Foo, Simon, Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Typical adaptive algorithms attempt to exploit some characteristic of a desired mobile user's signal incident upon an array of antenna elements to form a blind estimate of the user's signal, wherein this estimate is used to update the weights applied to each element of the array in order to perform beamsteering. Generally, when mobile users operate in a CDMA mobile environment, two particular characteristics are exploited: 1.) Minimizing the Mean Square Error (MSE) between the array output and the blind estimate of the desired user. 2.) Restoring the constant modulus to the output of the adaptive array corrupted by noise in the channel. These typical adaptive algorithms do not utilize knowledge of the spreading sequences used in a CDMA system, which separate users occupying the same frequency and time channels. However, this knowledge is exploited by Despread-Respread Multitarget Arrays (DRMTA). The four DRMTA algorithms which currently exist are: 1.) Least Squares Despread-Respread Multitarget Constant Modulus Array (LS-DRMTCMA) 2.) Least Squares Despread-Respread Multitarget Array (LS-DRMTA) 3.) Block Based RLS Despread-Respread Multitarget Array (BRLS-DRMTA) 4.) Despread-Respread Kalman Predictor Multitarget Array (DR-KPMTA) The objective of this thesis is to develop a comparison between these four algorithms for a stationary, additive white Gaussian noise (AWGN) channel in a CDMA mobile environment using MATLAB computer simulations for the following metrics: 1.) Analyzing Array Factor Patterns (Beampatterns) 2.) Signal-to-Interference-plus-Noise Ratio (SINR) 3.) Convergence Degree of Weight (CDW) 4.) Bit Error Rate (BER) These comparisons are performed for several different scenarios: - Highly corruptive AWGN channel. - Low SINR environment. - Response to poor initial conditions. - Measuring convergence characteristics. - Number of users greater than or equal to number of elements in array.
- Response to a sudden increase in total number of users in environment - Reduced orthogonality of spreading sequences. - Minimizing MSE by maximizing CDW.
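The weight-update idea shared by these algorithms can be sketched with a plain LMS loop. Here a known training symbol stands in for the despread-respread estimate, and the array geometry, angles, step size, and signal powers are illustrative assumptions rather than the thesis's simulation setup:

```python
import cmath
import random
from math import pi, sin

random.seed(7)
M = 4  # half-wavelength-spaced uniform linear array (illustrative)

def steering(theta):
    """Array response vector for a plane wave from angle theta (radians)."""
    return [cmath.exp(-1j * pi * m * sin(theta)) for m in range(M)]

a_d = steering(0.0)  # desired user at broadside
a_i = steering(0.7)  # co-channel interferer ~40 degrees off broadside
w = [0j] * M
mu = 0.005           # LMS step size

for _ in range(5000):
    s = random.choice((-1.0, 1.0))        # desired BPSK symbol
    u = 2.0 * random.choice((-1.0, 1.0))  # stronger co-channel interferer
    x = [s * ad + u * ai + random.gauss(0, 0.05)
         for ad, ai in zip(a_d, a_i)]
    y = sum(wk.conjugate() * xk for wk, xk in zip(w, x))
    e = s - y  # a DRMTA would form this reference by despread-respread
    w = [wk + mu * xk * e.conjugate() for wk, xk in zip(w, x)]

def gain(a):
    """Magnitude of the array response toward steering vector a."""
    return abs(sum(wk.conjugate() * ak for wk, ak in zip(w, a)))

# The trained weights steer toward the desired user and null the interferer.
assert gain(a_i) < 0.3 * gain(a_d)
```

The DRMTA family replaces the training reference with a despread-then-respread regenerated signal, which is what lets each user's spreading code steer its own beam blindly.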
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-3454
- Format
- Thesis
- Title
- Antenna Array Synthesis Using the Cross Entropy Method.
- Creator
-
Connor, Jeffrey D. (Jeffrey David), Foo, Simon Y., Weatherspoon, Mark H., Chan-Hilton, Amy, Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
This dissertation addresses the synthesis of antenna arrays using the Cross-Entropy (CE) method, marking the first application of the CE method to solving electromagnetic optimization problems. The CE method is a general stochastic optimization technique for solving both continuous and discrete multi-extremal, multi-objective optimization problems. It is an adaptive importance sampling technique derived from an associated stochastic problem (ASP) for estimating the probability of a rare-event occurrence. The estimation of this probability is determined using a log-likelihood estimator governed by a parameterized probability distribution. The CE method adaptively estimates the parameters of the probability distribution to produce a random variable solution in the neighborhood of the globally best outcome by minimizing cross entropy. In this work, single- and multi-objective optimization using both continuous and combinatorial forms of the CE method is performed to shape the sidelobe power, mainlobe beamwidth, null depths and locations, as well as the number of active elements of linear array antennas by controlling the spacings and complex array excitations of each element in the array. Specifically, aperiodic arrays are designed through both non-uniform element spacings and thinning of active array elements, while phased array antennas are designed by controlling the complex excitation applied to each element of the array. The performance of the CE method is demonstrated by considering different scenarios adopted from the literature addressing more popular stochastic optimization techniques such as the Genetic Algorithm (GA) or Particle Swarm Optimization. The primary technical contributions of this dissertation are the simulation results computed using the Cross Entropy method for the different scenarios adopted from the literature.
Cursory comparisons are made to the results from the literature, but the overall goal of this work is to expose the tendencies of the Cross Entropy method for array synthesis problems and help the reader make an educated decision when considering the Cross Entropy method for their own problems. Overall, the CE method is a competitive alternative to these more popular techniques, possessing attractive convergence properties but requiring larger population sizes.
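The continuous CE iteration (sample, rank, refit the sampling distribution to the elite set) can be sketched on a toy cost; the quadratic objective, population sizes, and iteration count below are textbook illustrations, with an array-synthesis cost substituted in practice:

```python
import random
from math import sqrt

random.seed(0)

def cost(x):
    """Toy objective, minimized at x = (3, 3, ..., 3); an array-synthesis
    cost (e.g., peak sidelobe level) would replace this."""
    return sum((xi - 3.0) ** 2 for xi in x)

dim, n_samples, n_elite = 5, 100, 10
mean = [0.0] * dim   # parameters of the Gaussian sampling distribution
std = [5.0] * dim

for _ in range(50):
    pop = [[random.gauss(m, s) for m, s in zip(mean, std)]
           for _ in range(n_samples)]
    pop.sort(key=cost)
    elite = pop[:n_elite]
    # Refit the sampling distribution to the elite samples: this is the
    # cross-entropy (maximum likelihood) update for a Gaussian family.
    mean = [sum(e[j] for e in elite) / n_elite for j in range(dim)]
    std = [sqrt(sum((e[j] - mean[j]) ** 2 for e in elite) / n_elite) + 1e-12
           for j in range(dim)]

assert cost(mean) < 0.01  # the distribution has collapsed near the optimum
```

The combinatorial form used for array thinning replaces the Gaussian with independent Bernoulli element-on/off probabilities, updated from the elite set in the same way.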
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-3444
- Format
- Thesis
- Title
- Gallium Arsenide Mesfet Small-Signal Modeling Using Backpropagation & RBF Neural Networks.
- Creator
-
Langoni, Diego, Weatherspoon, Mark H., Meyer-Bäse, Anke, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The small-signal intrinsic ECPs (equivalent circuit parameters) of a 4x50 µm gate width, 0.25 µm gate length GaAs (gallium arsenide) MESFET (metal semiconductor field-effect transistor) were modeled versus bias (voltage and current) and temperature using backpropagation and RBF (radial basis function) ANNs (artificial neural networks). The resulting ANNs consisted of 3-input, 8-output models of the MESFET ECPs and were compared to each other in terms of memory usage, convergence speed, and accuracy. Also, each network's performance was evaluated under "normal" training conditions (75% training data with a uniform distribution) and "stressed" training conditions (50% and 25% training data with a uniform distribution; 75%, 50%, and 25% training data with a skewed distribution). The results showed that the RBF network achieved much better overall convergence speed as well as better accuracy under both "normal" and "moderately stressed" training conditions. However, the backpropagation network yielded better accuracy for the "extremely stressed" training conditions and better overall memory usage.
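The RBF idea (a weighted sum of Gaussian bumps whose output weights come from a linear solve, which is why training converges quickly) can be sketched on a 1-D curve; the sine target, kernel width, and grid are illustrative stand-ins for the 3-input, 8-output ECP models:

```python
from math import exp, sin

# Gaussian RBF interpolation of a smooth 1-D curve, with one center per
# training point (an illustrative stand-in for the ECP-vs-bias surfaces).
xs = [i * 0.4 for i in range(8)]  # training inputs on [0, 2.8]
ys = [sin(x) for x in xs]         # training targets
width = 1.0                       # kernel width (assumed, not fitted)

def kernel(a, b):
    return exp(-((a - b) / width) ** 2)

# Training is a single linear solve K w = y (K symmetric positive definite);
# here solved by Gaussian elimination with partial pivoting.
n = len(xs)
K = [[kernel(xi, xj) for xj in xs] + [yi] for xi, yi in zip(xs, ys)]
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(K[r][col]))
    K[col], K[piv] = K[piv], K[col]
    for r in range(col + 1, n):
        f = K[r][col] / K[col][col]
        K[r] = [a - f * b for a, b in zip(K[r], K[col])]
w = [0.0] * n
for r in range(n - 1, -1, -1):
    w[r] = (K[r][n] - sum(K[r][c] * w[c] for c in range(r + 1, n))) / K[r][r]

def rbf(x):
    """Network output: weighted sum of Gaussian basis functions."""
    return sum(wi * kernel(x, xi) for wi, xi in zip(w, xs))

assert all(abs(rbf(x) - sin(x)) < 1e-6 for x in xs)  # exact at the centers
assert abs(rbf(1.0) - sin(1.0)) < 0.05               # smooth in between
```

Contrast with backpropagation, where every weight is found by iterative gradient descent; the one-shot linear solve is the source of the RBF network's convergence-speed advantage reported above, at the cost of storing one center per basis function (the memory trade-off).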
- Date Issued
- 2005
- Identifier
- FSU_migr_etd-3286
- Format
- Thesis
- Title
- Testing of Actuated Signal Controllers for NTCIP Compliance.
- Creator
-
Vollmer, Derek James, Tung, Leonard J., Harvey, Bruce, Zheng, Jianping, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
From an agency's perspective, having an Actuated Signal Controller that is interchangeable with a controller from a different manufacturer provides the agency with more options and avoids locking the agency in to one specific manufacturer. The National Transportation Communication for Intelligent Transportation Systems Protocol (NTCIP) was created to provide a communication standard that, when implemented correctly, should provide the interchangeability the agency desires. Unfortunately, the standard for Actuated Signal Controllers is still immature, and not much testing has been done to verify that manufacturers are compliant with the standards. To remedy this, an agency must first determine which aspects of the standards it should require for the device. Once this is done, a testing tool must be selected, and test procedures and scripts can be developed to verify whether the devices comply with the agency's requirements. The initial testing of NTCIP compliance for Actuated Signal Controllers is discussed in detail.
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-4546
- Format
- Thesis
- Title
- Security in Wireless Communications and Wireless Local Area Networks.
- Creator
-
Billapati, Venkata R., Foo, Simon, Arora, Krishna, Harvey, Bruce, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Wireless technology has been advancing at a rapid pace. There is an urgent need to secure wireless local area networks from hackers. The major goals of this thesis are to provide a survey of state-of-the-art wireless networking and to investigate ways to secure wireless local area networks. This thesis begins with a review of important wireless technologies and information networks that are being used around the world, ranging from 3G and 4G to wireless local area networks. Specific topics include the history of wireless communications, the status of wireless technologies in the second and third generations, multiple access techniques such as CDMA, TDMA, and FDMA, modulation techniques, and the switching and routing techniques involved, with a primary focus on the security of wireless systems. Various types of security protocols are analyzed and their advantages and disadvantages discussed.
- Date Issued
- 2004
- Identifier
- FSU_migr_etd-4538
- Format
- Thesis
- Title
- Improved NMR Field Mapping in Powered Magnets.
- Creator
-
Vemaraju, Satish, Andrei, Petru, Brey, William W., Arora, Rajendra K., Zheng, Jim P., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Nuclear Magnetic Resonance (NMR) field mapping is a highly accurate tool that can be used to determine the homogeneity of the magnetic field of a strong laboratory magnet. Typically, an NMR signal is observed from a small experimental sample as it traces a helical path inside the magnet with the help of a stepper motor. However, high field powered magnets, such as those used at the National High Magnetic Field Laboratory, are subject to noise from power supplies and cooling water. To reduce the effect of this noise, an NMR reference probe that remains at a fixed location is added, allowing temporal fluctuations to be removed from the field map. The NMR mapping system is interfaced to a spectrometer having dual digital receivers, and the tests are carried out on a 7.1 Tesla superconducting magnet. This research is an important step toward effectively utilizing powered magnets for high resolution NMR experiments.
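The correction arithmetic enabled by the fixed reference probe can be sketched directly; the field profile, drift waveform, and magnitudes below are made-up illustrations, not measured data:

```python
from math import cos, pi

# The mapping probe sees the spatial field plus a temporal fluctuation;
# the fixed reference probe sees the same fluctuation, so subtracting the
# reference's deviation from its baseline recovers the spatial map.
true_field = [7.1 + 1e-6 * z**2 for z in range(-5, 6)]      # tesla vs position
drift = [5e-6 * cos(2 * pi * t / 4.0) for t in range(11)]   # supply ripple

ref_baseline = 7.1  # reference-probe reading with no fluctuation (assumed)
mapping_reads = [b + d for b, d in zip(true_field, drift)]
reference_reads = [ref_baseline + d for d in drift]

corrected = [m - (r - ref_baseline)
             for m, r in zip(mapping_reads, reference_reads)]

assert all(abs(c - t) < 1e-12 for c, t in zip(corrected, true_field))
```

The dual digital receivers matter because both probes must be sampled simultaneously for the fluctuation terms to cancel.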
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-4575
- Format
- Thesis
- Title
- Optimal Space Time Trellis Code Design for Fast Fading Channels.
- Creator
-
Liu, Xiaoyu, Kwan, Bing W., Magnan, Jerry F., Zheng, Jim P., Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The demand for high-speed mobile wireless communications is rapidly growing. MIMO technology promises to be a key technique for achieving the high data capacity and spectral efficiency required of wireless communication systems in the near future. This dissertation presents an overview of Multiple Input Multiple Output (MIMO) systems. A simple mathematical model is used to describe MIMO systems, the systems' capacity improvement is derived, and the various benefits achievable by MIMO systems are discussed. The history of and open problems in this field are presented as well. An investigation of Space Time Trellis Code (STTC) design criteria is presented. There are four sets of design criteria, which apply to different channel models: Criteria Sets I and II apply to slow fading channels, and Criteria Sets III and IV apply to fast fading channels. If a code has a small diversity order, Criteria Sets I and III apply; in the case of a large diversity order, Criteria Sets II and IV apply. Based on these design criteria, this dissertation develops a construction method for optimal STTC codes for fast fading channels. When the diversity order is large, the trace criterion is applicable. The set partitioning idea is extended to symbol group partitioning, and a set of design rules is developed to construct optimal STTCs. The optimality is proved as well. This dissertation also provides simulation results comparing the optimal STTCs with other previously reported STTCs. The simulation results showed that the optimal STTCs outperformed all the other codes. At the same time, using the simulation results, the effects of the number of transmitting antennas and the number of receiving antennas on the performance of STTCs are also studied. Finally, open problems for future research are discussed.
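The pairwise computations behind the rank and trace criteria can be sketched for one codeword pair; the QPSK codewords below are an illustrative pair, not a code from the dissertation:

```python
# Rank and trace checks for one pair of space-time codewords
# (2 transmit antennas over 3 symbol periods, QPSK symbols; illustrative).
c1 = [[1, 1j, -1], [1, -1, 1j]]
c2 = [[1, -1j, -1], [-1, -1, 1j]]

# Codeword difference matrix B and the code distance matrix A = B * B^H.
B = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(c1, c2)]
A = [[sum(x * y.conjugate() for x, y in zip(ri, rj)) for rj in B] for ri in B]

# Trace criterion (large diversity order): maximize tr(A) over all pairs;
# tr(A) equals the squared Frobenius norm of the difference matrix.
trace = sum(A[i][i].real for i in range(2))
frob2 = sum(abs(x) ** 2 for row in B for x in row)
assert abs(trace - frob2) < 1e-12

# Rank (determinant) criterion: a full-rank A for every pair gives full
# transmit diversity; here det(A) > 0, so this pair has rank 2.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert abs(det.imag) < 1e-12 and det.real > 0
```

A code search applies these checks to every codeword pair and ranks candidate codes by the worst-case (minimum) trace or determinant, which is the quantity the construction rules above are designed to maximize.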
- Date Issued
- 2007
- Identifier
- FSU_migr_etd-4528
- Format
- Thesis
- Title
- Development and Implementation of a 25 kVA Phasor-Based Virtual Machine.
- Creator
-
Fleming, Fletcher, Edrington, Chris S., Steurer, Mischa, DeBrunner, Linda, Weatherspoon, Mark H., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
A motor drive system should be reliable, efficient, and robust under numerous applications, loads, and control schemes. To ensure such characteristics and fully test the range of operation for a motor drive, manufacturers and developers traditionally must have a wide assortment of test bed equipment to recreate various machine load combinations. During motor drive development stages, expensive hardware is exposed to faulting and instability risks prior to completely debugging the product. The associated risks and expenditures increase with the power level. Therefore, this thesis provides a motor drive testing method, deemed the "virtual machine" (VM), that removes a great deal of the risk and cost associated with motor drive development and validation. The technique used to accomplish the VM exploits the Power Hardware in the Loop (PHIL) concept to replace equipment; in particular, a voltage amplifier is used to recreate the terminal characteristics of the various machine loading scenarios that a motor drive is conventionally tested against. A unique transformer coupling network is proposed between the amplifier and motor drive to provide decoupling and properly step voltages. By implementing the VM concept on such a transformer-coupled PHIL test bed, potential pitfalls and non-linearities due to the transformer can be assessed, thus providing the field with a new PHIL filter structure and de-risking the design prior to any future increases in power level. When validated, the VM shows matching, consistent results compared against both simulations and a physical induction machine (IM) energized via the same motor drive. Note that multiple counter-torque loads were provided via a DC machine. Although the proposed amplifier control method is based on a steady state phasor system, it also proves adequate for recreating the transient terminal characteristics of an IM across a line start.
The conclusion demonstrates that the VM concept is a viable solution for removing cost and risk in drive development and verifies the PHIL transformer coupling network. The concept of controlling active and reactive power flow between parallel-connected LCL-coupled converters is established and then applied as a PHIL technique, opening the field to use this approach for evaluating more complex, simulated systems. The limitations of the proposed method are discussed, as well as future work areas to address such constraints and improve the fidelity of the VM. Finally, after addressing these limitations, a future direction of increasing the VM power level is established and some derivatives of the PHIL load emulation concept are given.
- Date Issued
- 2010
- Identifier
- FSU_migr_etd-4438
- Format
- Thesis
- Title
- Securing SDLC & NTCIP Type Messages for Actuated Signal Controllers.
- Creator
-
Walton, Timothy L., Yu, Ming, Tung, Leonard, Harvey, Bruce A., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
This thesis presents how vulnerable our traffic intersections currently are, the technical details on areas of interest for automated traffic control systems, and a practical strategy for securing such systems. Traffic intersections are plentiful, while the efforts to secure them are not. This work shows that, without any interference from authorities, someone with basic knowledge of Department of Transportation systems could easily take control of an intersection for a significant period of time. The work proposes a practical strategy to encrypt two of the most commonly used messaging protocols controlling our intersections today: SDLC broadband messaging and NTCIP type configurations. The analysis and simulation results demonstrate the ability to apply some well-known encryption algorithms to each method while maintaining real-time functionality.
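The general shape of protecting such control messages can be sketched with standard-library message authentication; this HMAC example is an illustration of integrity protection, not the specific encryption algorithms evaluated in the thesis, and the key and message format are made up:

```python
import hashlib
import hmac

# Hypothetical agency-provisioned shared secret (illustration only).
key = b"agency-provisioned-secret"

def protect(msg: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so tampering in transit is detectable."""
    return msg + hmac.new(key, msg, hashlib.sha256).digest()

def verify(frame: bytes) -> bytes:
    """Strip and check the 32-byte tag; reject any modified frame."""
    msg, tag = frame[:-32], frame[-32:]
    expected = hmac.new(key, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message rejected: bad authentication tag")
    return msg

frame = protect(b"phase=2;state=GREEN;duration=30")
assert verify(frame) == b"phase=2;state=GREEN;duration=30"

tampered = frame[:6] + b"9" + frame[7:]  # attacker flips one byte
try:
    verify(tampered)
    raise AssertionError("tampering was not detected")
except ValueError:
    pass
```

The per-message HMAC cost is microseconds on controller-class hardware, which is consistent with the real-time-functionality requirement stated above; confidentiality would additionally require an encryption layer such as the algorithms the thesis examines.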
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-5255
- Format
- Thesis
- Title
- Real-Time High Speed Generator System Emulation with Hardware-in the-Loop Application.
- Creator
-
Stroupe, Nicholas, Edrington, Chris S., Foo, Simon Y., Li, Helen, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The emerging emphasis on and benefits of distributed generation on smaller scale networks have prompted much attention and focus to research in this field. Much of the research growth in distributed generation has also stimulated the development of simulation software and techniques. Testing and verification of these distributed power networks is a complex task, and real hardware testing is often desired. This is where simulation methods such as hardware-in-the-loop become important, in which an actual hardware unit can be interfaced with a software-simulated environment to verify proper functionality. In this thesis, this simulation technique is taken one step further by utilizing a hardware-in-the-loop technique to emulate the output voltage of a generator system interfaced to a scaled hardware distributed power system for testing. The purpose of this thesis is to demonstrate a new method of testing a virtually simulated generation system supplying a scaled distributed power system in hardware. This task is performed by using the Non-Linear Loads Test Bed developed by the Energy Conversion and Integration Thrust at the Center for Advanced Power Systems. This test bed consists of a series of real hardware converters consistent with the Navy's proposed All-Electric-Ship power system, used to perform various tests on controls and stability under the expected non-linear load environment of Navy weaponry. The test bed can also explore other distributed power system research topics and serves as a flexible hardware unit for a variety of tests. Here it is utilized to perform and validate this newly developed method of generator system emulation, in which the dynamics of a high speed permanent magnet generator directly coupled with a micro turbine are virtually simulated on an FPGA in real-time.
The calculated output stator voltage will then serve as a reference for a controllable three phase inverter at the input of the test bed that will emulate and reproduce these voltages on real hardware. The output of the inverter is then connected with the rest of the test bed and can consist of a variety of distributed system topologies for many testing scenarios. The idea is that the distributed power system under test in hardware can also integrate real generator system dynamics without physically involving an actual generator system. The benefits of successful generator system emulation are vast and lead to much more detailed system studies without the draw backs of needing physical generator units. Some of these advantages are safety, reduced costs, and the ability of scaling while still preserving the appropriate system dynamics. This thesis will introduce the ideas behind generator emulation and explain the process and necessary steps to obtaining such an objective. It will also demonstrate real results and verification of numerical values in real-time. The final goal of this thesis is to introduce this new idea and show that it is in fact obtainable and can prove to be a highly useful tool in the simulation and verification of distributed power systems.
Show less - Date Issued
- 2012
- Identifier
- FSU_migr_etd-5208
- Format
- Thesis
- Title
- Usability of Command Strategies for the Phantom Omni/Schunk 5DOF Teleoperation Setup.
- Creator
-
Peters, Brandon Allen Charles, Moore, Carl A., Roberts, Rodney G., Weatherspoon, Mark H., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Teleoperation is the control, operation, and manipulation of a robot or other device over a distance. The way in which the master controller dictates how the slave moves in its workspace is called a command strategy. Here, two command strategies are implemented for a teleoperation setup consisting of a Phantom Omni as the master controller and a 5-DOF serial manipulator as the remote slave. The forward and inverse kinematics, along with the Jacobian and inverse Jacobian, are shown for the master controller. The forward and inverse kinematics and the Jacobian are developed for the serial manipulator serving as the slave. An overview of the control program and its various components is given. The position-position and position-speed command strategies, and how they are implemented for the teleoperation setup, are detailed. The smoothing of input values and its importance are discussed and shown in detail. The need for self-collision avoidance, and for ensuring the remote arm does not strike the table top, is discussed, along with the algorithms that govern each type of collision avoidance. The experimental setup and results are presented next, and the results are discussed. Finally, this work concludes with a discussion of future work and considerations.
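As an illustration of the kinematic machinery such a command strategy relies on, the sketch below maps a master command to slave joint velocities through an inverse Jacobian. It uses a planar 2-link arm as a hypothetical stand-in (link lengths `l1`, `l2`, and the scalar `gain` are assumptions for illustration, not values from the thesis; the actual setup pairs a Phantom Omni with a 5-DOF manipulator).

```python
import numpy as np

def jacobian_2link(q, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar 2-link arm (illustrative stand-in)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def position_speed_step(q_slave, x_master, gain, dt, l1=1.0, l2=1.0):
    """Position-speed idea: master displacement commands slave tip velocity."""
    J = jacobian_2link(q_slave, l1, l2)
    qdot = np.linalg.solve(J, gain * np.asarray(x_master))  # J^-1 * tip velocity
    return q_slave + qdot * dt
```

Integrating `position_speed_step` each control cycle moves the slave's tip at a velocity proportional to the master's displacement, which is the essence of a position-speed command strategy.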
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5109
- Format
- Thesis
- Title
- Multi-Rate Co-Simulation Interfaces Between the RTDS and the Opal-RT.
- Creator
-
Rentachintala, Kavya S., Edrington, Chris S., DeBrunner, Linda S., Steurer, Michael, Foo, Simon Y., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The aim of this thesis is to design efficient and accurate co-simulation interfaces between off-the-shelf heterogeneous real-time simulators. Co-simulation in this context refers to the distribution of subsystems of a complex power system among different simulator platforms. The design of an efficient communication interface is the key step of the co-simulation process. The Internet is a popular option for coupling different real-time simulator platforms, but factors such as delay, limited throughput, and LAN congestion are disadvantages of Internet-based communication. Therefore, co-simulation interfaces that overcome data loss during communication and increase the computational efficiency of the simulators need to be developed. In this thesis, the two real-time electromagnetic transient simulators used in the co-simulation are the Real-Time Digital Simulator (RTDS) and the Opal-Real-Time (Opal-RT). The work establishes analog and digital multi-rate co-simulation interfaces between these simulator platforms. The analog interface is designed using the peripheral I/O ports of the simulators and is demonstrated with the notional IPS MVDC ship model. The digital interface is established using the FPGA-based reconfigurable I/O platforms of the simulators. To further analyze the co-simulation interfaces, round-trip latency and accuracy tests are conducted, and the stability of the co-simulation is investigated. From these tests, it is observed that the latency of the analog interface decreases to 480 μs, compared with the 0.2 s observed when the Internet is used as the communication medium. The results from the 11-bit parallel digital interface indicate that the design is feasible; since it uses FPGAs for the data exchange, it avoids the noise introduced by traditional A/D and D/A converters. In summary, this thesis presents a low-latency analog interface and a high-speed FPGA-based digital interface for the real-time simulators. The importance of the co-simulation interfaces to multi-rate co-simulation techniques is studied and analyzed.
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5134
- Format
- Thesis
- Title
- Real-Time Hardware Design for Improving Laser Detection and Ranging Accuracy.
- Creator
-
Brown, Jarrod, DeBrunner, Linda, Hughes, Clay, Brooks, Geoffrey, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Digital signal processing (DSP) algorithms for estimating target range and backscatter intensity in sampled laser detection and ranging (LADAR) systems are limited by the sampling rate of the collected data and by computation-time requirements. An interpolating matched-filter DSP algorithm is presented to improve range accuracy while maintaining a relatively low sampling rate. The algorithm interpolates the sampled data and applies a matched filter with a high-resolution reference waveform to recover super-sample positions of the transmitted and backscattered pulses. A custom computer architecture utilizing parallel processing is designed and synthesized on a field programmable gate array (FPGA) to run the DSP algorithm in real time. Research and simulation results comparing the effectiveness of different sampling rates, reference waveform models, and interpolation factors used to determine target range from LADAR data are presented. The FPGA hardware design was realized and tested with a LADAR system. A matched filter with zero-padding interpolation, using a Gaussian-shaped reference waveform and an interpolation factor of 32, showed an 87% improvement in range accuracy over the peak-detector design currently used in real-time LADAR systems.
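The signal chain described above can be sketched in a few lines: interpolate the sampled return by zero-padding its spectrum, then correlate against a high-resolution reference waveform to locate the pulse at super-sample resolution. This is a minimal NumPy illustration of the general technique, not the thesis's FPGA implementation; the sampling rate `fs` and the speed of light `c` are placeholder parameters.

```python
import numpy as np

def zero_pad_interpolate(x, factor):
    """Interpolate a real signal by zero-padding its one-sided spectrum."""
    n = len(x)
    X = np.fft.rfft(x)
    pad = n * factor // 2 + 1 - len(X)            # extra bins for the longer signal
    X_up = np.concatenate([X, np.zeros(pad)])
    return np.fft.irfft(X_up, n=n * factor) * factor  # rescale to keep amplitude

def matched_filter_range(samples, reference, factor, fs, c=3.0e8):
    """Locate a pulse at super-sample resolution and convert delay to range."""
    up = zero_pad_interpolate(samples, factor)
    corr = np.correlate(up, reference, mode="full")
    shift = np.argmax(corr) - (len(reference) - 1)  # high-resolution sample shift
    delay = shift / (fs * factor)                   # round-trip time in seconds
    return c * delay / 2.0                          # target range in metres
```

The `argmax` of the correlation lands on the high-resolution grid, so the delay estimate is refined by the interpolation factor relative to plain peak detection on the raw samples.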
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4743
- Format
- Thesis
- Title
- Synthesizable SystemC to VHDL Compiler Design.
- Creator
-
Chen, Rui, Meyer-Baese, Uwe, Foo, Simon, Yu, Ming, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
An efficient hardware/software codesign framework greatly facilitates not only design but also verification early in the embedded system design cycle. With such electronic design automation (EDA) tools, hardware can be modeled concisely at a higher abstraction level than with the more traditional hardware description languages. SystemC is such a hardware/software codesign language, allowing hardware models to be written in C++. Consequently, a hardware implementation of the verified SystemC models naturally needs to be considered. To solve this bottleneck, two design flow methodologies, the direct synthesis method and the indirect synthesis method, have been proposed in the literature. Since the indirect synthesis method applies traditional hardware description language (HDL) synthesis technology as part of its design flow, it is adopted as the framework solution. As a result, a compiler tool that translates SystemC to VHDL is presented in this thesis. The details of equivalent constructs between the two languages are illustrated. Besides performing the obvious translation of many SystemC elements, the compiler focuses on the automated translation of control structures and expressions. Finally, a series of digital application modules in SystemC was tested. Simulations of the translated VHDL models yielded the expected results.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-4767
- Format
- Thesis
- Title
- Information-Theoretic Characterization of Dynamic Energy Systems.
- Creator
-
Bevis, Troy Lawson, Edrington, Chris S., Cartes, Dave, Foo, Simon Y., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The latter half of the 20th century saw tremendous growth in nearly every aspect of civilization. From the Internet to transportation, the various infrastructures relied upon by society have become exponentially more complex. Energy systems are no exception, and today the power grid is one of the largest infrastructures in the history of the world. The growing infrastructure has led to an increase not only in the amount of energy produced, but also in the expectations of the energy systems themselves. The need for a power grid that is reliable, secure, and efficient is apparent, and there have been several initiatives to provide such a system. These increased expectations have led to growth in the renewable energy sources being integrated into the grid, a change that increases efficiency and disperses generation throughout the system. Although this change in the grid infrastructure is beneficial, it poses grand challenges in system-level control and operation. As the number of sources increases and becomes geographically distributed, the control systems are no longer local to the system. This means that communication networks must be enhanced to support multiple devices that must communicate reliably. A common solution is to use wide area networks for the communication network, as opposed to point-to-point communication. Although a wide area network will support a large number of devices, it generally comes with a compromise in the form of latency in the communication system; the device controller now has latency injected into its feedback loop. Also, renewable energy sources are largely non-dispatchable generation; that is, they are never guaranteed to be online and supplying the demanded energy. As renewable generation is typically modeled as a stochastic process, it would be useful to include this behavior in the control system algorithms.
The combination of communication latency and stochastic sources is compounded by the dynamics of the grid itself. Loads are constantly changing, as are the sources; this can sometimes lead to quick changes in system states. A metric is needed that can take all of the factors detailed above into consideration: it must account for the amount of information available in the system and the rate at which that information loses its value. In a dynamic system, information is only valid for a length of time, and the controller must take into account the decay of currently held information. This thesis presents information theory metrics in a way that is useful for application to dynamic energy systems. A test case involving the synchronization of several generators is presented for analysis and application of the theory. The objective is to synchronize all the generators and connect them to a common bus. As the phase shift of each generator is a random process, the effects of latency and information decay can be directly observed. The results of the experiments clearly show that the expected outcomes are observed and that entropy and information theory provide a valid metric for timing requirement extraction.
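The two generic ingredients of such a metric can be sketched: an empirical Shannon entropy of a measured quantity, and a decay model that discounts information by its age. The exponential decay form and the `decay_rate` parameter below are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

def shannon_entropy(samples, bins=32):
    """Empirical Shannon entropy (in bits) of a measured quantity."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()      # drop empty bins before the log
    return float(-(p * np.log2(p)).sum())

def information_value(h0, age, decay_rate):
    """Discount information worth h0 bits by its age (assumed exponential model)."""
    return h0 * np.exp(-decay_rate * age)
```

A uniformly distributed phase measurement carries close to `log2(bins)` bits, a constant one carries none, and under the assumed model a controller would weight stale measurements by `information_value` before acting on them.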
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4719
- Format
- Thesis
- Title
- Super-High Efficiency Multijunction Photovoltaics for Solar Energy Harvesting.
- Creator
-
Bhattacharya, Indranil, Foo, Simon, Li, Hui, Meyer-Baese, Anke, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Energy harvesting and alternative renewable energy techniques are currently among the most sought-after research topics for engineers and scientists. Global warming has forced researchers to abandon the rampant use of coal technology and find alternative ways to harvest energy, whether from sunlight, wind, water, tides, geothermal heat, ocean waves, bio-fuels, etc. Sunlight is the most abundant renewable energy source, with an intensity of approximately 0.1 W/cm² and over 1.5×10²² J (15,000 exajoules) reaching the Earth's surface every day. This enormous energy is roughly 10,000 times greater than the world's daily consumption of 1.3 EJ. Single-junction solar PV cells have produced only modest conversion efficiencies; the commonly available photovoltaics have conversion efficiencies in the range of 8%–12%. This limitation has led to cutting-edge research in the photovoltaic area, giving rise to the concept of multijunction solar photovoltaic cells. Multijunction solar cells direct the sunlight toward matched spectral sensitivity by splitting the spectrum into smaller slices. The main challenge in the photovoltaic industry is to make the modules more cost-effective. High-efficiency multijunction photovoltaics have played a very significant role in reducing cost through the concentrator photovoltaic systems being implemented around the world. The National Renewable Energy Laboratory (NREL) and the US Department of Energy have funded several III-V multijunction solar cell projects. In our research we introduce a new three-layer multijunction photovoltaic material based on InP/InGaAs/InGaSb, and four-layer PV structures comprised of AlGaAs/GaAs/InGaAs/InGaSb and AlGaAs/InP/InGaAs/InGaSb, and draw a comparison of solar energy absorption, reflection, and transmission with existing single-junction and multijunction cells. We discovered that the inclusion of the InGaSb layer in the design makes a significant difference in absorption in the spectral range of 598 nm–800 nm, contributing to a higher efficiency of the solar cell.
- Date Issued
- 2009
- Identifier
- FSU_migr_etd-4641
- Format
- Thesis
- Title
- Torque Per Ampere Strategies in Switched Reluctance Machines at Low Speeds.
- Creator
-
Akar, Furkan, Edrington, Chris S., Andrei, Petru, Weatherspoon, Mark H., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The Switched Reluctance Machine (SRM) is one of the oldest members of the electric machine family; it is known for its simple structure, ruggedness, and inexpensive manufacturability. Despite its numerous advantages, at low speeds the SRM suffers from torque ripple, which is not significant at high speeds because it is filtered by the moment of inertia of the rotor. Torque control at low speeds is therefore crucial. The SRM is a highly nonlinear machine; in the case of multiphase excitation, it becomes even more nonlinear due to mutual coupling effects, and ignoring these effects may cause error in predicting the electromagnetic torque and increase the torque ripple. In addition, the minimization of copper losses is another important task, because high copper losses reduce the available phase voltage and decrease the service life of a battery feeding the SRM. In short, the copper losses affect the efficiency of the SRM control system. Finally, the objectives of an effective and efficient SRM control should be to reduce torque ripple while considering the nonlinear characteristics of the machine as well as the mutual coupling effects, and to increase efficiency by minimizing copper losses. This thesis proposes a new maximum torque per ampere (MTA) strategy for SRMs at low speeds that aims to reduce not only the torque ripple but also the copper losses, while taking mutual effects between adjacent phases into consideration. Finite Element Analysis (FEA) is conducted to obtain the electromagnetic torque and flux-linkage characteristics of a 4-phase 8/6 SRM under both single-phase and multiphase excitation. After this step, the optimum phase current profiles are determined by the proposed method based on Particle Swarm Optimization (PSO).
This new methodology is then validated by comparing it with the conventional control method and the `balanced commutator' method, which is an efficient MTA strategy. These comparisons are realized via static and dynamic SRM models that utilize the data retrieved from the FEA. Hardware measurements are taken from another 4-phase 8/6 SRM under both single-phase and multiphase excitation so as to derive its torque-angle characteristic. Subsequently, the demanded current values for different torque levels are computed according to the conventional, `balanced commutator', and PSO methods. Finally, the PSO technique is judged against the other two methods at selected rotor positions in terms of torque ripple, copper losses, and torque-per-ampere ratios.
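Particle Swarm Optimization itself is generic, so it can be sketched compactly. The function below is a minimal PSO over a box of current bounds; a copper-loss objective with a torque-demand penalty is shown in a comment as an illustrative use. The simplified torque model and every constant in it are assumptions for illustration, not the FEA-derived characteristics the thesis uses.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iter=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `cost` over box `bounds` via basic particle swarm optimization."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()            # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep particles in bounds
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        g = pbest[np.argmin(pbest_cost)].copy()
    return g, float(pbest_cost.min())

# Hypothetical objective: copper loss i1^2 + i2^2 plus a penalty enforcing an
# assumed torque model 0.5*i1^2 + 0.8*i2^2 = 2.0 (illustration only):
# cost = lambda i: i[0]**2 + i[1]**2 + 100.0 * (0.5*i[0]**2 + 0.8*i[1]**2 - 2.0)**2
# best_currents, _ = pso_minimize(cost, [(0.0, 5.0), (0.0, 5.0)])
```

The same routine can, in principle, be run per rotor position to build current profiles, which is the role PSO plays in the strategy described above.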
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4685
- Format
- Thesis
- Title
- Fabrication and Evaluation of Polyvinylidene Fluoride/Polyvinyl Alcohol (PVA/PVDF) Hybrid Membranes for Lithium-Air Battery Applications.
- Creator
-
Akpanekong, Emem, Liu, Tao, Zheng, Jim P., Andrei, Petru, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
A new type of hybrid hydrophobic/hydrophilic membrane is proposed in this thesis to improve the electrochemical performance of the Lithium-Air battery operated with dual electrolytes. The dual electrolytes comprise an organic and an aqueous electrolyte, separated from one another by this solid-state hybrid polymer membrane so that they do not intermix. The solid-state hybrid polymer membrane is also conductive, to facilitate ionic charge-carrier transport between the dual electrolytes. With polyvinylidene fluoride (PVDF) as the hydrophobic polymer and polyvinyl alcohol (PVA) as the hydrophilic polymer, the hybrid membranes were prepared by phase inversion and polymer solution casting processes to test the novel concept proposed in this study. Moreover, the ionic conductivity, electrochemical stability, permeability, and morphology of the prepared PVDF/PVA hybrid membranes were investigated to examine their suitability for Lithium-Air battery applications. The experimental results suggest that the PVDF/PVA hydrophobic/hydrophilic hybrid membrane is stable and potentially suitable for improving the performance of Lithium-Air batteries operated with dual electrolytes.
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4686
- Format
- Thesis
- Title
- Interleaved Multi-Phase Isolated Bidirectional DC-DC Converter and Its Extension.
- Creator
-
Wang, Zhan, Li, Hui, Meyer-Baese, Anke, Foo, Simon Y., Zheng, Jim P., Andrei, Petru, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Recently, along with the development of renewable energy technology and the threat of energy shortage, hybrid electric vehicles (HEVs) and DC microgrid systems have been attracting increasing attention in industry and academia. By combining an internal combustion engine (ICE) with high-performance energy storage such as batteries, fuel cells, and ultracapacitors, HEVs can achieve twice the fuel economy of conventional vehicles. The DC microgrid is an appropriate system to interconnect dc energy sources and to supply high-quality power. To enable the use of dc energy storage, developing power converters as the power electronics interface for dc energy sources has become pressing. Currently, the voltage-type bidirectional converter is the dominant topology in dc-dc applications, but it is not suitable for renewable energy sources with a wide voltage range. Although some current-type converters have been proposed, most of them address unidirectional or low-power systems. For a high-power energy storage system, high voltage conversion ratios, high input currents, and a wide input voltage range are the major barriers to achieving high-efficiency power conversion. This thesis proposes a novel three-phase current-fed dual-active-bridge (DAB) dc-dc converter with transformer isolation to overcome these obstacles. The major features of the proposed converter are the following: (1) increased converter power rating by paralleling phases; (2) reduced size of the input dc inductors and dc-link capacitor through interleaved control; (3) Zero-Voltage Switching (ZVS) over a wide load range and wide input voltage range without auxiliary circuitry. High conversion efficiency above 94% is verified over a wide input voltage range. The detailed operation analysis and experimental results supporting the proposed converter are given in Chapter 3.
To integrate primary sources and energy storage, this thesis also proposes an integrated three-port bidirectional dc-dc converter based on the three-phase current-fed DAB converter. Compared to individual dc-dc converters, the major advantages of the proposed integrated three-port converter include: (1) higher power density due to the multiport and multiphase interleaved structure; (2) easier implementation of centralized control; (3) decoupled power management control due to naturally decoupled control variables; (4) zero-voltage switching in different operation modes. Compared to a two-port dc-dc converter, an integrated three-port dc-dc converter has many more operation modes. Based on the integrated three-port converter, a photovoltaic (PV) system with battery backup is discussed. The PV panel, with its wide voltage variation, is connected to the current-fed port, and the battery, an energy storage element with small voltage variation, is connected to the dc link. By controlling the phase-shift angle, the power exchange between the low-voltage side (LVS) and the high-voltage side (HVS) can be controlled. By controlling the duty cycle, the PV can realize maximum power point tracking (MPPT) control. The battery, as an energy buffer, charges or discharges to bridge the gap between generated and required power. For the grid-connected mode, it also helps manage the battery's state of charge (SOC). The detailed analysis, simulation, and experimental verification are provided in Chapter 4. Finally, the last chapter concludes the previous work and outlines the scope of future work.
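For intuition about how a phase-shift angle controls power flow in a DAB, the classic idealized single-phase, lossless relation can be sketched. The thesis's converter is a three-phase current-fed variant, so this two-port baseline, with assumed example values for voltages, turns ratio `n` (secondary voltage referred to the primary as `v2/n`), switching frequency `fs`, and leakage inductance `L`, is only an illustration.

```python
import numpy as np

def dab_power(v1, v2, n, fs, L, phi):
    """Idealized single-phase DAB power flow vs. phase shift phi (radians).

    v2 is referred to the primary through turns ratio n (assumed convention);
    L is the total leakage inductance seen from the primary. Valid for
    |phi| <= pi/2 in this lossless model.
    """
    return v1 * (v2 / n) * phi * (np.pi - np.abs(phi)) / (2 * np.pi**2 * fs * L)
```

In this model the transfer peaks at phi = pi/2, where it equals `v1*(v2/n)/(8*fs*L)`, and reversing the sign of phi reverses the power flow, which is what makes the topology bidirectional.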
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-6050
- Format
- Thesis
- Title
- Phase-Shift DC-DC Converters' Digital Control, Control Hardware in the Loop and Hardware Real Time Simulation Study.
- Creator
-
Mirtalebi, Brian Mohsen, Li, Hui, Andrei, Petru, Meyer-Baese, Uwe H., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The backbone of simulation is mathematical equations, from simple linearization methods to the solution of complex differential equations. Although simulating a real system with realistic outcomes is hard to achieve, minimizing the amount of rework required after design makes simulation a very desirable way to see results before building a device. The scope of this thesis is to bring to light the concept of simulating power electronics devices purely in the form of equations, in order to reduce the cost of development, to speed up a product's time to market, and to establish design benchmarks that guide the designer in the right direction. The thesis topic started as a simple application of converting an analog-based control of a DC-DC converter to a DSP-based digital control, but it quickly developed into a control hardware-in-the-loop project. This shows how powerfully and effectively a DSP-based simulation can perform. This, together with demand from the application engineering perspective, led the author to explore different methods that could be even more effective. In researching the idea, the author found that some work had been done in the past, but nothing to the effect of utilizing a DSP to simulate DC-DC converters. Therefore, the last part of this thesis studies the feasibility of utilizing a DSP as a real-time simulator of power electronics devices, especially high-frequency DC-DC converters. Although the results of this last part are not definitive, there are very strong signs that DSP-based real-time simulation of power electronics devices holds great potential for the future. Among the further advantages real-time simulation can bring are speed, as opposed to slow, off-line, computer-based simulation, significant cost reduction, portability, and guided design.
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5045
- Format
- Thesis
- Title
- Independent Component Analysis Algorithm FPGA Design to Perform Real-Time Blind Source Separation.
- Creator
-
Odom, Crispin, Meyer-Baese, Uwe, Foo, Simon, Roberts, Rodney, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The conditions that arise in the Cocktail Party Problem prevail across many fields, creating a need for Blind Source Separation (BSS). The need for BSS has become prevalent in several fields of work, including array processing, wireless communications, audio and acoustics, speech processing, medical signal processing, and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms. ICA proves useful for applications needing real-time signal processing. The goal of this thesis was to perform an extensive study of the ability and efficiency of Independent Component Analysis algorithms to perform blind source separation on mixed signals in software, and of their implementation in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), Fast ICA, and Equivariant Adaptive Separation via Independence (EASI) ICA algorithms were examined and compared. The best algorithm would require the least complexity and the fewest resources while effectively separating mixed sources; this was the EASI algorithm. The EASI ICA was implemented in hardware on an FPGA to analyze its performance in real time.
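The EASI update can be sketched in a few lines: for each new sample, form the output y = Bx and apply the equivariant serial update to the separating matrix B. This is a minimal NumPy illustration under assumptions (a cubic nonlinearity, which suits sub-Gaussian sources, and a fixed step size `mu`), not the thesis's FPGA implementation.

```python
import numpy as np

def easi_separate(X, mu=0.002, sweeps=1):
    """EASI blind source separation.

    X: (n_sources, n_samples) array of observed mixtures. Returns the
    estimated separating matrix B, so that B @ x recovers the sources
    up to scale and permutation.
    """
    n = X.shape[0]
    B = np.eye(n)                      # separating matrix estimate
    I = np.eye(n)
    for _ in range(sweeps):
        for x in X.T:                  # serial (sample-by-sample) updates
            y = B @ x
            g = y ** 3                 # cubic nonlinearity (sub-Gaussian sources)
            # whitening term (yy^T - I) plus skew-symmetric separation term
            H = (np.outer(y, y) - I) + np.outer(g, y) - np.outer(y, g)
            B -= mu * H @ B
    return B
```

With two sub-Gaussian sources, say a sinusoid and a square wave, mixed by a 2×2 matrix A, the global system B @ A converges toward a scaled permutation matrix; the per-sample update with only matrix-vector products is what makes the algorithm attractive for FPGA implementation.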
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-5079
- Format
- Thesis
- Title
- Application of Artificial Intelligence to Rotating Machine Condition Monitoring.
- Creator
-
Nyanteh, Yaw Dwamena, Edrington, Chris S., Cartes, David A., Oates, William, Roberts, Rodney, Andrei, Petru, Srivastava, Sanjeev K., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Systems with critical functionality that are prone to damage from excessive stress levels in their operating conditions and working environment require health monitoring. Condition or health monitoring involves acquiring data that can be analyzed to determine the occurrence of faults, the type of fault, the severity of a fault, and when the next fault will occur. This research considers new fault analysis techniques for rotating electrical machines using Artificial Intelligence (AI) techniques. The analysis is carried out in three sections: fault diagnosis, fault detection, and fault prognosis. For fault diagnosis, Finite Element Analysis (FEA) is used to model different faults in a Permanent Magnet Synchronous Machine (PMSM), which is then analyzed by classification using five Artificial Intelligence techniques. The original large-dimensional dataset is first used in the classification process, and the different fault classifiers are compared based on their performance on data from the FEA model. The dimensions of the dataset are then reduced using four different manifold reduction techniques; manifold reduction is carried out to lessen the computational burden of fault classification on high-dimensional data. Two new techniques for fault detection using AI are presented and applied to PMSMs through computer simulations and experimental data from an actual PMSM. One technique, the Peak-to-Peak technique, uses an Artificial Neural Network (ANN) trained with particle swarm optimization (PSO) and can distinguish short-circuit faults from loading transients. In the second, the Turn-to-Turn method, the zero-sequence current components are used to determine the number of shorted turns in the stator windings with an ANN trained using the Extended Kalman Filter (EKF) method. Finally, a new method of determining the time-to-breakdown of insulation systems is presented as a fault prognosis approach.
Also, a new micro-simulation model is presented for simulating the breakdown of dielectric materials. The new prognostics method is based on a macro model developed in conjunction with ANNs, and the prognosis approach associates the breakdown characteristics of dielectrics with the Partial Discharge (PD) that takes place during dielectric breakdown.
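The diagnosis pipeline described above, dimensionality reduction followed by a classifier, can be sketched generically. The synthetic dataset, the three fault classes, PCA as the reduction step, and the nearest-centroid classifier below are all illustrative stand-ins, not the thesis's FEA data, its four manifold techniques, or its five AI classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the FEA dataset: 3 fault classes described by
# high-dimensional feature vectors (e.g. flux/current signatures).
n_per_class, n_features = 50, 100
centers = rng.normal(0, 5, size=(3, n_features))
X = np.vstack([c + rng.normal(0, 1, (n_per_class, n_features)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

# Step 1: linear dimensionality reduction via PCA (SVD of the centered data),
# shrinking the computational burden of the downstream classifier.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:3].T                      # keep the top 3 principal components

# Step 2: a simple fault classifier (nearest class centroid) on the reduced data.
centroids = np.array([Z[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

Reducing 100 features to 3 components before classification is what keeps the per-sample cost low, which matters when classifiers are retrained or compared repeatedly.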
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-8713
- Format
- Thesis
- Title
- Applications of Compressive Sampling for Reconstruction in Side-Scan Sonar Imagery.
- Creator
-
Skinner, Dana E., Foo, Simon, Meyer-Baese, Anke, Hilton, Amy Chan, Meyer-Baese, Uwe, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The Nyquist-Shannon sampling theorem states that, to reconstruct a signal exactly, it must be sampled at a rate at least twice its largest bandwidth in order to avoid aliasing. However, it has been mathematically proven that compressive sampling (CS) can defy this theorem. The basic idea behind compressive sampling is to transform the image or signal into a suitable basis and then retain only the important expansion coefficients. CS is incomplete without a suitable method for reconstruction. Theoretically, reconstruction relies on minimization and optimization techniques to solve this complex, nearly NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images such as underwater sonar images; the majority of the research remains in the medical imaging area. In the proposed work, we test current methods of reconstruction against an alternative optimization algorithm that has yet to be used in compressive sampling, specifically the cross-entropy method. Our focused application is maintaining pertinent information, such as mine-like objects or tumor-like areas, in side-scan sonar (SSS) images, magnetic resonance imaging (MRI), and mammography, respectively. Currently JPEG-2000, a multi-level discrete wavelet transform, is the industry standard in image compression, but this post-processing could be avoidable with compressive sampling. A more efficient process of compression during the sampling stage, with post-processing reconstruction, has remained untested for SSS images. Hence the motivation is to test current and new methods on SSS images, along with medical images as a baseline.
The proposed methods can be a competitive alternative to the industry-standard post-processing using the wavelet-based JPEG-2000 and JPEG on SSS images, MRI, and mammography. The final proposed work introduces a new method to denoise SSS images. This method solves the popular Total Variation (TV) problem by splitting the energies of the l1- and l2-norms. The resulting image has a higher resolution and, depending on the parameters chosen, is smoother.
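As a generic illustration of the reconstruction step, recovering a sparse signal from fewer measurements than samples can be posed as l1-regularized least squares and solved with iterative soft-thresholding (ISTA). This is a textbook CS sketch with made-up dimensions, not the thesis's cross-entropy method or its sonar data:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding: recover a sparse x from measurements y = A @ x."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L        # gradient step on 0.5 * ||A x - y||^2
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold (l1 prox)
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5                           # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))      # random Gaussian sensing matrix
x_hat = ista(A, A @ x_true)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The point of the example is that 80 random measurements suffice to recover a 200-sample signal with 5 nonzeros, which is the regime CS reconstruction methods compete in.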
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-8717
- Format
- Thesis
- Title
- A 'Proton-Free' Coil for Magnetic Resonance Imaging of Porous Media.
- Creator
-
Seshadhri, Madhumitha, Foo, Simon Y., Brey, William W., Andrei, Petru, Arora, Rajendra K., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Nuclear Magnetic Resonance Imaging, or MRI, is a non-invasive imaging technique that exploits the inherent magnetic moment of nuclei with nonzero spin. Since hydrogen is the most abundant such atom, it forms the basis of NMR imaging techniques. The transparency of many materials to RF irradiation, coupled with access to a large variety of contrast parameters and the non-destructiveness of the method, makes it highly useful in materials imaging. In a number of situations there is a critical need to evaluate the distribution of small amounts of water adsorbed throughout a solid sample. One of these pertains to Spray-On Foam Insulation (SOFI), a thermal insulation material used on the liquid hydrogen and oxygen tanks of space shuttles. The basic components of an NMR spectrometer are the magnet, amplifiers, transceiver, and imaging coils. In MRI, imaging coils are radio-frequency coils that serve two purposes: the excitation of nuclear spins and the detection of nuclear precession. This thesis aims to design an RF coil for 1H imaging of foam and the water trapped within it. The single-turn solenoid is probably the simplest and most efficient RF coil design; this type was selected for its high sensitivity and its homogeneous field throughout the volume of the coil. The coil has been optimized in terms of dimensions, feasibility, tuning and matching strategies, and performance.
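The tuning step mentioned above amounts to resonating the coil's inductance with a capacitance at the 1H Larmor frequency: f = (γ/2π)·B0 and C = 1/((2πf)²L). The field strength and coil inductance below are illustrative placeholder values, not figures from the thesis:

```python
import math

gamma_over_2pi = 42.577e6     # 1H gyromagnetic ratio, Hz per tesla
B0 = 4.7                      # magnet field strength in tesla (assumed for illustration)
L = 50e-9                     # single-turn solenoid inductance in henries (assumed)

f_larmor = gamma_over_2pi * B0                       # about 200 MHz at 4.7 T
C_tune = 1.0 / ((2 * math.pi * f_larmor) ** 2 * L)   # series/parallel tuning capacitance
```

Under these assumptions the tuning capacitance comes out on the order of 10 pF, which is why small trimmer capacitors are the usual tuning elements at these frequencies.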
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-1806
- Format
- Thesis
- Title
- Spectrum Management in Wireless Networks.
- Creator
-
Ma, Xiaoguang, Yu, Ming, Duan, Zhenhai, Harvey, Bruce A., Kwan, Bing W., Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
The limited spectrum provided by the IEEE 802.11 standard is not efficiently utilized in existing wireless networks. The inefficiency comes from three issues in spectrum management. First, the utilization of the available non-overlapping channels is not evenly distributed; that is, closely deployed users tend to congregate in the same or interfering channels. This incurs an excessive amount of co-channel interference (CCI), causing collisions and thus decreasing network throughput. Second, the dynamic radio channel allocation (RCA) problem is non-deterministic polynomial-time hard (NP-hard); the employed heuristic optimization methods, such as simple minimization or maximization processes or certain slow learning processes, cannot efficiently find a global optimum. Third, the default transmission power of a user reserves unnecessarily large deference areas, in which the collision avoidance (CA) mechanisms prohibit simultaneous transmissions in a given channel; consequently, spatial channel reuse is significantly reduced. For the first issue, many RCA algorithms have been proposed, with the objective of minimizing CCI among co-channel users while increasing network throughput. Most RCA algorithms use heuristic optimization methods whose performance is limited by one or more of the following aspects. 1) Their evaluation variables may not properly reflect the CCI levels in a network, e.g., the number of co-channel users or the local energy levels. 2) The dynamic RCA problem is NP-hard, and the employed heuristic optimization methods cannot efficiently find a global optimum. 3) The information gathering and processing approaches in these RCA algorithms require prohibitive overheads, such as a common control channel or a central controller.
4) Some unrealistic premises are used, e.g., that all users in the same channel can hear each other. 5) Most RCA algorithms are designed for specific networks; for example, an algorithm designed for organized-or-information-sharing (OIS) networks does not work properly in non-organized-nor-information-sharing (NOIS) networks. For the second issue, it is worth pointing out that the complexity of the existing distributed RCA algorithms has not been studied. For the third issue, various power control algorithms, including courtesy algorithms and opportunistic algorithms, have been introduced to restrain transmission power and thus minimize deference areas, which in turn maximizes spatial channel reuse. The courtesy algorithms assign a node a specific power level according to the link length (the distance between the transmitter and the receiver) and the noise and interference power level. These algorithms can be further classified into linear and non-linear power assignment algorithms. The linear power assignment algorithms are so aggressive that they may introduce extra hidden terminals, which cause additional unregulated collisions, whereas the non-linear algorithms are too conservative to maximize the power control benefits. The opportunistic power control algorithms allow conditional violations of the CA mechanisms, i.e., a deferring node can initiate a transmission with a deliberately calculated transmission power so that the ongoing transmission will not be affected. However, the power calculation is based on constants that are only valid in certain wireless scenarios. Related to this issue, a more difficult problem is how to improve network throughput when the demanded data rate within a certain area exceeds the limit of throughput density, defined as the upper limit of the total throughput constrained by the modulation techniques and CA mechanisms in the area.
Note that no existing algorithm, neither RCA nor power control, is able to solve this problem. In this work, we focus our study on the above issues in the spectrum management of wireless networks. Our contributions can be summarized as follows. Firstly, to address the first issue, we propose an annealing Gibbs sampling (AGS) based distributed RCA (ADRCA) algorithm. The ADRCA algorithm has the following advantages. 1) It uses average effective channel utilization (AECU) to evaluate channel conditions; AECU has a simple relationship with CCI and can accurately reflect channel congestion conditions. 2) It employs the AGS optimization method, which divides a global optimization problem into a set of distributed local optimization problems, each of which can be solved by simulating a Markov chain; the stationary distribution of the Markov chains is a globally optimized solution. 3) It includes three variants, namely AGS1, AGS2, and AGS3, which adapt to various types of wireless networks with different optimization objectives. AGS1 is designed to search for a globally optimal channel assignment in OIS networks; AGS2 is proposed to work in NOIS networks and pursue maximum individual performance. Adding a prerequisite to the RCA procedures, AGS3 focuses on cost-effectiveness, reduces channel reallocation attempts, and enhances system stability without significantly degrading its optimization performance. Secondly, to address the second issue, we propose a hybrid approach to study the computational scale (CS), defined as the number of channel reallocations until a network reaches a convergent state. First, we propose a simple relationship model to describe the interference relation between an AP and its neighboring APs.
Second, for one of the simplest cases in the relationship model, we find an analytical solution for the CS and validate it by simulation. Third, for more general cases, we combine the cases with similar CS means using one-way analysis of variance (ANOVA) and find the upper bound of the CS with extensive simulations. The simulation results demonstrate that the hybrid approach is simple and accurate compared to traditional intuitive comparison methods. Based on this hybrid approach, an upper limit of the CS is found for AGS3 in a practical network scenario. Thirdly, to address the third issue and raise the limit of throughput density, we propose the channel allocation with power control (CAP) strategy, which integrates the ADRCA algorithm and the digitized adaptive power control (DAPC) algorithm to achieve a synergetic benefit between power control and RCA that is not considered by existing RCA algorithms. The synergy comes from two aspects. First, by reducing the transmission power of each node, DAPC lowers CCI levels, allows more simultaneous transmissions within a certain area, increases spatial reuse, and raises the limit of throughput density; it also reduces the number of nodes competing for a given channel, and thus significantly decreases the CS of ADRCA. Second, by striving to assign interfering neighbors to non-overlapping channels, ADRCA minimizes the number of hidden terminals introduced by the power control processes. The integration causes two potential problems. First, since most RCA algorithms are heuristic, after a system converges any change in transmission power may trigger unnecessary channel reallocation processes, leading to extra computational costs. Second, channel reallocations can also invalidate current transmission power assignments. Both problems significantly impair system stability.
The CAP strategy addresses these two problems as follows: to mitigate the first, a node estimates the conditions of a new channel and uses adaptive transmission power accordingly; for the second, the node calculates the transmission power using linear power control algorithms and rounds it up to the next larger level in a given set of predetermined power levels. Several statistical methods are applied in our study, including the Markov chain Monte Carlo (MCMC) method, distribution model fitting, the paired t-test, and the ANOVA test. They are more accurate and efficient than traditional intuitive comparison methods, making this study an important cornerstone for further research. We have conducted extensive simulations to demonstrate the effectiveness of the proposed methods. Our simulation results show that AGS1 can achieve a global optimum in most OIS network scenarios; with a 95% confidence level, it achieves 99.75% of the global maximum throughput. AGS2 performs on par with AGS1 in NOIS networks. AGS3 reduces the CS by as much as 98% compared to AGS1 and AGS2. The simulation results also demonstrate that, compared with the standard MAC protocol, CAP increases the overall throughput by up to 9.5 times and shortens the end-to-end delay by up to 80% for UDP traffic.
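The core AGS idea, each node locally resampling its channel from a Boltzmann distribution over measured interference while a temperature parameter is annealed toward zero, can be sketched as follows. The interference weights, the energy function, and the cooling schedule here are synthetic placeholders standing in for the thesis's AECU metric and its tuned parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n_aps, n_channels = 12, 3

# Synthetic symmetric interference weights between neighboring APs.
Wif = rng.random((n_aps, n_aps))
Wif = (Wif + Wif.T) / 2
np.fill_diagonal(Wif, 0.0)

chan = rng.integers(0, n_channels, n_aps)      # random initial channel assignment

def local_energy(i, c, chan):
    """Co-channel interference AP i would see on channel c (stand-in for AECU)."""
    mask = (chan == c)
    mask[i] = False
    return Wif[i, mask].sum()

T = 1.0
for sweep in range(60):                        # annealing schedule
    for i in range(n_aps):                     # each AP resamples its own channel
        E = np.array([local_energy(i, c, chan) for c in range(n_channels)])
        p = np.exp(-(E - E.min()) / T)         # Gibbs (Boltzmann) distribution
        chan[i] = rng.choice(n_channels, p=p / p.sum())
    T *= 0.93                                  # cool toward the greedy choice

total_cci = sum(local_energy(i, chan[i], chan) for i in range(n_aps))
```

At high temperature the sampler explores freely; as T falls, each AP's choice concentrates on its lowest-interference channel, which is the mechanism that lets the distributed chains settle into a low-CCI assignment.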
- Date Issued
- 2010
- Identifier
- FSU_migr_etd-2818
- Format
- Thesis
- Title
- Dynamic Resource Management in Wireless Networks.
- Creator
-
Malvankar, Aniket A. (Aniket Ashok), Yu, Ming, Duan, Zhenhai, Harvey, Bruce, Foo, Simon, Department of Electrical and Computer Engineering, Florida State University
- Abstract/Description
-
Wireless communication has been a rapidly growing industry over the past decade. The mobile and portable device market has boomed with the advent of new data, multimedia, and voice technologies, and technical advances in mobile and personal computing have made wireless communication a crucial segment of the communication industry. The introduction of smart phones and handheld devices with internet browsing, email, and multimedia services has made it essential to add features like security and reliability to the wireless network. Wireless sensor networks, a subset of wireless ad hoc networks, have been deployed in various military and defense applications. The popularity of 802.11 technologies has led to large-scale manufacturing of 802.11 chipsets and drastically reduced their cost, enabling the deployment of large-scale Wi-Fi networks resembling sensor environments. As wireless communication uses the air interface, it is challenging to support such advanced QoS (Quality of Service) features in the presence of external interference. Typical interference comes from other electronic devices such as microwaves, from environmental sources such as rain, and from physical structures such as buildings. It is also well known that battery technology has not kept pace with the electronics industry; consequently, to keep these wireless devices portable, it has become essential to limit the energy sources embedded within them. Wireless equipment designers therefore have to combat interference with minimal power expenditure. To best utilize the limited resources of these wireless devices and guarantee QoS, it is essential to design specialized algorithms spanning all layers of the network. These algorithms should not only take network parameters into account but also dynamically adapt to changes in network configuration, traffic, and so on.
The complete set of such techniques constitutes what can be described as dynamic resource management in wireless networks. The proposed research aimed to design techniques such as dynamic channel allocation, energy-efficient clustering, and reliable power-aware routing. Clustering is one of the energy-efficient architectures in wireless ad hoc networks, used more specifically in sensor-network-like environments. Clustering is achieved by grouping devices together based on location, traffic generation, and similar criteria. Clustering not only limits the energy devices spend on communication but also aids in better channel utilization by avoiding collisions. Clustering ensures that devices communicate with their respective cluster head at the minimal required power, causing very little interference to devices in neighboring clusters. It also makes it possible to combine and compress information at the CH (Cluster Head) before relaying it to the central collection point or base station.
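The energy argument for clustering can be illustrated with the common first-order radio model, in which transmit energy grows with the square of distance, so nodes save energy by reaching a nearby cluster head instead of the distant base station. The positions, radio constants, and random head election below are all illustrative assumptions (a LEACH-style sketch, not this thesis's algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)
nodes = rng.random((40, 2)) * 100              # sensor positions in a 100 m x 100 m field
base_station = np.array([50.0, 120.0])         # base station outside the field

def tx_energy(d, bits=1000, e_elec=50e-9, e_amp=100e-12):
    """First-order radio model: energy to send `bits` over distance d (assumed constants)."""
    return (bits * (e_elec + e_amp * d ** 2)).sum() if np.ndim(d) else bits * (e_elec + e_amp * d ** 2)

# Direct transmission: every node reaches the base station on its own.
d_direct = np.linalg.norm(nodes - base_station, axis=1)
direct_total = tx_energy(d_direct)

# Clustered transmission: nodes send to the nearest of k cluster heads,
# which each relay one aggregated packet to the base station.
k = 4
heads = nodes[rng.choice(len(nodes), k, replace=False)]
d_to_head = np.linalg.norm(nodes[:, None] - heads[None], axis=2).min(axis=1)
d_head_bs = np.linalg.norm(heads - base_station, axis=1)
clustered_total = tx_energy(d_to_head) + tx_energy(d_head_bs)
```

Because the amplifier term scales with d², shrinking the typical link from the node-to-base-station distance to the node-to-head distance dominates the energy budget, which is the saving clustering architectures exploit.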
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-2762
- Format
- Thesis