Current Search: Okten, Giray
Search results
Pages
- Title
- First Semester in Numerical Analysis with Julia.
- Creator
-
Ökten, Giray
- Abstract/Description
-
First Semester in Numerical Analysis with Julia presents the theory and methods, together with the implementation of the algorithms using the Julia programming language (version 1.1.0). The book covers computer arithmetic, root-finding, numerical quadrature and differentiation, and approximation theory. The reader is expected to have studied calculus and linear algebra. Some familiarity with a programming language is beneficial, but not required. The programming language Julia will be introduced in the book. The simplicity of Julia allows bypassing the pseudocode and writing a computer code directly after the description of a method while minimizing the distraction the presentation of a computer code might cause to the flow of the main narrative. This document will be corrected as errors are found; refer to the Notes section of this record for the most recent version.
- Date Issued
- 2019-04-23
- Identifier
- FSU_libsubv1_scholarship_submission_1556028278_15938059, 10.33009/jul
- Format
- Citation
- Title
- The Emergence of Collective Phenomena in Systems with Random Interactions.
- Creator
-
Abramkina, Volha, Volya, Alexander, Okten, Giray, Capstick, Simon, Rogachev, Grigory, Rikvold, Per Arne, Department of Physics, Florida State University
- Abstract/Description
-
Emergent phenomena are one of the most profound topics in modern science, addressing the ways that collectivities and complex patterns appear due to multiplicity of components and simple interactions. Ensembles of random Hamiltonians allow one to explore emergent phenomena in a statistical way. In this work we adopt a shell model approach with a two-body interaction Hamiltonian. The sets of the two-body interaction strengths are selected at random, resulting in the two-body random ensemble (TBRE). Symmetries such as angular momentum, isospin, and parity entangled with complex many-body dynamics result in surprising order discovered in the spectrum of low-lying excitations. The statistical patterns exhibited in the TBRE are remarkably similar to those observed in real nuclei. Signs of almost every collective feature seen in nuclei, namely, pairing superconductivity, deformation, and vibration, have been observed in random ensembles. In what follows a systematic investigation of nuclear shape collectivities in random ensembles is conducted. The development of the mean field, its geometry, multipole collectivities and their dependence on the underlying two-body interaction are explored. Apart from the role of static symmetries such as the SU(2) angular momentum and isospin groups, the emergence of dynamical symmetries including the seniority SU(2), rotational symmetry, as well as the Elliott SU(3) is shown to be an important precursor for the existence of geometric collectivities.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-0104
- Format
- Thesis
- Title
- Adaptive Spectral Element Methods to Price American Options.
- Creator
-
Willyard, Matthew, Kopriva, David, Eugenio, Paul, Case, Bettye Anne, Gallivan, Kyle, Nolder, Craig, Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
We develop an adaptive spectral element method to price American options, whose solutions contain a moving singularity, automatically and to within prescribed errors. The adaptive algorithm uses an error estimator to determine where refinement or de-refinement is needed and a work estimator to decide whether to change the element size or the polynomial order. We derive two local error estimators and a global error estimator. The local error estimators are derived from the Legendre coefficients and the global error estimator is based on the adjoint problem. One local error estimator uses the rate of decay of the Legendre coefficients to estimate the error. The other local error estimator compares the solution to an estimated solution using fewer Legendre coefficients found by the Tau method. The global error estimator solves the adjoint problem to weight local error estimates to approximate a terminal error functional. Both types of error estimators produce meshes that match expectations by being fine near the early exercise boundary and strike price and coarse elsewhere. The produced meshes also adapt as expected by de-refining near the strike price as the solution smooths and staying fine near the moving early exercise boundary. Both types of error estimators also give solutions whose error is within prescribed tolerances. The adjoint-based error estimator is more flexible, but costs up to three times as much as using the local error estimate alone. The global error estimator has the advantages of tracking the accumulation of error in time and being able to discount large local errors that do not affect the chosen terminal error functional. The local error estimator is cheaper to compute because the global error estimator has the added cost of solving the adjoint problem.
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-0892
- Format
- Thesis
- Title
- On the Multidimensional Default Threshold Model for Credit Risk.
- Creator
-
Zhou, Chenchen, Kercheval, Alec N., Wu, Wei, Ökten, Giray, Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This dissertation is based on the structural model framework for default risk that was first introduced by Garreau and Kercheval (2016) (henceforth: the "G-K model"). In this approach, the time of default is defined as the first time the log-return of the firm's stock price jumps below a (possibly stochastic) "default threshold" level. The stock price is assumed to follow an exponential Lévy process and, in the multidimensional case, a multidimensional Lévy process. This new structural model is mathematically equivalent to an intensity-based model where the intensity is parameterized by a Lévy measure. The dependence between the default times of firms within a basket is the result of the jump dependence of their respective stock prices and is described by a Lévy copula. To extend the previous work, we focus on generalizing the joint survival probability and related results to the d-dimensional case. Using the link between Lévy processes and multivariate exponential distributions, we derive the joint survival probability and characterize correlated default risk using Lévy copulas. In addition, we extend our results to include stochastic interest rates. Moreover, we describe how to use the default threshold as the interface for incorporating additional exogenous economic factors, and still derive basket credit default swap (CDS) prices in terms of expectations. If we make some additional modeling assumptions such that the default intensities become affine processes, we obtain explicit formulas for the single name and first-to-default (FtD) basket CDS prices, up to quadrature.
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Zhou_fsu_0071E_14012
- Format
- Thesis
- Title
- Modeling Order Book Dynamics Using Queues and Point Processes.
- Creator
-
Huang, He, Kercheval, Alec, Marquis, Milton, Nolder, Craig, Okten, Giray, Ewald, Brian, Department of Mathematics, Florida State University
- Abstract/Description
-
The objective of this dissertation is to study the queuing and point process models that try to capture as many features as possible of the high-frequency data of a limit order book. First, we use a generalized birth-death stochastic process to model the high-frequency dynamics of the limit order book, and illustrate it using parameters estimated from Level II data for a stock on the London Stock Exchange. A new feature of this model is that limit orders are allowed to arrive in multiple sizes, an important empirical feature of the order book. We can compute various quantities of interest without resorting to simulation, conditional on the state of the order book, such as the probability that the next move of the mid-price will be upward, or the probability, as a function of order size, that a limit ask order will be executed before a downward move in the mid-price. Furthermore, univariate and bivariate Hawkes processes are developed and calibrated to capture the "clustering" and "mutually exciting" features of the order arrivals in a limit order book. Although, due to technical reasons, probabilities of interest such as those of prices going up for the next move are not shown for this model, a Monte Carlo simulation algorithm for point processes, the thinning algorithm, is successfully modified to derive the cumulative distribution functions of some first-passage times in the order book.
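The thinning algorithm mentioned above can be sketched in a few lines; the exponential kernel and the parameter names mu, alpha and beta below are illustrative assumptions, not details taken from the dissertation.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=42):
    """Ogata-style thinning for a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    random.seed(seed)
    events = []
    t = 0.0
    while True:
        # Upper bound on the intensity from now on (it decays between events).
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += random.expovariate(lam_bar)  # propose the next candidate time
        if t > horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if random.random() * lam_bar <= lam_t:  # accept with prob lam_t / lam_bar
            events.append(t)
    return events

# Example (stationary since alpha/beta < 1):
# print(len(simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0)))
```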
- Date Issued
- 2012
- Identifier
- FSU_migr_etd-4919
- Format
- Thesis
- Title
- Sparse Factor Auto-Regression for Forecasting Macroeconomic Time Series with Very Many Predictors.
- Creator
-
Galvis, Oliver Kurt, She, Yiyuan, Okten, Giray, Beaumont, Paul, Huffer, Fred, Tao, Minjing, Department of Statistics, Florida State University
- Abstract/Description
-
Forecasting a univariate target time series in high dimensions with very many predictors poses challenges in statistical learning and modeling. First, many nuisance time series exist and need to be removed. Second, from economic theories, a macroeconomic target series is typically driven by a few latent factors constructed from some macroeconomic indices. Consequently, a high-dimensional problem arises in which deleting junk time series and constructing predictive factors simultaneously is meaningful and advantageous for the accuracy of the forecasting task. In macroeconomics, multiple categories are available with the target series belonging to one of them. With all series available we advocate constructing category-level factors to enhance the performance of the forecasting task. We introduce a novel methodology, the Sparse Factor Auto-Regression (SFAR) methodology, to construct predictive factors from a reduced set of relevant time series. SFAR attains dimension reduction via joint variable selection and rank reduction in high dimensional time series data. A multivariate setting is used to achieve simultaneous low rank and cardinality control on the matrix of coefficients, where the ℓ0-constraint regulates the number of useful series and the rank constraint sets the upper bound on the number of constructed factors. The doubly-constrained estimation is a nonconvex mathematical problem optimized via an efficient iterative algorithm with a theoretical guarantee of convergence. SFAR fits factors using a sparse low rank matrix in response to a target category series. Forecasting is then performed using lagged observations and shrinkage methods. We generate finite-sample data to verify our theoretical findings via a comparative study of the SFAR. We also analyze real-world macroeconomic time series data to demonstrate the usage of the SFAR in practice.
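As a rough schematic of the doubly constrained problem described above (an illustration, not the dissertation's exact formulation):

```latex
\min_{B \in \mathbb{R}^{p \times m}} \; \tfrac{1}{2}\,\lVert Y - XB \rVert_F^2
\quad \text{s.t.} \quad
\lVert B \rVert_{2,0} \le s \;\; (\text{at most } s \text{ nonzero rows}),
\qquad \operatorname{rank}(B) \le r,
```

so that the ℓ0-type constraint selects the useful series and the rank constraint caps the number of constructed factors.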
- Date Issued
- 2014
- Identifier
- FSU_migr_etd-8990
- Format
- Thesis
- Title
- Time Parallelization Methods for the Solution of Initial Value Problems.
- Creator
-
Yu, Yanan, Srinivasan, Ashok, Okten, Giray, Kumar, Piyush, Tyson, Gary, Yuan, Xin, Department of Computer Science, Florida State University
- Abstract/Description
-
Many scientific problems are posed as Ordinary Differential Equations (ODEs). A large subset of these are initial value problems, which are typically solved numerically. The solution starts by using a known state-space of the ODE system to determine the state at a subsequent point in time. This process is repeated several times. When the computational demand is high due to a large state space, parallel computers can be used efficiently to reduce the time of solution. Conventional parallelization strategies distribute the state space of the problem amongst processors and distribute the task of computing for a single time step amongst the processors. They are not effective when the computational problems have fine granularity, for example, when the state space is relatively small and the computational effort arises largely from the long time span of the initial value problem. The above limitation is increasingly becoming a bottleneck for important applications, in particular due to a couple of architectural trends. One is the increase in the number of cores on massively parallel machines. The high end systems of today have hundreds of thousands of cores, and machines of the near future are expected to support on the order of a million simultaneous threads. Computations that were coarse grained on earlier machines are often fine grained on these. Another trend is the increased number of cores on a chip. This has provided desktop access to parallel computing for the average user. A typical low-end user requiring the solution of an ODE with a small state space would earlier not consider a parallel system. However, such a system is now available to the user. Users of both the above environments need to deal with the problem of parallelizing ODEs with small state space. Parallelization of the time domain appears promising to deal with this problem. The idea behind this is to divide the entire time span of the initial value problem into smaller intervals and have each processor compute one interval at a time, instead of dividing the state space. The difficulty lies in that time is an intrinsically sequential quantity and one time interval can only start after its preceding interval completes, since we are solving an initial value problem. Earlier attempts at parallelizing the time domain were not very successful. This thesis proposes two different time parallelization strategies, and demonstrates their effectiveness in dealing with the bottleneck described above. The thesis first proposes a hybrid dynamic iterations method which combines conventional sequential ODE solvers with dynamic iterations. Empirical results demonstrate a factor of two to three improvement in performance of the hybrid dynamic iterations method over a sequential solver on an 8-core processor, while conventional state-space decomposition is not useful due to the communication overhead. Compared to Picard iterations (also parallelized in the time domain), the proposed method shows better convergence and speedup results when high accuracy is required. The second proposed method is a data-driven time parallelization algorithm. The idea is to use results from related prior computations to predict the states in a new computation, and then parallelize the new computation in the time domain. The effectiveness of this method is demonstrated on Molecular Dynamics (MD) simulations of Carbon Nanotube (CNT) tensile tests. MD simulation is a special application of initial value problems.
Empirical results show that the data-driven time parallelization method scales to two to three orders of magnitude larger numbers of processors than conventional state-space decomposition methods. This approach achieves the highest scalability for MD on general purpose computers. The time parallel method can also be combined with state space decomposition methods to improve the scalability and efficiency of the conventional parallelization method. This thesis presents a combined data-driven time parallelization and state space decomposition method and adapts it to MD simulations of soft matter, which is typically seen in computational biology. Since MD is an important atomistic simulation technique widely used in computational chemistry, biology and materials science, the data-driven time parallel method also suggests a promising approach for realistic simulations with long time spans.
- Date Issued
- 2010
- Identifier
- FSU_migr_etd-0911
- Format
- Thesis
- Title
- From Songs to Synapses, Ion Channels and Mathematical Modeling.
- Creator
-
Daou, Arij, Bertram, Richard, Ryan, Pamela, Johnson, Frank, Hyson, Richard, Wu, Wei, Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
Since the scientific study of birdsong began in the late 1950s, songbirds have emerged as impressive neurobiological models for aspects of human verbal communication because they learn to sequence their song elements, analogous, in many ways, to how humans learn to produce spoken sequences with syntactic structure. Thus, determining how spoken language evolved is more likely to become clearer with concerted efforts in researching songbirds. Some of the most fundamental questions in neuroscience are pursued through the study of songbirds. How does the brain generate complex sequential behaviors? How do we learn to speak? How do humans learn various behaviors by observing and imitating others? Where are the "prime movers" that control behavior? Which circuits in the brain control the order in which motor gestures of a learned behavior are generated? Among all these questions, of particular interest to us is the question of sequential behavior. Understanding the neural mechanisms that underlie sequential behavior and imitative learning is the holy grail of the field. The birdsong provided us with a uniquely powerful model for tackling this question in a system where the brain structures responsible for its generation are well known. We pursued the study of sequential neural activity in songbirds on three levels: behavioral, cellular and network. On the behavioral level, we developed a computational tool for automated, quantitative syllable-level analysis of bird song syntax. This tool aids songbird researchers and fanciers in comparing and quantifying the syntactic structure of songs produced by a bird prior to and after a manipulation such as ablation of a brain region or infusion of pharmacological agents, in addition to several other purposes. As we will discuss later, this syntactic structure is highly stereotyped in songbirds and driven by neurons firing in sequential order in particular regions of the songbird's brain. On the cellular level, the telencephalic nucleus HVC (proper name) within the songbird analogue of the mammalian pre-motor cortex is situated at a critical point in the pattern-generating premotor brain circuitry of oscine songbirds. This nucleus is of extreme importance to the songbird and produces stereotyped instructions through the motor pathway leading to precise, learned vocalization by songbirds. HVC contains three populations of neurons that are interconnected, with specific patterns of excitatory and inhibitory connectivity. Characterizing the neurons in HVC is a very important requirement for decoding the neural code of the birdsong. We performed whole-cell current clamp recordings on HVC neurons within brain slices to examine their intrinsic firing properties and determine which ionic currents are responsible for their characteristic firing patterns. We also developed conductance-based models for the different neurons and calibrated the models using data from our brain slice work. These models were then used to generate predictions about the makeup of the ionic currents that are responsible for the different responses to stimuli. These predictions were then tested and verified in the slice using pharmacological manipulations. Our results are an improved characterization of the HVC neurons responsible for song production in the songbird, which are the key ingredients in understanding the HVC network.
We then developed prototype neural architectures of the HVC that can produce the patterns of sequential neural activity exhibited by the three types of HVC neurons during singing. Our networks consist of microcircuits of interconnected neurons which are active during different syllables of the song. The various networks that we consider assign different roles to each of the HVC neuron types in the production of the sequential activity pattern, and show great flexibility in the connectivity patterns among the neuron types. The model networks developed provide key insights into how the different types of HVC neurons can be used for sequence generation. The significance of the work presented in this dissertation is that it helps elucidate the neural mechanisms behind HVC activity. The in vitro studies we performed in brain slices and the models we developed provide critical pieces to the puzzle of sequential behavior.
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7347
- Format
- Thesis
- Title
- Theories on Group Variable Selection in Multivariate Regression Models.
- Creator
-
Ha, Seung-Yeon, She, Yiyuan, Okten, Giray, Huffer, Fred, Sinha, Debajyoti, Department of Statistics, Florida State University
- Abstract/Description
-
We study group variable selection in the multivariate regression model. Group variable selection is equivalent to selecting the non-zero rows of the coefficient matrix, since there are multiple response variables and thus if one predictor is irrelevant to estimation then the corresponding row must be zero. In the high-dimensional setup, shrinkage estimation methods are applicable and guarantee smaller MSE than OLS according to the James-Stein phenomenon (1961). As one class of shrinkage methods, we study penalized least squares estimation for group variable selection. In particular, we study L0 regularization and L0 + L2 regularization with the purpose of obtaining accurate prediction and consistent feature selection, and use the corresponding computational procedures, Hard TISP and Hard-Ridge TISP (She, 2009), to address the numerical difficulties. These regularization methods show better performance in both prediction and selection than the Lasso (L1 regularization), one of the most popular penalized least squares methods. L0 achieves the same optimal rate of prediction loss and estimation loss as the Lasso, but it requires no restriction on the design matrix or sparsity for controlling the prediction error and a more relaxed condition than the Lasso for controlling the estimation error. Also, for selection consistency, it requires a much more relaxed incoherence condition, which bounds the correlation between the relevant and irrelevant subsets of predictors. Therefore, L0 can work better than the Lasso for both prediction and sparsity recovery in practical cases where correlation is high or sparsity is not low. We study another method, L0 + L2 regularization, which uses the combined penalty of L0 and L2. For the corresponding procedure, Hard-Ridge TISP, two parameters work independently for selection and shrinkage (to enhance prediction) respectively, and therefore it gives better performance in some cases (such as low signal strength) than L0 regularization. For L0 regularization, λ works for selection but it is tuned in terms of prediction accuracy. L0 + L2 regularization gives the optimal rate of prediction and estimation errors without any restriction, when the coefficient of the L2 penalty is appropriately assigned. Furthermore, it can achieve a better rate of estimation error with an ideal choice of block-wise weights for the L2 penalty.
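Schematically, and only as an illustration (the row-wise group form below is assumed, not quoted from the dissertation), the two penalized criteria discussed above can be written as:

```latex
\hat{B}_{\ell_0} \in \arg\min_{B}\; \tfrac{1}{2}\lVert Y - XB\rVert_F^2
  + \lambda \sum_{j=1}^{p} \mathbf{1}\{\lVert b_j\rVert_2 \neq 0\},
\qquad
\hat{B}_{\ell_0+\ell_2} \in \arg\min_{B}\; \tfrac{1}{2}\lVert Y - XB\rVert_F^2
  + \lambda \sum_{j=1}^{p} \mathbf{1}\{\lVert b_j\rVert_2 \neq 0\}
  + \eta \lVert B\rVert_F^2,
```

where b_j denotes the j-th row of the coefficient matrix B.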
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7404
- Format
- Thesis
- Title
- Non-Intrusive Methods for Probabilistic Uncertainty Quantification and Global Sensitivity Analysis in Nonlinear Stochastic Phenomena.
- Creator
-
Liu, Yaning, Hussaini, M. Yousuff, Okten, Giray, Srivastava, Anuj, Sussman, Mark, Department of Mathematics, Florida State University
- Abstract/Description
-
The objective of this work is to quantify uncertainty and perform global sensitivity analysis for nonlinear models with a moderate or large number of stochastic parameters. We implement non-intrusive methods that do not require modification of the programming code of the underlying deterministic model. To avoid the curse of dimensionality, two methods, namely sampling methods and high dimensional model representation, are employed to propagate uncertainty and compute global sensitivity indices. Variance-based global sensitivity analysis identifies significant and insignificant model parameters. It also provides a basis for reducing a model's stochastic dimension by freezing identified insignificant model parameters at their nominal values. The dimension-reduced model can then be analyzed efficiently. We use uncertainty quantification and global sensitivity analysis in three applications. The first application is to the Rothermel wildland surface fire spread model, which consists of around 80 nonlinear algebraic equations and 24 parameters. We find the reduced models for the selected model outputs and apply efficient sampling methods to quantify the uncertainty. High dimensional model representation is also applied to the Rothermel model for comparison. The second application is to a recently developed biological model that describes the inflammatory host response to a bacterial infection. The model involves four nonlinear coupled ordinary differential equations and the dimension of the stochastic space is 16. We compute global sensitivity indices for all parameters and build a dimension-reduced model. The sensitivity results, combined with experiments, can improve the validity of the model. The third application quantifies the uncertainty of weather derivative models and investigates model robustness based on global sensitivity analysis. Three commonly used weather derivative models for the daily average temperature are considered. The one that is least influenced by an increase in the parametric uncertainty level is identified as robust. In summary, the following contributions are made in this dissertation: 1. The optimization of sensitivity derivative enhanced sampling that guarantees variance reduction and improved estimation of stochastic moments. 2. The combination of optimized sensitivity derivative enhanced sampling with randomized quasi-Monte Carlo sampling, and adaptive Monte Carlo sampling, to achieve higher convergence rates. 3. The construction of cut-HDMR component functions based on Gauss quadrature points, which results in a more accurate surrogate model, the derivation of an integral form of low order partial variances based on cut-HDMR, and the efficient computation of global sensitivity analysis based on cut-HDMR. 4. The application of efficient sampling methods, RS-HDMR and cut-HDMR, to the uncertainty quantification of Rothermel's wildland surface fire spread model. 5. The uncertainty quantification and global sensitivity analysis of a newly developed immune response model with parametric uncertainty. 6. The uncertainty quantification of weather derivative models and the analysis of model robustness based on global sensitivity analysis.
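The variance-based sensitivity indices described above can be estimated by plain Monte Carlo; the sketch below uses a standard Saltelli-style pick-freeze estimator for first-order indices and is an illustration only (the function and argument names are assumptions, not code from the dissertation).

```python
import numpy as np

def first_order_sobol(model, dim, n_samples, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices for a
    model that acts row-wise on points in the unit hypercube [0, 1]^dim."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, dim))
    B = rng.random((n_samples, dim))
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]), ddof=1)
    indices = np.empty(dim)
    for i in range(dim):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]                 # vary only the i-th input
        indices[i] = np.mean(fB * (model(AB_i) - fA)) / total_var
    return indices

# Toy additive model: true indices are 0.8 and 0.2.
# print(first_order_sobol(lambda x: x[:, 0] + 0.5 * x[:, 1], dim=2, n_samples=100_000))
```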
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-8681
- Format
- Thesis
- Title
- Jump Dependence and Multidimensional Default Risk: A New Class of Structural Models with Stochastic Intensities.
- Creator
-
Garreau, Pierre, Kercheval, Alec N., Marquis, Milton H., Beaumont, Paul M., Kopriva, David A., Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
This thesis presents a new structural framework for multidimensional default risk. The time of default is the first jump of the log-returns of the stock price of a firm below a stochastic default level. When the stock price is an exponential Lévy process, this new formulation is equivalent to a default model with stochastic intensity where the intensity process is parametrized by a Lévy measure. This framework calibrates well to various term structures of credit default swaps. Furthermore, the dependence between the default times of firms within a basket of credit securities is the result of the jump dependence of their respective stock prices: this class of models makes the link between the Equity and Credit markets. As an application, we show the valuation of a first-to-default swap. To motivate this new framework, we compute the default probability in a traditional structural model of default where the firm value follows a general Lévy process. This is made possible via the resolution of a partial integro-differential equation (PIDE). We solve this equation numerically using a spectral element method based on the approximation of the solution with high order polynomials described in (Garreau & Kopriva, 2013). This method is able to handle the sharp kernels in the integral term. It is faster than the competing numerical Laplace transform methods used for first passage time problems, and can be used to compute the price of exotic options with barriers. This PIDE approach does not, however, extend well to higher dimensions. To understand the joint default of our new framework, we investigate the dependence structures of Lévy processes. We show that for two one-dimensional Lévy processes to form a two-dimensional Lévy process, their joint survival times need to satisfy a two-dimensional version of the memoryless property. We make the link with bivariate exponential random variables and the Marshall-Olkin copula. This result yields a necessary construction of dependent Lévy processes and a characterization theorem for Poisson random measures, and has important ramifications for default models with jointly conditionally Poisson processes.
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-8555
- Format
- Thesis
- Title
- Optimization Algorithms on Riemannian Manifolds with Applications.
- Creator
-
Huang, Wen, Gallivan, Kyle A., Absil, Pierre-Antoine, Duke, Dennis, Okten, Giray, Klassen, Eric P., Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation generalizes three well-known unconstrained optimization approaches for Rn to solve optimization problems with constraints that can be viewed as a d-dimensional Riemannian manifold, to obtain the Riemannian Broyden family of methods, the Riemannian symmetric rank-one trust region method, and the Riemannian gradient sampling method. The generalization relies on basic differential geometric concepts, such as tangent spaces, Riemannian metrics, and the Riemannian gradient, as well as on the more recent notions of (first-order) retraction and vector transport. The effectiveness of the methods and techniques for their efficient implementation are derived and evaluated. Basic experiments and applications are used to illustrate the value of the proposed methods. Both the Riemannian symmetric rank-one trust region method and the RBroyden family of methods are generalized from Euclidean quasi-Newton optimization methods, in which a Hessian approximation exploits the well-known secant condition. The generalization of the secant condition and the associated update formulas that define quasi-Newton methods to the Riemannian setting is a key result of this dissertation. The dissertation also contains convergence theory for these methods. The Riemannian symmetric rank-one trust region method is shown to converge globally to a stationary point and (d+1)-step q-superlinearly to a minimizer of the objective function. The RBroyden family of methods is shown to converge globally and q-superlinearly to a minimizer of a retraction-convex objective function. A condition, called the locking condition, on vector transport and retraction that guarantees convergence for the RBroyden family of methods and facilitates efficient computation is derived and analyzed. The Dennis-Moré sufficient and necessary conditions for superlinear convergence can be generalized to the Riemannian setting in multiple ways. This dissertation generalizes them in a novel manner that is applicable to both Riemannian optimization problems and root finding for a vector field on a Riemannian manifold. The convergence analyses of the Riemannian symmetric rank-one trust region method and the RBroyden family of methods assume a smooth objective function. For partly smooth Lipschitz continuous objective functions, a variation of one of the RBroyden family methods, RBFGS, is shown to work well empirically. In addition, the Riemannian gradient sampling method is shown to work well empirically for both a Lipschitz continuous and a non-Lipschitz continuous objective function associated with the important application of nonlinear dimension reduction. Efficient and effective implementations for a manifold in Rn, a quotient manifold of a total manifold in Rn, and a product of manifolds are presented. Results include efficient representations and operations of elements in a manifold, tangent vectors, linear operators, retractions and vector transports. Novel techniques for constructing and computing multiple kinds of vector transports are derived. In addition, the implementation details of all required objects for optimization on four manifolds, the Stiefel manifold, the sphere, the orthogonal group and the Grassmann manifold, are presented.
Basic numerical experiments for the Brockett cost function on the Stiefel manifold, the Rayleigh quotient on the Grassmann manifold and the minmax problem on the sphere (Lipschitz and non-Lipschitz forms) are used to illustrate the performance of the proposed methods and compare with existing optimization methods on manifolds. Three applications that have smooth cost functions, Riemannian optimization for elastic shape analysis, a joint diagonalization problem for independent component analysis and a synchronization of rotations problem, are used to show the advantages of the proposed methods. A secant-based nonlinear dimension reduction problem with a partly smooth function is used to show the advantages of the Riemannian gradient sampling method.
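For reference, the Euclidean secant condition on the Hessian approximation that these quasi-Newton generalizations start from is the standard one below; the Riemannian versions studied in the dissertation additionally involve retraction and vector transport.

```latex
B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
```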
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-8809
- Format
- Thesis
- Title
- Pricing and Hedging Derivatives with Sharp Profiles Using Tuned High Resolution Finite Difference Schemes.
- Creator
-
Islim, Ahmed Derar, Kopriva, David A., Winn, Alice, Kercheval, Alec N., Ewald, Brian, Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
We price and hedge different financial derivatives with sharp profiles by solving the corresponding advection-diffusion-reaction partial differential equation using new high resolution finite difference schemes, which show superior numerical advantages over standard finite difference methods. High order finite difference methods, which are commonly used techniques in the computational finance literature, fail to handle the discontinuities in the payoff functions of derivatives with discontinuous payoffs, like digital options. Their numerical solutions produce spurious oscillations in the neighborhood of the discontinuities, which make the numerical derivative prices and hedges impractical. Hence, we extend the linear finite difference methods to overcome these difficulties by developing high resolution non-linear schemes that resolve these discontinuities and facilitate pricing and hedging these options with higher accuracy. These approximations detect the discontinuous profiles automatically using non-linear functions, called limiters, and smooth discontinuities minimally and locally to produce non-oscillatory prices and Greeks with high resolution. These limiters are modified and more relaxed versions of standard limiting functions from the fluid dynamics literature, adapted to accommodate the extra physical diffusion (volatility) in financial problems. We prove that this family of new schemes is total variation diminishing (TVD), which guarantees non-oscillatory solutions. Also, we deduce and illustrate the limiting functions' ranges and characteristics that allow the TVD condition to hold. We test these methods to price and hedge financial derivatives with digital-like profiles under the Black-Scholes-Merton (BSM), constant elasticity of variance (CEV) and Heath-Jarrow-Morton (HJM) models. More specifically, we price and hedge digital options under the BSM and CEV models, and we price bonds under the HJM model. Finally, we price supershare and gap options under the BSM model. The new limiters we developed produce higher accuracy profiles (solutions) for the option prices and hedges than standard finite difference schemes or standard limiters, and guarantee non-oscillatory solutions.
- Date Issued
- 2014
- Identifier
- FSU_migr_etd-8813
- Format
- Thesis
- Title
- The Frequentist Performance of Some Bayesian Confidence Intervals for the Survival Function.
- Creator
-
Tao, Yingfeng, Huffer, Fred, Okten, Giray, Sinha, Debajyoti, Niu, Xufeng, Department of Statistics, Florida State University
- Abstract/Description
-
Estimation of a survival function is a very important topic in survival analysis with contributions from many authors. This dissertation considers estimation of confidence intervals for the survival function based on right-censored or interval-censored survival data. Most of the methods for estimating pointwise confidence intervals and simultaneous confidence bands of the survival function are reviewed in this dissertation. In the right-censored case, almost all confidence intervals are based in some way on the Kaplan-Meier estimator first proposed by Kaplan and Meier (1958) and widely used as the nonparametric estimator in the presence of right-censored data. For interval-censored data, the Turnbull estimator (Turnbull (1974)) plays a similar role. For a class of Bayesian models involving Dirichlet priors, Doss and Huffer (2003) suggested several simulation techniques to approximate the posterior distribution of the survival function by using Markov chain Monte Carlo or sequential importance sampling. These techniques lead to probability intervals for the survival function (at arbitrary time points) and its quantiles for both the right-censored and interval-censored cases. This dissertation will examine the frequentist properties and general performance of these probability intervals when the prior is non-informative. Simulation studies will be used to compare these probability intervals with other published approaches. Extensions of the Doss-Huffer approach are given for constructing simultaneous confidence bands for the survival function and for computing approximate confidence intervals for the survival function based on Edgeworth expansions using posterior moments. The performance of these extensions is studied by simulation.
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7624
- Format
- Thesis
- Title
- Mathematical Models of Dengue Fever and Measures to Control It.
- Creator
-
Shen, Yingyun, Mesterton-Gibbons, Mike, Schwartz, Daniel, Okten, Giray, Cogan, Nick, Ewald, Brian, Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation, we build a compartment model to investigate the dynamics of spread of dengue fever in both human and mosquito populations. We study the demographic factors that influence equilibrium prevalence, and perform a sensitivity analysis on the basic reproduction number. Among several intervention measures, the effects of two potential control methods for dengue fever are estimated: introducing Wolbachia to the mosquito population and introducing vaccines to the human population. A stochastic model for transmission of dengue fever is also built to explore the effect of some demographic factors.
- Date Issued
- 2014
- Identifier
- FSU_migr_etd-9093
- Format
- Thesis
- Title
- A Spectral Element Method to Price Single and Multi-Asset European Options.
- Creator
-
Zhu, Wuming, Kopriva, David A., Huffer, Fred, Case, Bettye Anne, Kercheval, Alec N., Okten, Giray, Wang, Xiaoming, Department of Mathematics, Florida State University
- Abstract/Description
-
We develop a spectral element method to price European options under the Black-Scholes model, Merton's jump diffusion model, and Heston's stochastic volatility model with one or two assets. The method uses piecewise high order Legendre polynomial expansions to approximate the option price represented pointwise on a Gauss-Lobatto mesh within each element. This piecewise polynomial approximation allows an exact representation of the non-smooth initial condition. For options with one asset under the jump diffusion model, the convolution integral is approximated by high order Gauss-Lobatto quadratures. A second order implicit/explicit (IMEX) approximation is used to integrate in time, with the convolution integral integrated explicitly. The use of the IMEX approximation in time means that only a block diagonal, rather than full, system of equations needs to be solved at each time step. For options with two variables, i.e., two assets under the Black-Scholes model or one asset under the stochastic volatility model, the domain is subdivided into quadrilateral elements. Within each element, the expansion basis functions are chosen to be tensor products of the Legendre polynomials. Three iterative methods are investigated to solve the system of equations at each time step with the corresponding second order time integration schemes, i.e., IMEX and Crank-Nicolson. Also, the boundary conditions are carefully studied for the stochastic volatility model. The method is spectrally accurate (exponentially convergent) in space and second order accurate in time for European options under all three models. Spectral accuracy is observed in not only the solution, but also in the Greeks.
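For reference, the single-asset Black-Scholes equation that such solvers discretize in space and integrate in time is the standard one below; the jump-diffusion and stochastic volatility cases add an integral term and extra variance terms, respectively.

```latex
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - rV = 0.
```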
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-0513
- Format
- Thesis
- Title
- Quasi-Monte Carlo and Genetic Algorithms with Applications to Endogenous Mortgage Rate Computation.
- Creator
-
Shah, Manan, Okten, Giray, Goncharov, Yevgeny, Srinivasan, Ashok, Bellenot, Steve, Case, Bettye Anne, Kercheval, Alec, Kopriva, David, Nichols, Warren, Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation, we introduce a genetic algorithm approach to estimate the star discrepancy of a point set. This algorithm allows for the estimation of the star discrepancy in dimensions larger than seven, something that could not be done adequately by other existing methods. Then, we introduce a class of random digit-permutations for the Halton sequence and show that these permutations yield comparable or better results than their deterministic counterparts in any number of dimensions for the test problems considered. Next, we use randomized quasi-Monte Carlo methods to numerically solve a one-factor mortgage model expressed as a stochastic fixed-point problem. Finally, we show that this mortgage model coincides with and is computationally faster than Citigroup's MOATS model, which is based on a binomial tree approach.
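The Halton construction referred to above is easy to sketch; the code below shows the unscrambled sequence with an optional digit permutation, under the simplifying assumption that any permutation fixes the digit 0, and is not taken from the dissertation.

```python
import numpy as np

def radical_inverse(n, base, perm=None):
    """Radical inverse of the integer n in the given base, with an optional
    digit permutation (assumed to fix 0 so trailing zeros stay zero)."""
    x, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        if perm is not None:
            digit = perm[digit]
        denom *= base
        x += digit / denom
    return x

def halton(n_points, bases=(2, 3, 5)):
    """First n_points of the (unscrambled) Halton sequence in len(bases) dimensions."""
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n_points + 1)])

# Example: halton(4) returns 4 points in [0, 1)^3 built from bases 2, 3 and 5.
```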
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-0297
- Format
- Thesis
- Title
- Ensemble Methods for Capturing Dynamics of Limit Order Books.
- Creator
-
Wang, Jian, Zhang, Jinfeng, Ökten, Giray, Kercheval, Alec N., Mio, Washington, Capstick, Simon, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
With the rapid development of information technology, the limit order book (LOB) mechanism has come to prevail in today's financial market. In this work, we propose ensemble machine learning architectures for capturing the dynamics of high-frequency limit order books, such as predicting price spread crossing opportunities in a future time interval. The work is data-driven, so experiments with five real-time stock data sets from NASDAQ, measured at nanosecond resolution, are conducted. The models are trained and validated on training and validation data sets. Compared with other models, such as logistic regression and support vector machines (SVM), our out-of-sample testing results show that ensemble methods have better performance in both statistical measurements and computational efficiency. A simple trading strategy that we devised from our models shows good profit and loss (P&L) results. Although this work focuses on limit order books, similar frameworks and processes can be extended to other classification research areas. Keywords: limit order books, high-frequency trading, data analysis, ensemble methods, F1 score.
- Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Wang_fsu_0071E_14047
- Format
- Thesis
- Title
- A Riemannian Approach for Computing Geodesics in Elastic Shape Space and Its Applications.
- Creator
-
You, Yaqing, Gallivan, Kyle A., Absil, Pierre-Antoine, Erlebacher, Gordon, Ökten, Giray, Sussman, Mark, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This dissertation proposes a Riemannian approach for computing geodesics for closed curves in elastic shape space. The application of two Riemannian unconstrained optimization algorithms, the Riemannian Steepest Descent (RSD) algorithm and the Limited-memory Riemannian Broyden-Fletcher-Goldfarb-Shanno (LRBFGS) algorithm, is discussed in this dissertation. The application relies on the definition and computation of basic differential geometric components, namely tangent spaces and tangent vectors, Riemannian metrics, the Riemannian gradient, as well as retraction and vector transport. The differences between this Riemannian approach to computing closed-curve geodesics and accurate geodesic distances, the existing path-straightening algorithm, and the existing Riemannian approach to approximating distances between closed shapes are also discussed in this dissertation. This dissertation summarizes the implementation details and techniques for both Riemannian algorithms to achieve the greatest efficiency. This dissertation also contains basic experiments and applications that illustrate the value of the proposed algorithms, along with comparison tests to the existing alternative approaches. It has been demonstrated by various tests that this proposed approach is superior in terms of time and performance compared to a state-of-the-art distance computation algorithm, and has better performance in applications of shape distance when compared to the distance approximation algorithm. This dissertation applies the Riemannian geodesic computation algorithm to calculate the Karcher mean of shapes. Algorithms that generate less accurate distances and geodesics are also implemented to compute the shape mean. Test results demonstrate that the proposed algorithm has better performance at the cost of additional computation time. A hybrid algorithm is then proposed that starts with the fast, less accurate algorithm and switches to the proposed accurate algorithm to compute the gradient for the Karcher mean problem. This dissertation also applies Karcher mean computation to unsupervised learning of shapes. Several clustering algorithms are tested with the distance computation algorithm and the Karcher mean algorithm. Different versions of the Karcher mean algorithm are compared in tests. The performance of the clustering algorithms is evaluated by various performance metrics.
Show less - Date Issued
- 2018
- Identifier
- 2018_Su_You_fsu_0071E_14686
- Format
- Thesis
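The abstract above builds on tangent-space projections, Riemannian gradients, and retractions. A minimal sketch of Riemannian steepest descent on a much simpler manifold (the unit sphere, standing in for the elastic shape space) illustrates those ingredients; the quadratic objective and fixed step-size rule are illustrative assumptions, not the dissertation's algorithms:

```python
import numpy as np

def riemannian_sd_sphere(A, x0, iters=500):
    """Riemannian steepest descent for f(x) = x^T A x on the unit sphere,
    using tangent-space projection of the gradient and a normalization retraction."""
    x = x0 / np.linalg.norm(x0)
    step = 0.4 / np.linalg.eigvalsh(A).max()     # conservative fixed step size
    for _ in range(iters):
        egrad = 2.0 * A @ x                      # Euclidean gradient
        rgrad = egrad - (x @ egrad) * x          # project onto the tangent space at x
        x = x - step * rgrad                     # move against the Riemannian gradient ...
        x = x / np.linalg.norm(x)                # ... and retract back onto the sphere
    return x

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
A = M @ M.T                                      # random symmetric positive semidefinite matrix
x = riemannian_sd_sphere(A, rng.normal(size=5))
print("f at the computed minimizer:", x @ A @ x)
print("smallest eigenvalue of A   :", np.linalg.eigvalsh(A).min())
```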
- Title
- Riemannian Optimization Methods for Averaging Symmetric Positive Definite Matrices.
- Creator
-
Yuan, Xinru, Gallivan, Kyle A., Absil, Pierre-Antoine, Erlebacher, Gordon, Ökten, Giray, Bauer, Martin, Florida State University, College of Arts and Sciences, Department of...
Show moreYuan, Xinru, Gallivan, Kyle A., Absil, Pierre-Antoine, Erlebacher, Gordon, Ökten, Giray, Bauer, Martin, Florida State University, College of Arts and Sciences, Department of Mathematics
Show less - Abstract/Description
-
Symmetric positive definite (SPD) matrices have become fundamental computational objects in many areas. It is often of interest to average a collection of symmetric positive definite matrices. This dissertation investigates different averaging techniques for symmetric positive definite matrices. We use recent developments in Riemannian optimization to develop efficient and robust algorithms to handle this computational task. We provide methods to produce efficient numerical representations of...
Show moreSymmetric positive definite (SPD) matrices have become fundamental computational objects in many areas. It is often of interest to average a collection of symmetric positive definite matrices. This dissertation investigates different averaging techniques for symmetric positive definite matrices. We use recent developments in Riemannian optimization to develop efficient and robust algorithms to handle this computational task. We provide methods to produce efficient numerical representations of geometric objects that are required for Riemannian optimization methods on the manifold of symmetric positive definite matrices. In addition, we offer theoretical and empirical suggestions on how to choose between various methods and parameters. In the end, we evaluate the performance of different averaging techniques in applications.
Show less - Date Issued
- 2018
- Identifier
- 2018_Su_Yuan_fsu_0071E_14736
- Format
- Thesis
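A minimal sketch of one standard way to average SPD matrices, the fixed-point iteration for the Karcher mean under the affine-invariant metric; this is generic textbook machinery rather than the specific methods developed in the dissertation, and the undamped iteration assumes reasonably well-conditioned, not too widely dispersed inputs:

```python
import numpy as np

def _sym_fun(S, fun):
    """Apply a scalar function to a symmetric positive definite matrix via eigendecomposition."""
    S = 0.5 * (S + S.T)                            # guard against tiny numerical asymmetry
    w, V = np.linalg.eigh(S)
    return (V * fun(w)) @ V.T

def karcher_mean_spd(mats, iters=50):
    """Undamped fixed-point iteration for the Karcher (Frechet) mean under the affine-invariant metric."""
    X = sum(mats) / len(mats)                      # arithmetic mean as a starting point
    for _ in range(iters):
        Xh = _sym_fun(X, np.sqrt)                  # X^{1/2}
        Xih = _sym_fun(X, lambda w: 1.0 / np.sqrt(w))   # X^{-1/2}
        T = sum(_sym_fun(Xih @ A @ Xih, np.log) for A in mats) / len(mats)
        X = Xh @ _sym_fun(T, np.exp) @ Xh
    return X

rng = np.random.default_rng(2)
mats = []
for _ in range(4):
    M = rng.normal(size=(3, 3))
    mats.append(M @ M.T + 3 * np.eye(3))           # random, well-conditioned SPD matrices
print(karcher_mean_spd(mats))
```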
- Title
- Multiple Imputation Methods for Large Multi-Scale Data Sets with Missing or Suppressed Values.
- Creator
-
Cao, Jian, Beaumont, Paul M., Duke, D. W., Norrbin, Stefan C., Ökten, Giray, Cano-Urbina, Javier, Florida State University, College of Social Sciences and Public Policy,...
Show moreCao, Jian, Beaumont, Paul M., Duke, D. W., Norrbin, Stefan C., Ökten, Giray, Cano-Urbina, Javier, Florida State University, College of Social Sciences and Public Policy, Department of Economics
Show less - Abstract/Description
-
Without proper treatment, direct analysis on data sets with missing or suppressed values can lead to biased results. Among all of the missing data handling methods, multiple imputation (MI) methods are regarded as the state of the art. The multiple imputed data sets can, on the one hand, generate unbiased estimates, and on the other hand, provide a reliable way to adjust standard errors based on missing data uncertainty. Despite many advantages, existing MI methods have poor performance on...
Show moreWithout proper treatment, direct analysis on data sets with missing or suppressed values can lead to biased results. Among all of the missing data handling methods, multiple imputation (MI) methods are regarded as the state of the art. The multiple imputed data sets can, on the one hand, generate unbiased estimates, and on the other hand, provide a reliable way to adjust standard errors based on missing data uncertainty. Despite many advantages, existing MI methods have poor performance on complicated Multi-Scale data, especially when the data set is large. The large data set of interest to us is the Quarterly Census of Employment and Wage (QCEW), which records the employment and wages of every establishment in the US. These detailed data are aggregated up through three scales: industry structure, geographic levels and time. The size of the QCEW data is as large as 210 ✕ 2217 ✕ 3193 ≈ 1.5 billion observations. For privacy reasons, the data are heavily suppressed and this missingness could appear anywhere in this complicated structure. The existing methods are either accurate or fast but not both in handling the QCEW data. Our goal is to develop an MI method which is capable of handling the missing value problem of large multi-scale data sets both accurately and efficiently. This research addresses this goal in three directions. First, I improve the accuracy of the fastest MI method, the Bootstrapping based Expectation Maximization (EMB) algorithm, by equipping it with a Multi-Scale Updating step. This updating step uses the information from the singular covariance matrix to take multi-scale structure into account and to simulate more accurate imputations. Second, I improve the MI method by using a Quasi Monte Carlo technique to accelerate its convergence speed. Finally, I develop a Sequential Parallel Imputation method which can detect the structure and missing pattern of large data sets, and partition them into smaller data sets automatically. The resulting Parallel Sequential Multi-Scale Bootstrapping Expectation Maximization Multiple Imputation (PSI-MBEMMI) method is accurate, very fast, and can be applied to very large data sets.
Show less - Date Issued
- 2018
- Identifier
- 2018_Su_Cao_fsu_0071E_14706
- Format
- Thesis
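A minimal sketch of multiple imputation with Rubin's pooling rules in a deliberately simple setting (one fully observed predictor, one partially missing variable); the data-generating model and the bootstrap-then-regress imputer are hypothetical and far simpler than the multi-scale QCEW structure treated in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.8, size=n)
miss = rng.random(n) < 0.3                      # 30% of y suppressed/missing
y_obs = np.where(miss, np.nan, y)

def impute_once(x, y_obs, miss, rng):
    """One bootstrap-then-regress imputation of the missing y values."""
    idx = np.flatnonzero(~miss)
    boot = rng.choice(idx, size=idx.size, replace=True)
    X = np.column_stack([np.ones(boot.size), x[boot]])
    beta, *_ = np.linalg.lstsq(X, y_obs[boot], rcond=None)
    resid_sd = np.std(y_obs[boot] - X @ beta)
    y_imp = y_obs.copy()
    y_imp[miss] = beta[0] + beta[1] * x[miss] + rng.normal(scale=resid_sd, size=miss.sum())
    return y_imp

m = 20
est, var = [], []
for _ in range(m):
    y_imp = impute_once(x, y_obs, miss, rng)
    est.append(y_imp.mean())                    # target estimate: the mean of y
    var.append(y_imp.var(ddof=1) / n)           # within-imputation variance of that mean
W, B = np.mean(var), np.var(est, ddof=1)
print("pooled mean:", np.mean(est), " total variance:", W + (1 + 1 / m) * B)  # Rubin's rules
```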
- Title
- Evolutionary Dynamics of Bacterial Persistence under Nutrient/Antibiotic Actions.
- Creator
-
Ebadi, Sepideh, Cogan, Nicholas G., Beerli, Peter, Bertram, R., Ökten, Giray, Vo, Theodore, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Diseases such as tuberculosis, chronic pneumonia, and inner ear infections are caused by bacterial biofilms. Biofilms can form on any surface such as teeth, floors, or drains. Many studies show that it is much more difficult to kill the bacteria in a biofilm than planktonic bacteria because the structure of biofilms offers additional layered protection against diffusible antimicrobials. Among the bacteria in planktonic-biofilm populations, persisters is a subpopulation that is tolerant to...
Show moreDiseases such as tuberculosis, chronic pneumonia, and inner ear infections are caused by bacterial biofilms. Biofilms can form on any surface such as teeth, floors, or drains. Many studies show that it is much more difficult to kill the bacteria in a biofilm than planktonic bacteria because the structure of biofilms offers additional layered protection against diffusible antimicrobials. Among the bacteria in planktonic-biofilm populations, persisters form a subpopulation that is tolerant to antibiotics and appears to play a crucial role in survival dynamics. Understanding the dynamics of persister cells is of fundamental importance for developing effective treatments. In this research, we developed a method to better describe the behavior of persistent bacteria through specific experiments and mathematical modeling. We derived an accurate mathematical model by tightly coupling experimental data and theoretical model development. By focusing on dynamic changes in antibiotic tolerance owing to phenotypic differences between bacteria, our experiments explored specific conditions that are relevant to specifying parameters in our model. We provide deeper intuition for experiments that address several current hypotheses regarding phenotypic expression. By comparing our theoretical model to experimental data, we determined a parameter regime where we obtain quantitative agreement with our model. This validation supports our modeling approach and our theoretical predictions. This model can be used to enhance the development of new antibiotic treatment protocols.
Show less - Date Issued
- 2018
- Identifier
- 2018_Sp_Ebadi_fsu_0071E_14324
- Format
- Thesis
- Title
- Non-Parametric and Semi-Parametric Estimation and Inference with Applications to Finance and Bioinformatics.
- Creator
-
Tran, Hoang Trong, She, Yiyuan, Ökten, Giray, Chicken, Eric, Niu, Xufeng, Tao, Minjing, Florida State University, College of Arts and Sciences, Department of Statistics
- Abstract/Description
-
In this dissertation, we develop tools from non-parametric and semi-parametric statistics to perform estimation and inference. In the first chapter, we propose a new method called Non-Parametric Outlier Identification and Smoothing (NOIS), which robustly smooths stock prices, automatically detects outliers and constructs pointwise confidence bands around the resulting curves. In real- world examples of high-frequency data, NOIS successfully detects erroneous prices as outliers and uncovers...
Show moreIn this dissertation, we develop tools from non-parametric and semi-parametric statistics to perform estimation and inference. In the first chapter, we propose a new method called Non-Parametric Outlier Identification and Smoothing (NOIS), which robustly smooths stock prices, automatically detects outliers and constructs pointwise confidence bands around the resulting curves. In real-world examples of high-frequency data, NOIS successfully detects erroneous prices as outliers and uncovers borderline cases for further study. NOIS can also highlight notable features and reveal new insights in inter-day chart patterns. In the second chapter, we focus on a method for non-parametric inference called empirical likelihood (EL). Computation of EL in the case of a fixed parameter vector is a convex optimization problem easily solved by Lagrange multipliers. In the case of a composite empirical likelihood (CEL) test where certain components of the parameter vector are free to vary, the optimization problem becomes non-convex and much more difficult. We propose a new algorithm for the CEL problem named the BI-Linear Algorithm for Composite EmPirical Likelihood (BICEP). We extend the BICEP framework by introducing a new method called Robust Empirical Likelihood (REL) that detects outliers and greatly improves the inference in comparison to the non-robust EL. The REL method is combined with CEL by the TRI-Linear Algorithm for Composite EmPirical Likelihood (TRICEP). We demonstrate the efficacy of the proposed methods on simulated and real world datasets. We present a novel semi-parametric method for variable selection with interesting biological applications in the final chapter. In bioinformatics datasets the experimental units often have structured relationships that are non-linear and hierarchical. For example, in microbiome data the individual taxonomic units are connected to each other through a phylogenetic tree. Conventional techniques for selecting relevant taxa either do not account for the pairwise dependencies between taxa, or assume linear relationships. In this work we propose a new framework for variable selection called Semi-Parametric Affinity Based Selection (SPAS), which has the flexibility to utilize structured and non-parametric relationships between variables. In synthetic data experiments SPAS outperforms existing methods and on real world microbiome datasets it selects taxa according to their phylogenetic similarities.
Show less - Date Issued
- 2018
- Identifier
- 2018_Sp_Tran_fsu_0071E_14477
- Format
- Thesis
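A minimal sketch of the fixed-parameter empirical likelihood computation mentioned in the abstract above (the convex case solved via a Lagrange multiplier), here for a univariate mean; the bisection tolerance and the test values are illustrative:

```python
import numpy as np

def el_loglik_ratio(x, mu0, tol=1e-10):
    """Empirical likelihood ratio statistic for H0: E[X] = mu0 (asymptotically chi^2 with 1 df)."""
    z = x - mu0
    if z.max() <= 0 or z.min() >= 0:
        return np.inf                        # mu0 lies outside the convex hull of the data
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    lo = -1.0 / z.max() + 1e-9               # keep every weight factor 1 + lam*z_i positive
    hi = -1.0 / z.min() - 1e-9
    for _ in range(200):                     # bisection: g is strictly decreasing in lam
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * z))   # -2 log of the empirical likelihood ratio

rng = np.random.default_rng(4)
x = rng.normal(loc=0.3, size=200)
print("EL statistic at the true mean 0.3:", el_loglik_ratio(x, 0.3))
print("EL statistic at mu0 = 0.0        :", el_loglik_ratio(x, 0.0))
```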
- Title
- Metric Learning for Shape Classification: A Fast and Efficient Approach with Monte Carlo Methods.
- Creator
-
Cellat, Serdar, Mio, Washington, Ökten, Giray, Aggarwal, Sudhir, Cogan, Nicholas G., Jain, Harsh Vardhan, Florida State University, College of Arts and Sciences, Department of...
Show moreCellat, Serdar, Mio, Washington, Ökten, Giray, Aggarwal, Sudhir, Cogan, Nicholas G., Jain, Harsh Vardhan, Florida State University, College of Arts and Sciences, Department of Mathematics
Show less - Abstract/Description
-
Quantifying shape variation within a group of individuals, identifying morphological contrasts between populations and categorizing these groups according to morphological similarities and dissimilarities are central problems in developmental evolutionary biology and genetics. In this dissertation, we present an approach to optimal shape categorization through the use of a new family of metrics for shapes represented by a finite collection of landmarks. We develop a technique to identify...
Show moreQuantifying shape variation within a group of individuals, identifying morphological contrasts between populations and categorizing these groups according to morphological similarities and dissimilarities are central problems in developmental evolutionary biology and genetics. In this dissertation, we present an approach to optimal shape categorization through the use of a new family of metrics for shapes represented by a finite collection of landmarks. We develop a technique to identify metrics that optimally differentiate and categorize shapes using Monte Carlo based optimization methods. We discuss the theory and the practice of the method and apply it to the categorization of 62 mouse offspring based on the shape of their skulls. We also create a taxonomic classification tree for multiple species of fruit flies given the shape of their wings. The results of these experiments validate our method.
Show less - Date Issued
- 2018
- Identifier
- 2018_Sp_Cellat_fsu_0071E_14295
- Format
- Thesis
- Title
- Optimal Portfolio Execution under Time-Varying Liquidity Constraints.
- Creator
-
Lin, Hua-Yi, Fahim, Arash, Atkins, Jennifer, Kercheval, Alec N., Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
The problem of optimal portfolio execution has become one of the most important problems in the area of financial mathematics. Over the past two decades, numerous researchers have developed a variety of different models to address this problem. In this dissertation, we extend the LOB (Limit Order Book) model proposed by Obizhaeva and Wang (2013) by incorporating a more realistic assumption on the order book depth; the amount of liquidity provided by a LOB market is finite at all times. We use...
Show moreThe problem of optimal portfolio execution has become one of the most important problems in the area of financial mathematics. Over the past two decades, numerous researchers have developed a variety of different models to address this problem. In this dissertation, we extend the LOB (Limit Order Book) model proposed by Obizhaeva and Wang (2013) by incorporating a more realistic assumption on the order book depth; the amount of liquidity provided by a LOB market is finite at all times. We use an algorithmic approach to solve the problem of optimal execution under time-varying constraints on the depth of a LOB. For the simplest case where the order book depth stays at a fixed level for the entire trading horizon, we reduce the optimal execution problem into a one-dimensional root-finding problem which can be readily solved by standard numerical algorithms. When the depth of the LOB is monotone in time, we first apply the KKT (Karush-Kuhn-Tucker) conditions to narrow down the set of candidate strategies and then use a dichotomy-based search algorithm to pin down the optimal one. For the general case that the order book depth doesn't exhibit any particular pattern, we start from the optimal strategy subject to no liquidity constraints and iterate over execution strategy by sequentially adding more constraints to the problem in a specific fashion until primal feasibility is achieved. Numerical experiments indicate that our algorithms give comparable results to those of current existing convex optimization toolbox CVXOPT with significantly lower time complexity.
Show less - Date Issued
- 2018
- Identifier
- 2018_Sp_Lin_fsu_0071E_14349
- Format
- Thesis
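For the constant-depth case, the abstract above reduces optimal execution to a one-dimensional root-finding problem solvable by standard numerical algorithms. A generic bisection routine of the kind that could be used is sketched below; the function being solved is a hypothetical stand-in, not the dissertation's first-order condition:

```python
def bisect(f, lo, hi, tol=1e-12, max_iter=200):
    """Standard bisection: assumes f(lo) and f(hi) have opposite signs; returns an approximate root."""
    flo = f(lo)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if fmid == 0.0 or hi - lo < tol:
            return mid
        if flo * fmid < 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Hypothetical stand-in for a one-dimensional optimality condition:
print(bisect(lambda q: q**3 + 2.0 * q - 1.0, 0.0, 1.0))
```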
- Title
- High-Order, Efficient, Numerical Algorithms for Integration in Manifolds Implicitly Defined by Level Sets.
- Creator
-
Khanmohamadi, Omid, Sussman, Mark, Plewa, Tomasz, Moore, M. Nicholas J. (Matthew Nicholas J.), Ökten, Giray, Florida State University, College of Arts and Sciences, Department...
Show moreKhanmohamadi, Omid, Sussman, Mark, Plewa, Tomasz, Moore, M. Nicholas J. (Matthew Nicholas J.), Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
Show less - Abstract/Description
-
New numerical algorithms are devised for high-order, efficient quadrature in domains arising from the intersection of a hyperrectangle and a manifold implicitly defined by level sets. By casting the manifold locally as the graph of a function (implicitly evaluated through a recurrence relation for the zero level set), a recursion stack is set up in which the interface and integrand information of a single dimension after another will be treated. Efficient means for the resulting dimension...
Show moreNew numerical algorithms are devised for high-order, efficient quadrature in domains arising from the intersection of a hyperrectangle and a manifold implicitly defined by level sets. By casting the manifold locally as the graph of a function (implicitly evaluated through a recurrence relation for the zero level set), a recursion stack is set up in which the interface and integrand information of a single dimension after another will be treated. Efficient means for the resulting dimension reduction process are developed, including maps for identifying lower-dimensional hyperrectangle facets, algorithms for minimal coordinate-flip vertex traversal, which, together with our multilinear-form-based derivative approximation algorithms, are used for checking a proposed integration direction on a facet, as well as algorithms for detecting interface-free sub-hyperrectangles. The multidimensional quadrature nodes generated by this method are inside their respective domains (hence, the method does not require any extension of the integrand) and the quadrature weights inherit any positivity of the underlying single-dimensional quadrature method, if present. The accuracy and efficiency of the method are demonstrated through convergence and timing studies for test cases in spaces of up to seven dimensions. The strengths and weaknesses of the method in high dimensional spaces are discussed.
Show less - Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Khanmohamadi_fsu_0071E_14013
- Format
- Thesis
- Title
- Quasi-Monte Carlo and Markov Chain Quasi-Monte Carlo Methods in Estimation and Prediction of Time Series Models.
- Creator
-
Tzeng, Yu-Ying, Ökten, Giray, Beaumont, Paul M., Srivastava, Anuj, Kercheval, Alec N., Kim, Kyounghee (Professor of Mathematics), Florida State University, College of Arts and...
Show moreTzeng, Yu-Ying, Ökten, Giray, Beaumont, Paul M., Srivastava, Anuj, Kercheval, Alec N., Kim, Kyounghee (Professor of Mathematics), Florida State University, College of Arts and Sciences, Department of Mathematics
Show less - Abstract/Description
-
Randomized quasi-Monte Carlo (RQMC) methods were first developed in mid 1990’s as a hybrid of Monte Carlo and quasi-Monte Carlo (QMC) methods. They were designed to have the superior error reduction properties of low-discrepancy sequences, but also amenable to the statistical error analysis Monte Carlo methods enjoy. RQMC methods are used successfully in applications such as option pricing, high dimensional numerical integration, and uncertainty quantification. This dissertation discusses the...
Show moreRandomized quasi-Monte Carlo (RQMC) methods were first developed in the mid-1990s as a hybrid of Monte Carlo and quasi-Monte Carlo (QMC) methods. They were designed to have the superior error reduction properties of low-discrepancy sequences, while also being amenable to the statistical error analysis that Monte Carlo methods enjoy. RQMC methods are used successfully in applications such as option pricing, high dimensional numerical integration, and uncertainty quantification. This dissertation discusses the use of RQMC and QMC methods in econometric time series analysis. In time series simulation, the two main problems are parameter estimation and forecasting. The parameter estimation problem involves the use of Markov chain Monte Carlo (MCMC) algorithms such as Metropolis-Hastings and Gibbs sampling. In Chapter 3, we use an approximately completely uniformly distributed sequence which was recently discussed by Owen et al. [2005], and an RQMC sequence introduced by Ökten [2009], in some MCMC algorithms to estimate the parameters of a Probit and SV-log-AR(1) model. Numerical results are used to compare these sequences with standard Monte Carlo simulation. In the time series forecasting literature, there was an earlier attempt to use QMC by Li and Winker [2003], which did not provide a rigorous error analysis. Chapter 4 presents how RQMC can be used in time series forecasting with its proper error analysis. Numerical results are used to compare various sequences for a simple AR(1) model. We then apply RQMC to compute the value-at-risk and expected shortfall measures for a stock portfolio whose returns follow a highly nonlinear Markov switching stochastic volatility model which does not admit analytical solutions for the returns distribution. The proper use of QMC and RQMC methods in Monte Carlo and Markov chain Monte Carlo algorithms can greatly reduce the computational error in many applications from sciences, engineering, economics and finance. This dissertation brings the proper (R)QMC methodology to time series simulation, and discusses the advantages as well as the limitations of the methodology compared to standard Monte Carlo methods.
Show less - Date Issued
- 2017
- Identifier
- FSU_SUMMER2017_Tzeng_fsu_0071E_13607
- Format
- Thesis
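A minimal sketch of randomized quasi-Monte Carlo integration using a two-dimensional Halton point set with Cranley-Patterson random shifts; the integrand and sample sizes are illustrative and unrelated to the time series models in the dissertation:

```python
import numpy as np

def van_der_corput(n, base):
    """Radical-inverse (van der Corput) value of the integer n in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton_2d(npts):
    return np.array([[van_der_corput(i, 2), van_der_corput(i, 3)] for i in range(1, npts + 1)])

f = lambda u: np.exp(u[:, 0] * u[:, 1])          # smooth test integrand on [0,1]^2

rng = np.random.default_rng(5)
pts, reps = halton_2d(1024), 20
ests = []
for _ in range(reps):                            # Cranley-Patterson random shifts modulo 1
    shifted = (pts + rng.random(2)) % 1.0
    ests.append(f(shifted).mean())
ests = np.array(ests)
print("RQMC estimate:", ests.mean(), "+/-", ests.std(ddof=1) / np.sqrt(reps))
```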
- Title
- Value-at-Risk and Expected Shortfall Estimation via Randomized Quasi-Monte Carlo Methods and Comparative Analysis.
- Creator
-
Franke, Stephen Robert, Marquis, Milton H., Ökten, Giray, Beaumont, Paul M., Fournier, Gary M., Florida State University, College of Social Sciences and Public Policy,...
Show moreFranke, Stephen Robert, Marquis, Milton H., Ökten, Giray, Beaumont, Paul M., Fournier, Gary M., Florida State University, College of Social Sciences and Public Policy, Department of Economics
Show less - Abstract/Description
-
Randomized quasi-Monte Carlo methods have been shown to offer estimates with smaller variances compared with estimates obtained with Monte Carlo. This dissertation examines the application of randomized quasi-Monte Carlo methods in the context of value-at-risk and expected shortfall, two measures of downside risk associated with financial portfolios. It finds that while the randomized quasi-Monte Carlo estimates have the variance-reduction of estimates property when applied to the...
Show moreRandomized quasi-Monte Carlo methods have been shown to offer estimates with smaller variances compared with estimates obtained with Monte Carlo. This dissertation examines the application of randomized quasi-Monte Carlo methods in the context of value-at-risk and expected shortfall, two measures of downside risk associated with financial portfolios. It finds that while the randomized quasi-Monte Carlo estimates have the variance-reduction of estimates property when applied to the aforementioned risk measures of financial portfolios, the reduced standard errors have a rate of convergence much closer to 1/√M than the potential 1/M described by the theory for the 22-day time horizon of value-at-risk and expected shortfall. The rate of convergence increased for the 8-day horizon, suggesting that the advantages of randomized quasi-Monte Carlo estimation in terms of standard error of estimates and accuracy of estimates improve for shorter time horizons of the aforementioned risk measures and are no worse for longer time horizons.
Show less - Date Issued
- 2018
- Identifier
- 2018_Fall_Franke_fsu_0071E_14840
- Format
- Thesis
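A minimal sketch of empirical value-at-risk and expected shortfall estimation, with the sampling variability of the estimators measured across independent replications; the Student-t return model is a hypothetical stand-in for the portfolio models studied in the dissertation:

```python
import numpy as np

def var_es(returns, alpha=0.99):
    """Empirical value-at-risk and expected shortfall (losses reported as positive numbers)."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return var, es

rng = np.random.default_rng(6)
# Toy 22-day portfolio returns from a heavy-tailed (Student t) distribution, 200 replications
sims = rng.standard_t(df=5, size=(200, 20_000)) * 0.02 * np.sqrt(22)
estimates = np.array([var_es(row) for row in sims])
print("VaR  mean, sampling std:", estimates[:, 0].mean(), estimates[:, 0].std(ddof=1))
print("ES   mean, sampling std:", estimates[:, 1].mean(), estimates[:, 1].std(ddof=1))
```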
- Title
- Development, Validation, and Use of an Assessment of Learning Outcomes in Introductory Linear Algebra Classes.
- Creator
-
Haider, Muhammad Qadeer, Larson, Christine (Christine J.), Ökten, Giray, Whitacre, Ian Michael, Almond, Russell G., Florida State University, College of Education, School of...
Show moreHaider, Muhammad Qadeer, Larson, Christine (Christine J.), Ökten, Giray, Whitacre, Ian Michael, Almond, Russell G., Florida State University, College of Education, School of Teacher Education
Show less - Abstract/Description
-
Inquiry-oriented teaching is a specific form of active learning gaining popularity in teaching communities. The goal of inquiry-oriented classes is to help students in gaining a conceptual understanding of the material. My research focus is to gauge students’ performance and conceptual understanding in inquiry-oriented linear algebra classes. This work is part of a broader NSF funded project; Teaching Inquiry-Oriented Mathematics: Establishing Support (TIMES) (Grant # 1431393), and TIMES...
Show moreInquiry-oriented teaching is a specific form of active learning gaining popularity in teaching communities. The goal of inquiry-oriented classes is to help students gain a conceptual understanding of the material. My research focus is to gauge students’ performance and conceptual understanding in inquiry-oriented linear algebra classes. This work is part of a broader NSF-funded project, Teaching Inquiry-Oriented Mathematics: Establishing Support (TIMES) (Grant # 1431393), which was designed to support instructors in shifting towards inquiry-oriented instruction. As part of the TIMES team, a broader, pragmatic goal of my dissertation is to measure the effectiveness of inquiry-oriented teaching on students’ learning of linear algebra concepts. Through my research, my contribution to the math education field is the development of a valid and reliable assessment instrument for instructors teaching linear algebra concepts in their classes. My dissertation is a mixed-methods study that follows a three-paper format, and in these papers I discuss (1) the development and validation of a reliable linear algebra assessment tool, (2) a comparison of the performance of students in inquiry-oriented classes with that of students in non-inquiry-oriented classes using the tool developed in the first paper, and (3) the development of research-based choices and distractors to convert the current open-ended assessment into a multiple-choice test by looking into students’ ways of reasoning and problem-solving approaches. The first paper is a quantitative study in which I establish the validity of the linear algebra assessment, and I also measure the reliability of the assessment. In the second paper, I use the linear algebra assessment to measure students’ conceptual and procedural understanding of linear algebra concepts and to compare the performance of students in inquiry-oriented classes with the students in non-inquiry-oriented classes. In the final paper, I focus on the analysis of patterns in student responses, particularly to open-ended response items, to inform the multiple choices and distractors for the open-ended questions on the linear algebra assessment. This analysis will help me to convert the existing linear algebra assessment into a multiple-choice format research tool that linear algebra researchers can use for various comparisons to gauge the effectiveness of interventions. Additionally, the multiple-choice format of the assessment will be easy to administer and grade, so instructors can also use the assessment to measure their students’ conceptual and procedural understanding of linear algebra concepts.
Show less - Date Issued
- 2018
- Identifier
- 2019_Spring_Haider_fsu_0071E_14971
- Format
- Thesis
- Title
- Belief Function Theory: Monte Carlo Methods and Application to Stock Markets.
- Creator
-
Salehy, Seyyed Nima, Ökten, Giray, Srivastava, Anuj, Cogan, Nicholas G., Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Belief function theory, also known as Dempster-Shafer theory or evidence theory, gives a general framework for quantifying, representing, and managing uncertainty, and it is widely used in several applications from artificial intelligence to accounting. The belief function theory provides tools to combine several sources' opinions (belief functions), among which, Dempster's rule of combination is the most commonly used. The main drawback of using Dempster's rule to combine belief functions is...
Show moreBelief function theory, also known as Dempster-Shafer theory or evidence theory, gives a general framework for quantifying, representing, and managing uncertainty, and it is widely used in several applications from artificial intelligence to accounting. Belief function theory provides tools to combine several sources' opinions (belief functions), among which Dempster's rule of combination is the most commonly used. The main drawback of using Dempster's rule to combine belief functions is its computational complexity, which limits the application of Dempster's rule to a small number of belief functions. We introduce a family of new Monte Carlo and quasi-Monte Carlo algorithms aimed at approximating Dempster's rule of combination. Then, we present numerical results to show the superiority of the new methods over the existing ones. The algorithms are then used to implement some stock investment strategies based on Dempster-Shafer theory. We will introduce a new strategy, and apply it to the U.S. stock market over a certain period of time. Numerical results suggest that strategies based on belief function theory outperform the S&P 500 index, with our new strategy giving the best returns.
Show less - Date Issued
- 2019
- Identifier
- 2019_Spring_SALEHY_fsu_0071E_15151
- Format
- Thesis
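A minimal sketch of the exact Dempster's rule of combination that the dissertation's Monte Carlo and quasi-Monte Carlo algorithms approximate; the two mass functions and the stock-direction frame of discernment are illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as dicts
    {frozenset: mass}. Returns the combined (normalized) mass function."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + a * b
        else:
            conflict += a * b                       # mass assigned to the empty intersection
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# Toy frame of discernment {up, down} with opinions from two sources
up, down, either = frozenset({"up"}), frozenset({"down"}), frozenset({"up", "down"})
m1 = {up: 0.6, either: 0.4}
m2 = {up: 0.3, down: 0.5, either: 0.2}
print(dempster_combine(m1, m2))
```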
- Title
- Random Walks over Point Processes and Their Application in Finance.
- Creator
-
Salehy, Seyyed Navid, Kercheval, Alec N., Ewald, Brian, Fahim, Arash, Ökten, Giray, Huffer, Fred W. (Fred William), Florida State University, College of Arts and Sciences,...
Show moreSalehy, Seyyed Navid, Kercheval, Alec N., Ewald, Brian, Fahim, Arash, Ökten, Giray, Huffer, Fred W. (Fred William), Florida State University, College of Arts and Sciences, Department of Mathematics
Show less - Abstract/Description
-
In continuous-time models in finance, it is common to assume that prices follow a geometric Brownian motion. More precisely, it is assumed that the price at time t ≥ 0 is given by Zt = Z₀exp(σBt + mt) where Z₀ is the initial price, B is standard Brownian motion, σ is the volatility, and m is the drift. We discuss how Z can be viewed as the limit of a sequence of discrete price models based on random walks. We note that in the usual random walks, jumps can only happen at deterministic times....
Show moreIn continuous-time models in finance, it is common to assume that prices follow a geometric Brownian motion. More precisely, it is assumed that the price at time t ≥ 0 is given by Zt = Z₀exp(σBt + mt) where Z₀ is the initial price, B is standard Brownian motion, σ is the volatility, and m is the drift. We discuss how Z can be viewed as the limit of a sequence of discrete price models based on random walks. We note that in the usual random walks, jumps can only happen at deterministic times. We first construct a natural simple model for price by considering a random walk in which jumps can happen at random times following a counting process N. We then develop a sequence of discrete price models using random walks over point processes. The limit process gives the new price model: Zt = Z₀exp(σBΛt + mΛt), where Λ is the compensator for the counting process N. We note that if N is a Poisson process with intensity 1, then this model coincides with the geometric Brownian motion model for the price. But this new model provides more flexibility as we can choose N to be many other well-known counting processes. This includes not only homogeneous and inhomogeneous Poisson processes which have deterministic compensators but also Hawkes processes which have stochastic compensators. We also discuss and prove many properties for the process BΛ. For example, we show that BΛ is a continuous square integrable martingale. Moreover, we discuss when BΛ has uncorrelated increments and when it has independent increments. Moreover, we investigate how the Black-Scholes pricing formula will change if the price of the risky asset follows this new model when N is an inhomogeneous Poisson process. We show that the usual Black-Scholes formula is obtained when the counting process N is a Poisson process with intensity 1.
Show less - Date Issued
- 2019
- Identifier
- 2019_Spring_Salehy_fsu_0071E_15152
- Format
- Thesis
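A minimal simulation sketch of the price model stated in the abstract above, Zt = Z₀exp(σBΛt + mΛt), for an inhomogeneous Poisson counting process whose compensator Λ is deterministic; the particular intensity function and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)
lam = lambda s: 1.0 + 0.5 * np.sin(2 * np.pi * s)        # intensity of an inhomogeneous Poisson process
# Deterministic compensator Lam(t) = integral of lam over [0, t], via the trapezoid rule
Lam = np.concatenate([[0.0], np.cumsum(0.5 * (lam(t[1:]) + lam(t[:-1])) * np.diff(t))])

sigma, m, Z0 = 0.2, 0.05, 100.0
dB = rng.normal(scale=np.sqrt(np.diff(Lam)))             # B_{Lam_t} has Gaussian increments with variance dLam
B_Lam = np.concatenate([[0.0], np.cumsum(dB)])
Z = Z0 * np.exp(sigma * B_Lam + m * Lam)                 # time-changed geometric Brownian price path
print("terminal price:", Z[-1], " Lam(T):", Lam[-1])
```

When the intensity is identically 1, Lam(t) = t and the path reduces to an ordinary geometric Brownian motion, consistent with the remark in the abstract.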
- Title
- Univariate and Multivariate Volatility Models for Portfolio Value at Risk.
- Creator
-
Xiao, Jingyi, Niu, Xufeng, Ökten, Giray, Wu, Wei, Huffer, Fred W. (Fred William), Florida State University, College of Arts and Sciences, Department of Statistics
- Abstract/Description
-
In modern day financial risk management, modeling and forecasting stock return movements via their conditional volatilities, particularly predicting the Value at Risk (VaR), became increasingly more important for a healthy economical environment. In this dissertation, we evaluate and compare two main families of models for the conditional volatilities - GARCH and Stochastic Volatility (SV) - in terms of their VaR prediction performance of 5 major US stock indices. We calculate GARCH-type...
Show moreIn modern-day financial risk management, modeling and forecasting stock return movements via their conditional volatilities, particularly predicting the Value at Risk (VaR), has become increasingly important for a healthy economic environment. In this dissertation, we evaluate and compare two main families of models for the conditional volatilities - GARCH and Stochastic Volatility (SV) - in terms of their VaR prediction performance for 5 major US stock indices. We calculate GARCH-type model parameters via Quasi Maximum Likelihood Estimation (QMLE) while for those of SV we employ MCMC with Ancillary Sufficient Interweaving Strategy. We use the forecast volatilities corresponding to each model to predict the VaR of the 5 indices. We test the predictive performances of the estimated models by a two-stage backtesting procedure and then compare them via the Lopez loss function. Results of this dissertation indicate that even though it is more computationally demanding than GARCH-type models, SV dominates them in forecasting VaR. Since financial volatilities move together across assets and markets, modeling the volatilities in a multivariate framework is more appropriate. However, existing studies in the literature do not present compelling evidence for a strong preference between univariate and multivariate models. In this dissertation we also address the problem of forecasting portfolio VaR via multivariate GARCH models versus univariate GARCH models. We construct 3 portfolios with stock returns of 3 major US stock indices, 6 major banks and 6 major technology companies, respectively. For each portfolio, we model the portfolio conditional covariances with GARCH, EGARCH and MGARCH-BEKK, MGARCH-DCC, and GO-GARCH models. For each estimated model, the forecast portfolio volatilities are further used to calculate (portfolio) VaR. The ability to capture the portfolio volatilities is evaluated by MAE and RMSE; the VaR prediction performance is tested through a two-stage backtesting procedure and compared in terms of the loss function. The results of our study indicate that even though MGARCH models are better in predicting the volatilities of some portfolios, GARCH models could perform as well as their multivariate (and computationally more demanding) counterparts.
Show less - Date Issued
- 2019
- Identifier
- 2019_Spring_Xiao_fsu_0071E_15172
- Format
- Thesis
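A minimal sketch of a GARCH(1,1) volatility filter, a one-day 99% VaR forecast, and a simple violation-count backtest; the parameters are fixed illustrative values rather than QMLE estimates, and none of the SV or multivariate machinery from the dissertation is attempted here:

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """One-step-ahead conditional variances from the GARCH(1,1) recursion
    sigma^2_t = omega + alpha * r_{t-1}^2 + beta * sigma^2_{t-1}."""
    sig2 = np.empty(len(r))
    sig2[0] = np.var(r)                              # crude initialization
    for t in range(1, len(r)):
        sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    return sig2

rng = np.random.default_rng(8)
# Simulate returns from a GARCH(1,1) so that the filter is applied to matching data
n, omega, alpha, beta = 2000, 1e-6, 0.08, 0.90
r, s2 = np.empty(n), omega / (1 - alpha - beta)
for t in range(n):
    r[t] = np.sqrt(s2) * rng.normal()
    s2 = omega + alpha * r[t] ** 2 + beta * s2

sig2 = garch11_filter(r, omega, alpha, beta)
z99 = 2.3263                                         # 99% standard normal quantile
var99 = z99 * np.sqrt(sig2)                          # one-day 99% VaR forecast
print("observed violation rate (target ~1%):", np.mean(-r > var99))
```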
- Title
- An Analytic Approach to Estimating the Required Surplus, Benchmark Profit, and Optimal Reinsurance Retention for an Insurance Enterprise.
- Creator
-
Boor, Joseph A. (Joseph Allen), Born, Patricia, Case, Bettye Anne, Tang, Qihe, Rogachev, Grigory, Okten, Giray, Aldrovandi, Ettore, Paris, Steve, Department of Mathematics,...
Show moreBoor, Joseph A. (Joseph Allen), Born, Patricia, Case, Bettye Anne, Tang, Qihe, Rogachev, Grigory, Okten, Giray, Aldrovandi, Ettore, Paris, Steve, Department of Mathematics, Florida State University
Show less - Abstract/Description
-
This paper presents an analysis of the capital needs, needed return on capital, and optimum reinsurance retention for insurance companies, all in the context where claims are either paid out or known with certainty within or soon after the policy period. Rather than focusing on how to estimate such values using Monte Carlo simulation, it focuses on closed form expressions and approximations for key quantities that are needed for such an analysis. Most of the analysis is also done using a...
Show moreThis paper presents an analysis of the capital needs, needed return on capital, and optimum reinsurance retention for insurance companies, all in the context where claims are either paid out or known with certainty within or soon after the policy period. Rather than focusing on how to estimate such values using Monte Carlo simulation, it focuses on closed form expressions and approximations for key quantities that are needed for such an analysis. Most of the analysis is also done using a distribution-free approach with respect to the loss severity distribution, so minimal or no assumptions surrounding the specific distribution are needed when analyzing the results. However, one key aspect, which is treated via an exhaustion of cases, involves the degree of parameter uncertainty and the number of separate lines of business involved. This is done for the no parameter uncertainty monoline compound Poisson distribution as well as situations involving (lognormal) severity parameter uncertainty, (gamma/negative binomial) count parameter uncertainty, the multiline compound Poisson case, and the compound Poisson scenario with parameter uncertainty, and especially parameter uncertainty correlated across the lines of business. It shows how the risk of extreme aggregate losses that is inherent in insurance operations may be understood (and, implicitly, managed) by performing various calculations using the loss severity distribution, and, where appropriate, key parameters driving the parameter uncertainty distributions. Formulas that involve tractable calculations are developed to estimate the capital and surplus needs of a company (using the VaR approach), and therefore its profit needs. As part of that process, the benchmark loading for profit is discussed, reflecting both the financial support needed for the amount of capital required to adequately secure a given one-year survival probability and the amount needed to recompense investors for diversifiable risk. An analysis of whether or not the loading for diversifiable risk is needed is performed. Approximations to the needed values are performed using the moments of the capped severity distribution and analytic formulas from the frequency distribution as inputs into method of moments normal and lognormal approximations to the percentiles of the aggregate loss distribution. An analysis of the optimum reinsurance retention/policy limit is performed as well, with capped loss distribution/frequency distribution equations resulting from the relationship that the marginal profit (with respect to the loss cap) should be equal to the marginal expense and profit dollar loading with respect to the loss cap. Analytical expressions are developed for the optimum reinsurance retention. Approximations to the optimum retention based on the normal distribution were developed and their error analyzed in great detail. The results indicate that in the vast majority of practical scenarios, the normal distribution approximation to the optimum retention is acceptable.
Also included in the paper is a brief comparison of the VaR (survival probability) and expected policyholder deficit (EPD) and TVaR approaches to surplus adequacy (which concludes that the VaR approach is superior for most property/casualty companies); a mathematical analysis of the propriety of insuring the upper limits of the loss distribution, which concludes that, even if unlimited funds were available to secure losses in capital and reinsurance, it would not be in the insured's best interest to do so. Further inclusions to date include an illustrative derivation of the generalized collective risk equation and a method for interpolating "along" a mathematical curve rather than directly using the values on the curve. As a prelude to a portion of the analysis, a theorem was proven indicating that in most practical situations, the (n-1)st order derivatives of a suitable probability mass function at values L, when divided by the product of L and the nth order derivative, generate a quotient with a limit at infinity that is less than 1/n.
Show less - Date Issued
- 2012
- Identifier
- FSU_migr_etd-4726
- Format
- Thesis
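A minimal sketch of the method-of-moments normal approximation to a percentile of a compound Poisson aggregate loss, of the kind the abstract above uses for capital and surplus estimates; the Poisson and lognormal parameters are purely illustrative:

```python
import math
from statistics import NormalDist

# Hypothetical parameters: claim count ~ Poisson(lam), severities lognormal(mu, s)
lam, mu, s = 500.0, 9.0, 1.2
EX  = math.exp(mu + 0.5 * s * s)                 # E[X] for a lognormal severity
EX2 = math.exp(2 * mu + 2 * s * s)               # E[X^2]

mean_S = lam * EX                                # compound Poisson aggregate mean
var_S  = lam * EX2                               # compound Poisson aggregate variance
p = 0.995                                        # target one-year survival probability
z = NormalDist().inv_cdf(p)

var_p = mean_S + z * math.sqrt(var_S)            # method-of-moments normal approximation to the percentile
print("expected aggregate loss:", round(mean_S))
print("approx. %.1f%% VaR     :" % (100 * p), round(var_p))
print("implied capital above the mean:", round(var_p - mean_S))
```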
- Title
- Parameter Estimation for a Stochastic Volatility Model with Coupled Additive and Multiplicative Noise.
- Creator
-
Amusan, Ibukun O. O., Ewald, Brian, Okten, Giray, Fuelberg, Henry, Kercheval, Alec, Wang, Xiaoming, Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation we look at a stochastic volatility model with coupled additive and multiplicative noise. We begin by explaining the suitability of the model for the logarithm of volatility by looking at the skewness and kurtosis. We then proceed to estimate the five parameters of the model. The first two parameters are found using the method of least squares on successive observation pairs. Then the remaining three parameters are estimated by further using the maximum likelihood method...
Show moreIn this dissertation we look at a stochastic volatility model with coupled additive and multiplicative noise. We begin by explaining the suitability of the model for the logarithm of volatility by looking at the skewness and kurtosis. We then proceed to estimate the five parameters of the model. The first two parameters are found using the method of least squares on successive observation pairs. Then the remaining three parameters are estimated by further using the maximum likelihood method on the least squares residuals. This leads to a minimization problem with a function of three variables. Using the first-order conditions, we get a system of three equations in three unknowns. After doing a change of variables and making a substitution, we find that the function to be minimized can be expressed as a function of two variables instead of the original three variables. The parameters for some stocks are then estimated for the coupled additive and multiplicative stochastic model and also for the Ornstein-Uhlenbeck model. These estimated parameters are used to price European call options, and the prices for the coupled additive and multiplicative model, Ornstein-Uhlenbeck model and Black-Scholes model are compared.
Show less - Date Issued
- 2013
- Identifier
- FSU_migr_etd-7692
- Format
- Thesis
- Title
- Monte Carlo and Quasi-Monte Carlo Methods in Financial Derivative Pricing.
- Creator
-
Göncü, Ahmet, Okten, Giray, Huffer, Fred, Ewald, Brian, Kercheval, Alec N., Mascagni, Michael, Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation, we discuss the generation of low discrepancy sequences, randomization of these sequences, and the transformation methods to generate normally distributed random variables. Two well known methods for generating normally distributed numbers are considered, namely; Box-Muller and inverse transformation methods. Some researchers and financial engineers have claimed that it is incorrect to use the Box-Muller method with low-discrepancy sequences, and instead, the inverse...
Show moreIn this dissertation, we discuss the generation of low discrepancy sequences, randomization of these sequences, and the transformation methods to generate normally distributed random variables. Two well known methods for generating normally distributed numbers are considered, namely; Box-Muller and inverse transformation methods. Some researchers and financial engineers have claimed that it is incorrect to use the Box-Muller method with low-discrepancy sequences, and instead, the inverse transformation method should be used. We investigate the sensitivity of various computational finance problems with respect to different normal transformation methods. Box-Muller transformation method is theoretically justified in the context of the quasi-Monte Carlo by showing that the same error bounds apply for Box-Muller transformed point sets. Furthermore, new error bounds are derived for financial derivative pricing problems and for an isotropic integration problem where the integrand is a function of the Euclidean norm. Theoretical results are derived for financial derivative pricing problems; such as European call, Asian geometric, and Binary options with a convergence rate of 1/N. A stratified Box-Muller algorithm is introduced as an alternative to Box-Muller and inverse transformation methods, and new numerical evidence is presented in favor of this method. Finally, a statistical test for pseudo-random numbers is adapted for measuring the uniformity of transformed low discrepancy sequences.
Show less - Date Issued
- 2009
- Identifier
- FSU_migr_etd-4144
- Format
- Thesis
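A minimal sketch of the Box-Muller transformation applied to a two-dimensional low-discrepancy point set, the combination examined in the dissertation; the Halton-type construction and sample size are illustrative:

```python
import numpy as np

def van_der_corput(n, base):
    """Radical-inverse (van der Corput) value of the integer n in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

# 2-D low-discrepancy points (bases 2 and 3); start at n = 1 so the first coordinate is never 0
N = 4096
u = np.array([[van_der_corput(i, 2), van_der_corput(i, 3)] for i in range(1, N + 1)])

# Box-Muller transform of each 2-D point into a pair of standard normals
r = np.sqrt(-2.0 * np.log(u[:, 0]))
z1, z2 = r * np.cos(2 * np.pi * u[:, 1]), r * np.sin(2 * np.pi * u[:, 1])

z = np.concatenate([z1, z2])
print("sample mean, variance:", z.mean(), z.var())   # should be near 0 and 1
```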
- Title
- Probabilistic Methods in Estimation and Prediction of Financial Models.
- Creator
-
Nguyen, Nguyet Thi, Okten, Giray, Hawkes, Lois, Case, Bettye Anne, Kim, Kyounghee, Nichols, Warren, Zhang, Jinfeng, Department of Mathematics, Florida State University
- Abstract/Description
-
Many computational finance problems can be classified into two categories: estimation and prediction. In estimation, one starts with a probability model and expresses the quantity of interest as an expected value or a probability of an event. These quantities are then computed either exactly, or numerically using methods such as numerical PDEs or Monte Carlo simulation. Many problems in derivative pricing and risk management are in this category. In prediction, the main objective is to use...
Show moreMany computational finance problems can be classified into two categories: estimation and prediction. In estimation, one starts with a probability model and expresses the quantity of interest as an expected value or a probability of an event. These quantities are then computed either exactly, or numerically using methods such as numerical PDEs or Monte Carlo simulation. Many problems in derivative pricing and risk management are in this category. In prediction, the main objective is to use methods such as machine learning, neural networks, or Markov chain models, to build a model, train it using historical data, and predict future behavior of some financial indicators. In this dissertation, we consider an estimation method known as the (randomized) quasi-Monte Carlo method. We introduce an acceptance-rejection algorithm for the quasi-Monte Carlo method, which substantially increases the scope of applications where the method can be used efficiently. We prove a convergence result, and discuss examples from applied statistics and derivative pricing. In the second part of the dissertation, we present a novel prediction algorithm based on hidden Markov models. We use the algorithm to predict economic regimes, and stock prices, based on historical data.
Show less - Date Issued
- 2014
- Identifier
- FSU_migr_etd-9059
- Format
- Thesis
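A minimal sketch of plain acceptance-rejection sampling with pseudo-random uniforms; the dissertation's contribution is a quasi-Monte Carlo variant of this scheme, which is not reproduced here, and the Beta(2,4) target is illustrative:

```python
import numpy as np

def accept_reject(target_pdf, M, n, rng):
    """Plain acceptance-rejection with a Uniform(0,1) proposal and envelope constant M >= max pdf."""
    out = []
    while len(out) < n:
        u1, u2 = rng.random(), rng.random()
        if u2 * M <= target_pdf(u1):       # accept u1 with probability pdf(u1) / M
            out.append(u1)
    return np.array(out)

beta24 = lambda x: 20.0 * x * (1.0 - x) ** 3     # Beta(2,4) density on [0,1]
rng = np.random.default_rng(9)
samples = accept_reject(beta24, M=2.11, n=10_000, rng=rng)
print("sample mean (theory: 1/3):", samples.mean())
```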
- Title
- Radically Elementary Stochastic Summation with Applications to Finance.
- Creator
-
Zhu, Ming, Nichols, Warren D., Kim, Kyounghee, Huffer, Fred W., Ewald, Brian, Kercheval, Alec N., Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation develops a nonstandard approach to probability, stochastic calculus and financial modeling, within the framework of the Radically Elementary Probability Theory of Edward Nelson. The fundamental objects of investigation are stochastic sums with respect to a martingale, defined on a finite probability space and indexed by a finite set. We study the external (nonstandard) properties of these sums, such as almost sure continuity of trajectories, the Lp property, and the...
Show moreThis dissertation develops a nonstandard approach to probability, stochastic calculus and financial modeling, within the framework of the Radically Elementary Probability Theory of Edward Nelson. The fundamental objects of investigation are stochastic sums with respect to a martingale, defined on a finite probability space and indexed by a finite set. We study the external (nonstandard) properties of these sums, such as almost sure continuity of trajectories, the Lp property, and the Lindeberg condition; we also study external properties of related processes, such as quadratic variation and proper time. Using the tools so developed, we obtain an Itô-Doeblin formula for change of variable and a Girsanov theorem for change of measure in a quite general setting. We also obtain results that will aid us in the comparison of certain of the processes we investigate to conventional ones. We illustrate the theory by using general techniques to build stock models driven by Wiener walks, Poisson walks and their combinations, and show in each case that when our parameter processes are constant we recover the prices for European calls of the corresponding models that use conventional stochastic calculus. Finally, we exhibit a model driven by a nonstandard Wiener process that produces different prices for European calls than are given by the conventional Black-Scholes model.
Show less - Date Issued
- 2014
- Identifier
- FSU_migr_etd-9125
- Format
- Thesis
- Title
- Probabilistic Uncertainty Analysis and Its Applications in Option Models.
- Creator
-
Namihira, Motoi J., Kopriva, David A., Srivastava, Anuj, Ewald, Brian, Hussaini, M. Yousuff, Nichols, Warren, Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
In this work we quantify the effect of uncertainty in volatility in the prices and Deltas of an American and European put using probabilistic uncertainty analysis. We review the current methods of uncertainty analysis including worst case or scenario analysis, Monte Carlo, and provide an in depth review of Polynomial Chaos in both one and multiple dimensions. We develop a numerically stable method of generating orthogonal polynomials that is used in the practical construction of the...
Show moreIn this work we quantify the effect of uncertainty in volatility in the prices and Deltas of an American and European put using probabilistic uncertainty analysis. We review the current methods of uncertainty analysis including worst case or scenario analysis, Monte Carlo, and provide an in depth review of Polynomial Chaos in both one and multiple dimensions. We develop a numerically stable method of generating orthogonal polynomials that is used in the practical construction of the Polynomial Chaos basis functions. We also develop a semi analytic density transform method that is 200 times faster and 1000 times more accurate than the Monte Carlo based kernel density method. Finally, we analyze the European and American put option models assuming a distribution for the volatility that is historically observed. We find that the sensitivity to uncertainty in volatility is greatest for the price of ATM puts, and tapers as one moves away from the strike. The Delta, however, exhibits the least sensitivity when ATM and is most sensitive when moderately ITM. The price uncertainty for ITM American puts is less than the price uncertainty of equivalent European puts. For OTM options, the price uncertainty is similar between American and European puts. The uncertainty in the Delta of ITM American puts is greater than the uncertainty of equivalent European puts. For OTM puts, the uncertainty in Delta is similar between American and European puts. For the American put, uncertainty in volatility introduces uncertainty in the location of the optimal exercise boundary, thereby making optimal exercise decisions more difficult.
Show less - Date Issued
- 2013
- Identifier
- FSU_migr_etd-7525
- Format
- Thesis
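A minimal sketch of propagating volatility uncertainty through the Black-Scholes put price by plain Monte Carlo; this stands in for, and is much cruder than, the polynomial chaos machinery of the dissertation, and the assumed volatility distribution and option parameters are hypothetical:

```python
import math
import numpy as np
from statistics import NormalDist

Phi = NormalDist().cdf

def bs_put(S, K, r, T, sigma):
    """Black-Scholes European put price."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return K * math.exp(-r * T) * Phi(-d2) - S * Phi(-d1)

rng = np.random.default_rng(10)
# Hypothetical volatility distribution standing in for a historically estimated one
sigmas = rng.lognormal(mean=math.log(0.2), sigma=0.25, size=20_000)

S, K, r, T = 100.0, 100.0, 0.02, 0.5             # an at-the-money put
prices = np.array([bs_put(S, K, r, T, s) for s in sigmas])
print("price mean:", prices.mean(), " price std (uncertainty):", prices.std(ddof=1))
```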
- Title
- Calibration of Local Volatility Models and Proper Orthogonal Decomposition Reduced Order Modeling for Stochastic Volatility Models.
- Creator
-
Geng, Jian, Navon, Ionel Michael, Case, Bettye Anne, Contreras, Rob, Okten, Giray, Kercheval, Alec N., Ewald, Brian, Department of Mathematics, Florida State University
- Abstract/Description
-
There are two themes in this thesis: local volatility models and their calibration, and Proper Orthogonal Decomposition (POD) reduced order modeling with application in stochastic volatility models, which has a potential in the calibration of stochastic volatility models. In the first part of this thesis (chapters II-III), the local volatility models are introduced first and then calibrated for European options across all strikes and maturities of the same underlying. There is no...
Show moreThere are two themes in this thesis: local volatility models and their calibration, and Proper Orthogonal Decomposition (POD) reduced order modeling with application in stochastic volatility models, which has potential in the calibration of stochastic volatility models. In the first part of this thesis (chapters II-III), the local volatility models are introduced first and then calibrated for European options across all strikes and maturities of the same underlying. There is no interpolation or extrapolation of either the option prices or the volatility surface. We do not make any assumption regarding the shape of the volatility surface except to assume that it is smooth. Due to the smoothness assumption, we apply a second order Tikhonov regularization. We choose the Tikhonov regularization parameter as one of the singular values of the Jacobian matrix of the Dupire model. Finally we perform extensive numerical tests to assess and verify the aforementioned techniques for both local volatility models with known analytical solutions of European option prices and real market option data. In the second part of this thesis (chapters IV-V), stochastic volatility models and POD reduced order modeling are introduced in turn. Then POD reduced order modeling is applied to the Heston stochastic volatility model for the pricing of European options. Finally, chapter VI summarizes the thesis and points out future research areas.
Show less - Date Issued
- 2013
- Identifier
- FSU_migr_etd-7388
- Format
- Thesis
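A minimal sketch of second-order Tikhonov-regularized least squares on a toy ill-posed problem; the Jacobian here is a synthetic stand-in for the Dupire-model Jacobian, and the regularization parameter is a fixed illustrative value rather than one chosen from the singular values as in the dissertation:

```python
import numpy as np

def tikhonov_solve(J, b, alpha):
    """Second-order Tikhonov-regularized least squares:
    minimize ||J x - b||^2 + alpha * ||L x||^2, with L the discrete second-difference operator."""
    n = J.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]          # smoothness (second-derivative) penalty
    return np.linalg.solve(J.T @ J + alpha * L.T @ L, J.T @ b)

rng = np.random.default_rng(11)
n = 50
J = rng.normal(size=(60, n)) @ np.diag(1.0 / np.arange(1, n + 1))   # rapidly decaying singular values
x_true = np.sin(np.linspace(0, np.pi, n))                           # smooth "volatility-like" target
b = J @ x_true + 1e-3 * rng.normal(size=60)
x_hat = tikhonov_solve(J, b, alpha=1e-4)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```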
- Title
- Asset Pricing Equilibria for Heterogeneous, Limited-Information Agents.
- Creator
-
Jones, Dawna Candice, Kercheval, Alec N., Beaumont, Paul M, Van Winkle, David H., Nichols, Warren, Ökten, Giray, Florida State University, College of Arts and Sciences,...
Show moreJones, Dawna Candice, Kercheval, Alec N., Beaumont, Paul M, Van Winkle, David H., Nichols, Warren, Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
Show less - Abstract/Description
-
The standard general equilibrium asset pricing models typically make two simplifying assumptions: homogeneous agents and the existence of a rational expectations equilibrium. This context sometimes yields outcomes that are inconsistent with the empirical findings. We hypothesize that allowing agent heterogeneity could assist in replicating the empirical results. However, the inclusion of heterogeneity in models where agents are fully rational proves impossible to solve without severe...
Show moreThe standard general equilibrium asset pricing models typically make two simplifying assumptions: homogeneous agents and the existence of a rational expectations equilibrium. This context sometimes yields outcomes that are inconsistent with the empirical findings. We hypothesize that allowing agent heterogeneity could assist in replicating the empirical results. However, the inclusion of heterogeneity in models where agents are fully rational proves impossible to solve without severe simplifying assumptions. The reason for this difficulty is that heterogeneous agent models generate an endogenously complicated distribution of wealth across the agents. The state space for each agent's optimization problem includes the complex dynamics of the wealth distribution. There is no general way to characterize the interaction between the distribution of wealth and the macroeconomic aggregates. To address this issue, we implement an agent-based model where the agents have bounded rationality. In our model, we have a complete markets economy with two agents and two assets. The agents are heterogeneous and utility maximizing with constant coefficient of relative risk aversion [CRRA] preferences. How the agents address the stochastic behaviour of the evolution of the wealth distribution is central to our task since aggregate prices depend on this behaviour. An important component of this dissertation involves dealing with the computational difficulty of dynamic heterogeneous-agent models. That is, in order to predict prices, agents need a way to keep track of the evolution of the wealth distribution. We do this by allowing each agent to assume that a price-equivalent representative agent exists and that the representative agent has a constant coefficient of relative risk aversion. In so doing, the agents are able to formulate predictive pricing and demand functions which allow them to predict aggregate prices and make consumption and investment decisions each period. However, the agents' predictions are only approximately correct. Therefore, we introduce a learning mechanism to maintain the required level of accuracy in the agents' price predictions. From this setup, we find that the model, with learning, will converge over time to an approximate expectations equilibrium, provided that the initial conditions are close enough to the rational expectations equilibrium prices. Two main contributions in our work are: 1) to formulate a new concept of approximate equilibria, and 2) to show how equilibria can be approximated numerically, despite the fact that the true state space at any point in time is mathematically complex. These contributions offer the possibility of characterizing a new class of asset pricing models where agents are heterogeneous and only just slightly limited in their rationality. That is, the partially informed agents in our model are able to forecast and utility-maximize only just as well as economists who face problems of estimating aggregate variables. By using an exogenously assigned adaptive learning rule, we analyse this implementation in a Lucas-type heterogeneous agent model. We focus on the sensitivity of the risk parameter and the convergence of the model to an approximate expectations equilibrium. Also, we study the extent to which adaptive learning is able to explain the empirical findings in an asset pricing model with heterogeneous agents.
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9624
- Format
- Thesis
- Title
- A Game-Theoretic Analysis of Competition over Indivisible Resources.
- Creator
-
Karabiyik, Tugba, Mesterton-Gibbons, Mike, Ryvkin, Dmitry, Cogan, Nicholas G., Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
In this dissertation, we build several game-theoretic models to explore animal contest behavior. Classical game theory predicts that respect for ownership, or "Bourgeois" behavior, can arise as an arbitrary convention to avoid costly fights. The same theory also predicts that disrespect for ownership, or "anti-Bourgeois" behavior, can evolve under the same conditions. However, Bourgeois is very common in nature, while anti-Bourgeois is very rare. In order to explain the rarity of anti-Bourgeois behavior, we create a single-round Hawk-Dove model with four pure strategies: Hawk or H, Bourgeois or B, anti-Bourgeois or X, and Dove or D. We show that if intruders sometimes believe themselves to be owners, then the resulting confusion can broaden the conditions under which Bourgeois behavior is evolutionarily stable in the single-round Hawk-Dove game. We also develop a multi-period Hawk-Dove model that includes the effect of confusion over ownership. We determine the effect of ownership uncertainty on Bourgeois vs. anti-Bourgeois strategies, and we show how this effect can allow a fighting population to evolve to a non-fighting population under increasing costs of fighting. Another possible explanation for the rarity of anti-Bourgeois behavior in nature is that two X-strategists would exchange roles repeatedly over many rounds in a costly "infinite regress." Here we further analyze an existing infinite-regress model to allow for polymorphic ESSs, and thus explore conditions that favor partial respect for ownership. We identify a pathway through which respect for ownership can evolve from total disrespect under increasing costs of fighting.
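For orientation (a standard textbook sketch, not the four-strategy model of the dissertation), the classical two-strategy Hawk-Dove game over a resource of value $V$ with fighting cost $C$ has the payoff matrix, to the row player,

$\begin{array}{c|cc} & \text{Hawk} & \text{Dove} \\ \hline \text{Hawk} & (V-C)/2 & V \\ \text{Dove} & 0 & V/2 \end{array}$

Hawk is an ESS when $V > C$; when $V < C$, the ESS is mixed, playing Hawk with probability $V/C$, and ownership-based conventions such as Bourgeois ("play Hawk if owner, Dove if intruder") can settle contests without costly escalated fights.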
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9368
- Format
- Thesis
- Title
- The Studies of Joint Structure Sparsity Pursuit in the Applications of Hierarchical Variable Selection and Fused Lasso.
- Creator
-
Jiang, He, She, Yiyuan, Ökten, Giray, Barbu, Adrian G. (Adrian Gheorghe), Mai, Qing, Florida State University, College of Arts and Sciences, Department of Statistics
- Abstract/Description
-
In this dissertation, we study joint sparsity pursuit and its applications to variable selection in high dimensional data. The first part of the dissertation focuses on hierarchical variable selection and its application in a two-way interaction model. In high-dimensional models that involve interactions, statisticians usually favor variable selection obeying certain logical hierarchical constraints. This part focuses on structural hierarchy, which means that the existence of an interaction term implies that at least one or both associated main effects must be present. Lately this problem has attracted a lot of attention from statisticians, but existing computational algorithms converge slowly and cannot meet the challenge of big data computation. More importantly, theoretical studies of hierarchical variable selection are extremely scarce, largely due to the difficulty that multiple sparsity-promoting penalties are enforced on the same subject. This work investigates a new type of estimator based on group multi-regularization to capture various types of structural parsimony simultaneously. We present non-asymptotic results based on combined statistical and computational analysis, and reveal the minimax optimal rate. A general-purpose algorithm is developed with a theoretical guarantee of strict iterate convergence and global optimality. Simulations and real data experiments demonstrate the efficiency and efficacy of the proposed approach. The second topic studies the Fused Lasso, which pursues joint sparsity of both the variables and their consecutive differences. The overlapping penalties of the Fused Lasso pose critical challenges to computational studies and theoretical analysis. Existing theoretical analyses of the Fused Lasso, however, are mostly performed under an orthogonal design, and there is hardly any nonasymptotic study in the literature. In this work, we study the Fused Lasso and its application in a classification problem to achieve exact clustering. Computationally, we derive a simple-to-implement algorithm which scales well to big data computation; in theory, we propose a brand new technique and perform nonasymptotic analysis. To evaluate the prediction performance theoretically, we derive an oracle inequality for the Fused Lasso estimator to show the $\ell_2$ prediction error rate. The minimax optimal rate is also revealed. For estimation accuracy, an $\ell_q$ $(1 \leq q \leq \infty)$ norm error bound for the Fused Lasso estimator is derived. The simulation studies show that exact clustering can be achieved using a post-thresholding technique.
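As a point of reference (the standard formulation in the usual notation, not anything specific to this dissertation), the Fused Lasso solves

$\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda_1 \sum_{j=1}^{p} |\beta_j| + \lambda_2 \sum_{j=2}^{p} |\beta_j - \beta_{j-1}|,$

where the $\lambda_1$ penalty promotes sparsity in the coefficients themselves and the $\lambda_2$ penalty promotes sparsity in their consecutive differences, producing piecewise-constant coefficient profiles; it is this overlap of the two penalties that complicates both computation and theory.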
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9362
- Format
- Thesis
- Title
- Stochastic Modeling of Financial Derivatives.
- Creator
-
Huang, Wanwan, Okten, Giray, Ewald, Brian, Huffer, Fred, Kercheval, Alec, Tang, Qihe, Kim, Kyounghee, Department of Mathematics, Florida State University
- Abstract/Description
-
The Coupled Additive Multiplicative Noises (CAM) model is introduced as a stochastic volatility process to extend the classical Black-Scholes model. The fast Fourier transform (FFT) method is used to compute the values of the probability density function of the underlying assets under the CAM model, as well as the price of European call options. We discuss four different discretization schemes for the CAM model: the Euler scheme, the simplified weak Euler scheme, the order 2 weak Taylor scheme and the stochastic Adams-Bashforth scheme. A martingale control variate method for pricing European call options is developed, and its advantages in terms of variance reduction are investigated numerically. We also develop Monte Carlo methods for estimating the sensitivities of the European call options under the CAM model.
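For concreteness (a generic sketch under standard notation, not the CAM dynamics themselves), the Euler scheme for a scalar SDE $dX_t = a(X_t)\,dt + b(X_t)\,dW_t$ advances the solution on a time grid of step $\Delta t$ by

$X_{n+1} = X_n + a(X_n)\,\Delta t + b(X_n)\,\Delta W_n, \qquad \Delta W_n \sim N(0, \Delta t),$

and the simplified weak Euler, order 2 weak Taylor, and stochastic Adams-Bashforth schemes mentioned above modify this basic recursion with simplified increments, higher-order terms, or multi-step structure.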
- Date Issued
- 2013
- Identifier
- FSU_migr_etd-7429
- Format
- Thesis
- Title
- Estimating Sensitivities of Exotic Options Using Monte Carlo Methods.
- Creator
-
Yuan, Wei, Ökten, Giray, Kim, Kyounghee, Huffer, Fred W. (Fred William), Kercheval, Alec N., Nichols, Warren, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
In this dissertation, methods of estimating the sensitivities of complex exotic options, including options written on multiple assets and options with discontinuous payoffs, are investigated. The calculation of the sensitivities (Greeks) is based on the finite difference method, the pathwise method, the likelihood ratio method and the kernel method, via Monte Carlo or quasi-Monte Carlo simulation. Direct Monte Carlo estimators for various sensitivities of weather derivatives and mountain range options are given. The numerical results show that the pathwise method outperforms the other methods when the payoff function is Lipschitz continuous. The kernel method and the central finite difference method are competitive when the payoff function is discontinuous.
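The contrast between pathwise and finite-difference Greeks can be made concrete with a small, hedged sketch: an illustrative Black-Scholes European call under assumed parameter values, not the exotic payoffs studied in the dissertation.

    import numpy as np

    def terminal_price(s0, r, sigma, T, z):
        # Black-Scholes terminal price driven by a standard normal draw z.
        return s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)

    def call_delta_estimates(s0=100.0, k=100.0, r=0.05, sigma=0.2, T=1.0,
                             n_paths=200_000, bump=0.5, seed=0):
        rng = np.random.default_rng(seed)
        z = rng.standard_normal(n_paths)   # common random numbers for both estimators
        disc = np.exp(-r * T)

        # Pathwise estimator: differentiate the discounted payoff path by path;
        # valid here because the call payoff is Lipschitz in the terminal price.
        st = terminal_price(s0, r, sigma, T, z)
        pathwise = disc * np.mean((st > k) * st / s0)

        # Central finite-difference estimator using the same random numbers.
        up = np.maximum(terminal_price(s0 + bump, r, sigma, T, z) - k, 0.0)
        down = np.maximum(terminal_price(s0 - bump, r, sigma, T, z) - k, 0.0)
        finite_diff = disc * np.mean(up - down) / (2.0 * bump)

        return pathwise, finite_diff

    if __name__ == "__main__":
        print(call_delta_estimates())

Both estimates should agree closely with the analytic Black-Scholes delta; for discontinuous payoffs (for example, digital options) the pathwise derivative breaks down, which is where the likelihood ratio and kernel methods become relevant.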
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9528
- Format
- Thesis
- Title
- Rank-Constrained Optimization: A Riemannian Manifold Approach.
- Creator
-
Zhou, Guifang, Gallivan, Kyle A., Van Dooren, Paul, Barbu, Adrian G. (Adrian Gheorghe), Ökten, Giray, Wang, Xiaoming, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
This dissertation considers optimization problems on a Riemannian matrix manifold ℳ ⊆ ℝ^(m×n) with an additional rank inequality constraint. A novel technique for building new rank-related geometric objects from known Riemannian objects is developed and used as the basis for a new approach to adjusting matrix rank during the optimization process. The new algorithms combine the dynamic update of matrix rank with state-of-the-art rapidly converging and well-understood Riemannian optimization algorithms. A rigorous convergence analysis for the new methods addresses the tradeoffs involved in achieving computationally efficient and effective optimization. Conditions that ensure the ranks of all iterates become fixed eventually are given. This guarantees the desirable consequence that the new dynamic-rank algorithms maintain the convergence behavior of the fixed-rank Riemannian optimization algorithm used as the main computational primitive. The weighted low-rank matrix approximation problem and the low-rank approximation approach to the problem of quantifying the similarity of two graphs are used to empirically evaluate and compare the performance of the new algorithms with that of existing methods. The experimental results demonstrate the significant advantages of the new algorithms and, in particular, the importance of the new rank-related geometric objects in efficiently determining a suitable rank for the minimizer.
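Schematically (a generic statement in standard notation, not the dissertation's precise formulation), the problem class is

$\min_{X \in \mathcal{M}} f(X) \quad \text{subject to} \quad \operatorname{rank}(X) \le k,$

and the difficulty is that, while the set of matrices of rank exactly $r$ is a smooth manifold on which standard Riemannian algorithms apply, the set satisfying the rank inequality is not, which is what motivates dynamically updating the working rank during the optimization.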
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9533
- Format
- Thesis
- Title
- Variance Reduction Techniques in Pricing Financial Derivatives.
- Creator
-
Salta, Emmanuel R., Okten, Giray, Srinivasan, Ashok, Case, Bettye Anne, Ewald, Brian, Nolder, Craig, Quine, John R., Department of Mathematics, Florida State University
- Abstract/Description
-
In this dissertation, we evaluate existing Monte Carlo estimators and develop new Monte Carlo estimators for pricing financial options with the goal of improving precision. In Chapter 2, we discuss the conditional expectation Monte Carlo estimator for pricing barrier options, and show that the formulas for this estimator that are used in the literature are incorrect. We provide a correct version of the formula. In Chapter 3, we focus on importance sampling methods for estimating the price of barrier options. We show how a simulated annealing procedure can be used to estimate the parameters required in the importance sampling method. We end this chapter by evaluating the performance of the combined importance sampling and conditional expectation method. In Chapter 4, we analyze the estimators introduced by Ross and Shanthikumar for pricing barrier options and present a numerical example to test their performance.
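The two variance reduction ideas named here rest on standard identities (stated generically, not as the dissertation's specific formulas): conditional expectation Monte Carlo replaces a payoff $f(X)$ by $E[f(X)\mid Y]$ for some auxiliary variable $Y$, and since $\operatorname{Var}(E[f(X)\mid Y]) \le \operatorname{Var}(f(X))$ the resulting estimator never has larger variance; importance sampling rewrites the price as $E_p[f(X)] = E_q\!\left[f(X)\,\frac{p(X)}{q(X)}\right]$ for a sampling density $q$ whose parameters are chosen, here via simulated annealing, to concentrate paths where the barrier-option payoff actually contributes.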
- Date Issued
- 2008
- Identifier
- FSU_migr_etd-2102
- Format
- Thesis
- Title
- Numerical Optimization Methods on Riemannian Manifolds.
- Creator
-
Qi, Chunhong, Gallivan, Kyle A., Absil, Pierre-Antoine, Duke, Dennis, Erlebacher, Gordon, Hussaini, M. Yousuff, Okten, Giray, Department of Mathematics, Florida State University
- Abstract/Description
-
This dissertation considers the generalization of two well-known unconstrained optimization algorithms on ℝⁿ to solve optimization problems whose constraint sets can be characterized as Riemannian manifolds. Efficiency and effectiveness, compared to more traditional approaches to Riemannian optimization, are obtained by applying the concepts of retraction and vector transport. We present a theory of building vector transports on submanifolds of ℝⁿ and use the theory to assess convergence conditions and computational efficiency of the Riemannian optimization algorithms. We generalize the BFGS method, a highly effective quasi-Newton method for unconstrained optimization on ℝⁿ. The Riemannian version, RBFGS, is developed and its convergence and efficiency analyzed. Conditions that ensure superlinear convergence are given. We also consider the Euclidean Adaptive Regularization using Cubics method (ARC) for unconstrained optimization on ℝⁿ. ARC is similar to trust region methods in that it uses a local model to determine the modification to the current estimate of the optimal solution. Rather than a quadratic local model with a trust-region constraint, ARC uses a parameterized local cubic model. We present a generalization, the Riemannian Adaptive Regularization using Cubics method (RARC), along with global and local convergence theory. The efficiency and effectiveness of the RARC and RBFGS methods are investigated and their performance compared to the predictions made by the convergence theory via a series of optimization problems on various manifolds.
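Both generalizations share the same basic iteration shape (written here in standard Riemannian-optimization notation as a hedged sketch, not a quotation of the dissertation): given an iterate $x_k$ on the manifold, a tangent search direction $\eta_k \in T_{x_k}\mathcal{M}$, and a step size $\alpha_k$, the update is $x_{k+1} = R_{x_k}(\alpha_k \eta_k)$, where $R$ is a retraction mapping tangent vectors back onto the manifold; a vector transport then moves tangent information, such as the quasi-Newton secant pairs in RBFGS, from $T_{x_k}\mathcal{M}$ to $T_{x_{k+1}}\mathcal{M}$ so the next local model can be built.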
- Date Issued
- 2011
- Identifier
- FSU_migr_etd-2263
- Format
- Thesis
- Title
- Exponential Convergence Fourier Method and Its Application to Option Pricing with Lévy Processes.
- Creator
-
Gu, Fangxi, Nolder, Craig, Huffer, Fred W. (Fred William), Kercheval, Alec N., Nichols, Warren D., Ökten, Giray, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
Option pricing by Fourier methods has been popular for the past decade, with many applications to Lévy processes, especially for European options. This thesis focuses on the exponential convergence Fourier method and its application to discretely monitored options and Bermudan options. An alternative payoff truncating method is derived and compared with the benchmark Hilbert transform method. A general error control framework is derived to keep the Fourier method free of overflow problems. Numerical results verify that the alternative payoff truncating sinc method performs better than the benchmark Hilbert transform method under the error control framework.
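The Fourier approach referred to here rests on a standard identity (sketched generically, not as this thesis states it): if $\phi(u) = E[e^{iuX_T}]$ is the characteristic function of the log-price under a Lévy model, then under integrability conditions its density is recovered by the inversion formula

$f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iux}\,\phi(u)\,du,$

and a European-style price is the discounted integral of the payoff against $f$; because $\phi$ is typically known in closed form for Lévy processes while $f$ is not, quadrature in the Fourier domain (for example, sinc or Hilbert-transform based rules) can converge exponentially fast for sufficiently smooth integrands.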
- Date Issued
- 2016
- Identifier
- FSU_FA2016_Gu_fsu_0071E_13579
- Format
- Thesis
- Title
- Modeling Credit Risk in the Default Threshold Framework.
- Creator
-
Chiu, Chun-Yuan, Kercheval, Alec N., Chicken, Eric, Ökten, Giray, Fahim, Arash, Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
The default threshold framework for credit risk modeling developed by Garreau and Kercheval [SIAM Journal on Financial Mathematics, 7:642-673, 2016] enjoys the advantages of both structural form models and reduced form models, including excellent analytical tractability. In their paper, the closed form default time distribution of a company is derived when the default threshold is a constant or a deterministic function. For a stochastic default threshold, it is shown that the survival probability can be written as an expectation; how to specify the stochastic default threshold so that this expectation can be obtained in closed form is, however, left unanswered. The purpose of this thesis is to fill this gap. In this thesis, three credit risk models with stochastic default thresholds are proposed, and under each of them the closed form default time distribution is derived. Unlike Garreau and Kercheval's work, where the log-return of a company's stock price is assumed to be independent and identically distributed and the interest rate is assumed constant, our new proposed models take a random interest rate and the stochastic volatility of a company's stock price into consideration. While in some cases the defaultable bond price, the credit spread and the CDS premium are derived in closed form under the new proposed models, in others closed forms appear harder to obtain; the difficulty that prevents closed form formulas is also discussed in this thesis. Our new models involve the Heston model, which has a closed form characteristic function. We find that the characteristic function formula commonly used in the literature is not applicable for all input variables. In this thesis the safe region of the formula is analyzed completely. A new formula is also derived that can be used to find the characteristic function value in some cases when the common formula is not applicable. An example is given where the common formula fails and one should use the new formula.
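For reference (the standard Heston dynamics in the usual notation, added here as a hedged sketch rather than drawn from the thesis), the stochastic volatility model in question is

$dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad dv_t = \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{(2)}, \qquad d\langle W^{(1)}, W^{(2)}\rangle_t = \rho\,dt,$

and the closed-form characteristic function referred to above is that of the log-price $\log S_T$; one well-known numerical subtlety with common representations of it is the branch choice of the complex logarithm, although the specific applicability issue analyzed in the thesis is not reproduced here.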
- Date Issued
- 2016
- Identifier
- FSU_FA2016_Chiu_fsu_0071E_13584
- Format
- Thesis
- Title
- GPU Computing in Financial Engineering.
- Creator
-
Xu, Linlin, Ökten, Giray, Sinha, Debajyoti, Bellenot, Steven F., Gallivan, Kyle A., Kercheval, Alec N., Florida State University, College of Arts and Sciences, Department of Mathematics
- Abstract/Description
-
GPU computing has become popular in computational finance, and many financial institutions are moving their CPU-based applications to the GPU platform. We explore efficient GPU implementations for two main financial problems: pricing, and computing sensitivities (Greeks). Since most Monte Carlo algorithms are embarrassingly parallel, Monte Carlo has become a focal point in GPU computing. GPU speed-up examples reported in the literature often involve Monte Carlo algorithms, and there are commercially available software tools that help migrate Monte Carlo financial pricing models to GPU. We present a survey of Monte Carlo and randomized quasi-Monte Carlo methods, and discuss the existing (quasi) Monte Carlo sequences in NVIDIA's CURAND GPU library. We discuss specific features of GPU architecture relevant for developing efficient (quasi) Monte Carlo methods. We introduce a recent randomized quasi-Monte Carlo method and compare it with some of the existing GPU implementations when they are used to price caplets in the LIBOR market model and mortgage-backed securities. We then develop a cache-aware implementation of a 3D parabolic PDE solver on GPU. We apply the well-known Craig-Sneyd scheme and derive the corresponding discretization. We discuss the memory hierarchy of the GPU and suggest a data structure that is suitable for the GPU's caching system. We compare the performance of the PDE solver on CPU and GPU. Finally, we consider sensitivity analysis for financial problems via Monte Carlo and PDE methods. We review three commonly used methods and point out their advantages and disadvantages. We present a survey of automatic differentiation (AD) and show the challenges faced in memory consumption when AD is applied to financial problems. We discuss two optimization techniques that help reduce the memory footprint significantly. We conduct the sensitivity analysis for the LIBOR market model and suggest an optimization for its AD implementation on GPU. We also apply AD to a 3D parabolic PDE and use GPU to reduce the execution time.
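The pairing of randomized quasi-Monte Carlo with an embarrassingly parallel payoff evaluation can be illustrated with a small, hedged sketch: a CPU NumPy-and-SciPy stand-in under assumed Black-Scholes parameters, not the dissertation's GPU code or the LIBOR market model.

    import numpy as np
    from scipy.stats import norm, qmc

    def rqmc_call_price(s0=100.0, k=100.0, r=0.03, sigma=0.2, T=1.0,
                        log2_paths=16, seed=7):
        # Scrambled Sobol points: a randomized quasi-Monte Carlo sequence.
        sampler = qmc.Sobol(d=1, scramble=True, seed=seed)
        u = sampler.random_base2(m=log2_paths).ravel()   # 2**log2_paths points in [0, 1)
        u = np.clip(u, 1e-12, 1.0 - 1e-12)               # guard against endpoint values
        z = norm.ppf(u)                                   # map to standard normal draws
        # Each path depends only on its own draw, so this loop-free evaluation
        # has the same embarrassingly parallel structure that maps onto GPU threads.
        st = s0 * np.exp((r - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)
        return np.exp(-r * T) * np.maximum(st - k, 0.0).mean()

    if __name__ == "__main__":
        print(rqmc_call_price())

Randomization (scrambling) keeps the quasi-Monte Carlo error estimable from independent replications while retaining its faster convergence for smooth integrands.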
- Date Issued
- 2015
- Identifier
- FSU_migr_etd-9526
- Format
- Thesis