Search results
 Title
 Detonability Of White Dwarf Plasma: Turbulence Models At Low Densities.
 Creator

Fenn, D., Plewa, T.
 Abstract/Description

We study the conditions required to produce self-sustained detonations in turbulent, carbon-oxygen degenerate plasma at low densities. We perform a series of three-dimensional hydrodynamic simulations of turbulence driven with various degrees of compressibility. The average conditions in the simulations are representative of models of merging binary white dwarfs. We find that material with very short ignition times is abundant when turbulence is driven compressively. This material forms contiguous structures that persist over many ignition times, and that we identify as prospective detonation kernels. Detailed analysis of prospective kernels reveals that these objects are centrally condensed and their shape is characterized by low curvature, supportive of self-sustained detonations. The key characteristic of the newly proposed detonation mechanism is thus a high degree of compressibility of the turbulent driving. The simulated detonation kernels have sizes notably smaller than the spatial resolution of any white dwarf merger simulation performed to date. The resolution required to resolve kernels is 0.1 km. Our results indicate a high probability of detonations in such well-resolved simulations of carbon-oxygen white dwarf mergers. These simulations will likely produce detonations in systems of lower total mass, thus broadening the population of white dwarf binaries capable of producing Type Ia supernovae. Consequently, we expect a downward revision of the lower limit of the total merger mass that is capable of producing a prompt detonation. We review application of the new detonation mechanism to various explosion scenarios of single, Chandrasekhar-mass white dwarfs.
 Date Issued
 2017-06
 Identifier
 FSU_libsubv1_wos_000399429600007, 10.1093/mnras/stx524
 Format
 Citation
 Title
 Bayesian Calibration Of Groundwater Models With Input Data Uncertainty.
 Creator

Xu, Tianfang, Valocchi, Albert J., Ye, Ming, Liang, Feng, Lin, Yu-Feng
 Abstract/Description

Effective water resources management typically relies on numerical models to analyze groundwater flow and solute transport processes. Groundwater models are often subject to input data uncertainty, as some inputs (such as recharge and well pumping rates) are estimated and subject to uncertainty. Current practices of groundwater model calibration often overlook uncertainties in input data; this can lead to biased parameter estimates and compromised predictions. Through a synthetic case study of surface-ground water interaction under changing pumping conditions and land use, we investigate the impacts of uncertain pumping and recharge rates on model calibration and uncertainty analysis. We then present a Bayesian framework of model calibration to handle uncertain input of groundwater models. The framework implements a marginalizing step to account for input data uncertainty when evaluating likelihood. It was found that not accounting for input uncertainty may lead to biased, overconfident parameter estimates because parameters could be over-adjusted to compensate for possible input data errors. Parameter compensation can have deleterious impacts when the calibrated model is used to make forecasts under a scenario that is different from calibration conditions. By marginalizing input data uncertainty, the Bayesian calibration approach effectively alleviates parameter compensation and gives more accurate predictions in the synthetic case study. The marginalizing Bayesian method also decomposes prediction uncertainty into uncertainties contributed by parameters, input data, and measurements. The results underscore the need to account for input uncertainty to better inform post-modeling decision making.
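The marginalizing step this abstract describes can be sketched in a few lines: instead of evaluating the likelihood of the parameters at one assumed input value, the likelihood is averaged over random realizations of the uncertain input. Everything below (the toy `forward_model`, the Gaussian error model, the variable names) is a hypothetical illustration under simple assumptions, not the authors' implementation.

```python
import numpy as np

def forward_model(theta, recharge):
    """Toy groundwater 'model': the predicted head depends on a
    calibration parameter and an uncertain recharge input."""
    return theta * recharge

def marginal_likelihood(theta, obs, recharge_mean, recharge_sd,
                        obs_sd=0.1, n_draws=2000, rng=None):
    """Monte Carlo average of the Gaussian likelihood over draws of
    the uncertain input, so input error is not forced into theta."""
    rng = np.random.default_rng(rng)
    recharge = rng.normal(recharge_mean, recharge_sd, n_draws)
    resid = obs - forward_model(theta, recharge)      # (n_draws,)
    like = np.exp(-0.5 * (resid / obs_sd) ** 2) / (obs_sd * np.sqrt(2 * np.pi))
    return like.mean()
```

With a nearly certain input the marginal likelihood collapses to the usual point likelihood; with a wide input distribution it flattens, which is the mechanism that alleviates the parameter compensation discussed above.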
 Date Issued
 2017-04
 Identifier
 FSU_libsubv1_wos_000403682600039, 10.1002/2016WR019512
 Format
 Citation
 Title
 A New Process Sensitivity Index To Identify Important System Processes Under Process Model And Parametric Uncertainty.
 Creator

Dai, Heng, Ye, Ming, Walker, Anthony P., Chen, Xingyuan
 Abstract/Description

A hydrological model consists of multiple process-level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. For demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

Plain Language Summary: If we have only one model, we always know how to identify the important factors of the model. However, if there are multiple models, it is not always clear how to identify the important factors: the factors important to one model may not be important to another model. It is necessary to develop a method that can identify important factors not for a single model but for multiple models. This study aims to resolve this problem by developing a mathematically rigorous method that provides a single summary measure for identifying important factors in the face of competing models. This is called multimodel process sensitivity analysis, and the mathematical measure is called the process sensitivity index. The new index is demonstrated using a numerical example of groundwater reactive transport modeling with two recharge models and two geology models. The multimodel process sensitivity analysis has a wide range of applications in hydrologic and environmental modeling.
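A hedged numerical sketch in the spirit of this abstract: two alternative "recharge" models and two "geology" models, each with its own random parameter, and a variance-based index for the recharge process computed as the variance of the output's conditional means across recharge alternatives, divided by the total output variance. The toy output function, distributions, and equal model weights are all illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000  # parameter samples per model combination

def output(recharge, conductivity):
    # Toy head-like quantity driven by both processes.
    return recharge / conductivity

# Two recharge process models (different parameter distributions).
recharge_models = [lambda: rng.normal(1.0, 0.3, n),
                   lambda: rng.normal(1.5, 0.1, n)]
# Two geology (hydraulic conductivity) process models.
geology_models = [lambda: rng.lognormal(0.0, 0.2, n),
                  lambda: rng.lognormal(0.5, 0.2, n)]

# Sample the output under every model combination (equal model weights).
samples = {(i, j): output(r(), g())
           for i, r in enumerate(recharge_models)
           for j, g in enumerate(geology_models)}

all_out = np.concatenate(list(samples.values()))
total_var = all_out.var()

# Conditional mean of the output given each recharge alternative,
# averaged over the geology alternatives and their parameters.
recharge_means = [np.concatenate([samples[(i, j)] for j in (0, 1)]).mean()
                  for i in (0, 1)]
ps_recharge = np.var(recharge_means) / total_var
print(f"process sensitivity index (recharge) = {ps_recharge:.3f}")
```

By the law of total variance the index lies in [0, 1]; a value near 1 would mark recharge, including the choice among its competing models, as the dominant source of output variance.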
 Date Issued
 2017-04
 Identifier
 FSU_libsubv1_wos_000403682600053, 10.1002/2016WR019715
 Format
 Citation
 Title
 A Measure-Theoretic Algorithm For Estimating Bottom Friction In A Coastal Inlet: Case Study Of Bay St. Louis During Hurricane Gustav (2008).
 Creator

Graham, Lindley, Butler, Troy, Walsh, Scott, Dawson, Clint, Westerink, Joannes J.
 Abstract/Description

The majority of structural damage and loss of life during a hurricane is due to storm surge, thus it is important for communities in hurricane-prone regions to understand their risk due to surge. Storm surge in particular is largely influenced by coastal features such as topography/bathymetry and bottom roughness. Bottom roughness determines how much resistance there is to the flow. Manning's formula can be used to model the bottom stress with the Manning's n coefficient, a spatially dependent field. Given a storm surge model and a set of model outputs, an inverse problem may be solved to determine probable Manning's n fields to use for predictive simulations. The inverse problem is formulated and solved in a measure-theoretic framework using the state-of-the-art Advanced Circulation (ADCIRC) storm surge model. The use of measure theory requires minimal assumptions and involves the direct inversion of the physics-based map from model inputs to output data determined by the ADCIRC model. Thus, key geometric relationships in this map are preserved and exploited. By using a recently available subdomain implementation of ADCIRC that significantly reduces the computational cost of forward model solves, the authors demonstrate the method on a case study using data obtained from an ADCIRC hindcast study of Hurricane Gustav (2008) to quantify uncertainties in Manning's n within Bay St. Louis. However, the methodology is general and could be applied to any inverse problem that involves a map from model input to output quantities of interest.
 Date Issued
 2017-03
 Identifier
 FSU_libsubv1_wos_000399327900006, 10.1175/MWR-D-16-0149.1
 Format
 Citation
 Title
 A MeasureTheoretic Algorithm for Estimating Bottom Friction in a Coastal Inlet: Case Study of Bay St. Louis during Hurricane Gustav (2008).
 Creator

Graham, Lindley, Butler, Troy, Walsh, Scott, Dawson, Clint, Westerink, Joannes J.
 Abstract/Description

The majority of structural damage and loss of life during a hurricane is due to storm surge, thus it is important for communities in hurricane-prone regions to understand their risk due to surge. Storm surge in particular is largely influenced by coastal features such as topography/bathymetry and bottom roughness. Bottom roughness determines how much resistance there is to the flow. Manning’s formula can be used to model the bottom stress with the Manning’s n coefficient, a spatially dependent field. Given a storm surge model and a set of model outputs, an inverse problem may be solved to determine probable Manning’s n fields to use for predictive simulations. The inverse problem is formulated and solved in a measure-theoretic framework using the state-of-the-art Advanced Circulation (ADCIRC) storm surge model. The use of measure theory requires minimal assumptions and involves the direct inversion of the physics-based map from model inputs to output data determined by the ADCIRC model. Thus, key geometric relationships in this map are preserved and exploited. By using a recently available subdomain implementation of ADCIRC that significantly reduces the computational cost of forward model solves, the authors demonstrate the method on a case study using data obtained from an ADCIRC hindcast study of Hurricane Gustav (2008) to quantify uncertainties in Manning’s n within Bay St. Louis. However, the methodology is general and could be applied to any inverse problem that involves a map from model input to output quantities of interest.
 Date Issued
 2017-02-21
 Identifier
 FSU_libsubv1_scholarship_submission_1487883656, 10.1175/MWR-D-16-0149.1
 Format
 Citation
 Title
 Inter- And Intraobserver Agreement Of BI-RADS-Based Subjective Visual Estimation Of Amount Of Fibroglandular Breast Tissue With Magnetic Resonance Imaging.
 Creator

Wengert, G. J., Helbich, T. H., Woitek, R., Kapetas, P., Clauser, P., Baltzer, P. A., Vogl, W.-D., Weber, M., Meyer-Baese, A., Pinker, Katja
 Abstract/Description

To evaluate the inter-/intraobserver agreement of BI-RADS-based subjective visual estimation of the amount of fibroglandular tissue (FGT) with magnetic resonance imaging (MRI), and to investigate whether FGT assessment benefits from an automated, observer-independent, quantitative MRI measurement by comparing both approaches. Eighty women with no imaging abnormalities (BI-RADS 1 and 2) were included in this institutional review board (IRB)-approved prospective study. All women underwent unenhanced breast MRI. Four radiologists independently assessed FGT with MRI by subjective visual estimation according to BI-RADS. Automated observer-independent quantitative measurement of FGT with MRI was performed using a previously described measurement system. Inter-/intraobserver agreements of qualitative and quantitative FGT measurements were assessed using Cohen's kappa (k). Inexperienced readers achieved moderate inter-/intraobserver agreement and experienced readers a substantial inter- and perfect intraobserver agreement for subjective visual estimation of FGT. Practice and experience reduced observer dependency. Automated observer-independent quantitative measurement of FGT was successfully performed and revealed only fair to moderate agreement (k = 0.209-0.497) with subjective visual estimations of FGT. Subjective visual estimation of FGT with MRI shows moderate intra-/interobserver agreement, which can be improved by practice and experience. Automated observer-independent quantitative measurements of FGT are necessary to allow a standardized risk evaluation.
• Subjective FGT estimation with MRI shows moderate intra-/interobserver agreement in inexperienced readers.
• Interobserver agreement can be improved by practice and experience.
• Automated observer-independent quantitative measurements can provide reliable and standardized assessment of FGT with MRI.
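Cohen's kappa, the agreement statistic this abstract reports (k = 0.209-0.497), corrects observed agreement for chance agreement. A minimal from-scratch sketch follows; the two rating sequences are made-up category labels from two hypothetical readers, not study data.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(c1[cat] * c2[cat] for cat in set(c1) | set(c2)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical density categories (a-d) assigned by two readers.
r1 = list("aabbccddab")
r2 = list("aabbccdcbb")
print(round(cohens_kappa(r1, r2), 3))
```

Values near 0 mean agreement no better than chance and 1 means perfect agreement, which is how the "fair to moderate" range quoted above is read.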
 Date Issued
 2016-11
 Identifier
 FSU_libsubv1_wos_000385248600016, 10.1007/s00330-016-4274-x
 Format
 Citation
 Title
 Diversification In Wild Populations Of The Model Organism Anolis Carolinensis: A Genome-Wide Phylogeographic Investigation.
 Creator

Manthey, Joseph D., Tollis, Marc, Lemmon, Alan R., Lemmon, Emily Moriarty, Boissinot, Stephane
 Abstract/Description

The green anole (Anolis carolinensis) is a lizard widespread throughout the southeastern United States and is a model organism for the study of reproductive behavior, physiology, neural biology, and genomics. Previous phylogeographic studies of A. carolinensis using mitochondrial DNA and small numbers of nuclear loci identified conflicting and poorly supported relationships among geographically structured clades; these inconsistencies preclude confident use of A. carolinensis evolutionary history in association with morphological, physiological, or reproductive biology studies among sampling localities and necessitate increased effort to resolve evolutionary relationships among natural populations. Here, we used anchored hybrid enrichment of hundreds of genetic markers across the genome of A. carolinensis and identified five strongly supported phylogeographic groups. Using multiple analyses, we produced a fully resolved species tree, investigated relative support for each lineage across all gene trees, and identified mitonuclear discordance when comparing our results to previous studies. We found fixed differences in only one clade (southern Florida, restricted to the Everglades region), while most polymorphisms were shared between lineages. The southern Florida group likely diverged from other populations during the Pliocene, with all other diversification during the Pleistocene. Multiple lines of support, including phylogenetic relationships, a latitudinal gradient in genetic diversity, and relatively more stable long-term population sizes in southern phylogeographic groups, indicate that diversification in A. carolinensis occurred northward from southern Florida.
 Date Issued
 2016-11
 Identifier
 FSU_libsubv1_wos_000387664500017, 10.1002/ece3.2547
 Format
 Citation
 Title
 No Double Detonations But Core Carbon Ignitions In High-Resolution, Grid-Based Simulations Of Binary White Dwarf Mergers.
 Creator

Fenn, D., Plewa, T., Gawryszczak, A.
 Abstract/Description

We study the violent phase of the merger of massive binary white dwarf systems. Our aim is to characterize the conditions for explosive burning to occur, and identify a possible explosion mechanism of Type Ia supernovae. The primary components of our model systems are carbon-oxygen (C/O) white dwarfs, while the secondaries are made either of C/O or of pure helium. We account for tidal effects in the initial conditions in a self-consistent way, and consider initially well-separated systems with slow inspiral rates. We study the merger evolution using an adaptive mesh refinement, reactive, Eulerian code in three dimensions, assuming symmetry across the orbital plane. We use a corotating reference frame to minimize the effects of numerical diffusion, and solve for self-gravity using a multigrid approach. We find a novel detonation mechanism in C/O mergers with massive primaries. Here, the detonation occurs in the primary's core and relies on the combined action of tidal heating, accretion heating, and self-heating due to nuclear burning. The exploding structure is compositionally stratified, with a reverse shock formed at the surface of the dense ejecta. The existence of such a shock has not been reported elsewhere. The explosion energy (1.6 × 10^51 erg) and Ni-56 mass (0.86 M☉) are consistent with an SN Ia at the bright end of the luminosity distribution, with an approximated decline rate of Δm_15(B) ≈ 0.99. Our study does not support double-detonation scenarios in the case of a system with a 0.6 M☉ helium secondary and a 0.9 M☉ primary. Although the accreted helium detonates, it fails to ignite carbon at the base of the boundary layer or in the primary's core.
 Date Issued
 2016-11-01
 Identifier
 FSU_libsubv1_wos_000384676000015, 10.1093/mnras/stw1831
 Format
 Citation
 Title
 Estimating daily air temperatures over the Tibetan Plateau by dynamically integrating MODIS LST data.
 Creator

Zhang, Hongbo, Zhang, Fan, Ye, Ming, Che, Tao, Zhang, Guoqing
 Abstract/Description

Recently, remotely sensed land surface temperature (LST) data have been used to estimate air temperatures because of the sparseness of station measurements in remote mountainous areas. Due to the availability and accuracy of Moderate Resolution Imaging Spectroradiometer (MODIS) LST data, the use of a single term or a fixed combination of terms (e.g., Terra/Aqua night and Terra/Aqua day), as used in previous estimation methods, provides only limited practical application. Furthermore, the estimation accuracy may be affected by different combinations and variable data quality among the MODIS LST terms and models. This study presents a method that dynamically integrates the available LST terms to estimate the daily mean air temperature and simultaneously considers model selection, data quality, and estimation accuracy. The results indicate that the differences in model performance are related to the combinations of LST terms and their data quality. The spatially averaged cloud cover of ~14% for the developed product between 2003 and 2010 is much lower than the 35-54% for single LST terms. The average cross-validation root-mean-square difference values are approximately 2 °C. This study identifies the best LST combinations and statistical models and provides an efficient method for daily air temperature estimation with low cloud blockage over the Tibetan Plateau (TP). The developed data set and the method proposed in this study can help alleviate the problem of sparse air temperature data over the TP.
 Date Issued
 2016-10
 Identifier
 FSU_libsubv1_wos_000386976100033, 10.1002/2016JD025154
 Format
 Citation
 Title
 Anchored Enrichment Dataset For True Flies (order Diptera) Reveals Insights Into The Phylogeny Of Flower Flies (family Syrphidae).
 Creator

Young, Andrew Donovan, Lemmon, Alan R., Skevington, Jeffrey H., Mengual, Ximo, Stahls, Gunilla, Reemer, Menno, Jordaens, Kurt, Kelso, Scott, Lemmon, Emily Moriarty, Hauser, Martin, De Meyer, Marc, Misof, Bernhard, Wiegmann, Brian M.
 Abstract/Description

Background: Anchored hybrid enrichment is a form of next-generation sequencing that uses oligonucleotide probes to target conserved regions of the genome flanked by less conserved regions in order to acquire data useful for phylogenetic inference from a broad range of taxa. Once a probe kit is developed, anchored hybrid enrichment is superior to traditional PCR-based Sanger sequencing in terms of both the amount of genomic data that can be recovered and effective cost. Due to their incredibly diverse nature, importance as pollinators, and historical instability with regard to subfamilial and tribal classification, Syrphidae (flower flies or hoverflies) are an ideal candidate for anchored hybrid enrichment-based phylogenetics, especially since recent molecular phylogenies of the syrphids using only a few markers have resulted in highly unresolved topologies. Over 6200 syrphids are currently known and uncovering their phylogeny will help us to understand how these species have diversified, providing insight into an array of ecological processes, from the development of adult mimicry and the origin of adult migration to pollination patterns and the evolution of larval resource utilization.

Results: We present the first use of anchored hybrid enrichment in insect phylogenetics on a dataset containing 30 flower fly species from across all four subfamilies and 11 tribes out of 15. To produce a phylogenetic hypothesis, 559 loci were sampled to produce a final dataset containing 217,702 sites. We recovered a well-resolved topology with bootstrap support values that were almost universally >95%. The subfamily Eristalinae is recovered as paraphyletic, with the strongest support for this hypothesis to date. The ant predators in the Microdontinae are sister to all other syrphids. Syrphinae and Pipizinae are monophyletic and sister to each other. Larval predation on soft-bodied hemipterans evolved only once in this family.

Conclusions: Anchored hybrid enrichment was successful in producing a robustly supported phylogenetic hypothesis for the syrphids. Subfamilial reconstruction is concordant with recent phylogenetic hypotheses, but with much higher support values. With the newly designed probe kit this analysis could be rapidly expanded with further sampling, opening the door to more comprehensive analyses targeting problem areas in syrphid phylogenetics and ecology.
 Date Issued
 2016-06-29
 Identifier
 FSU_libsubv1_wos_000378675500003, 10.1186/s12862-016-0714-0
 Format
 Citation
 Title
 Multiparametric [F-18]Fluorodeoxyglucose/[F-18]Fluoromisonidazole Positron Emission Tomography/Magnetic Resonance Imaging of Locally Advanced Cervical Cancer for the Non-Invasive Detection of Tumor Heterogeneity: A Pilot Study.
 Creator

Pinker, Katja, Andrzejewski, Piotr, Baltzer, Pascal, Polanec, Stephan H., Sturdza, Alina, Georg, Dietmar, Helbich, Thomas H., Karanikas, Georgios, Grimm, Christoph, Polterauer, Stephan, Poetter, Richard, Wadsak, Wolfgang, Mitterhauser, Markus, Georg, Petra
 Abstract/Description

Objectives: To investigate fused multiparametric positron emission tomography/magnetic resonance imaging (MP PET/MRI) at 3T in patients with locally advanced cervical cancer, using high-resolution T2-weighted, contrast-enhanced MRI (CE-MRI), diffusion-weighted imaging (DWI), and the radiotracers [F-18]fluorodeoxyglucose ([F-18]FDG) and [F-18]fluoromisonidazole ([F-18]FMISO) for the non-invasive detection of tumor heterogeneity for an improved planning of chemoradiation therapy (CRT). Materials and Methods: Sixteen patients with locally advanced cervical cancer were enrolled in this IRB-approved study and were examined with fused MP [F-18]FDG/[F-18]FMISO PET/MRI; in eleven patients complete data sets were acquired. MP PET/MRI was assessed for tumor volume, enhancement (EH) kinetics, diffusivity, and [F-18]FDG/[F-18]FMISO avidity. Descriptive statistics and voxel-by-voxel analysis of MRI and PET parameters were performed. Correlations were assessed using multiple correlation analysis. Results: All tumors displayed imaging parameters concordant with cervix cancer, i.e., type II/III EH kinetics, restricted diffusivity (median ADC 0.80 × 10^-3 mm²/s), [F-18]FDG avidity (median SUVmax 16.2), and [F-18]FMISO avidity (median SUVmax 3.1). In all patients, [F-18]FMISO PET identified the hypoxic tumor subvolume, which was independent of tumor volume. A voxel-by-voxel analysis revealed only weak correlations between the MRI and PET parameters (0.05-0.22), indicating that each individual parameter yields independent information and the presence of tumor heterogeneity.
 Date Issued
 2016-05-11
 Identifier
 FSU_libsubv1_wos_000376587300065, 10.1371/journal.pone.0155333
 Format
 Citation
 Title
 Regional Quasi-Three-Dimensional Unsaturated-Saturated Water Flow Model Based on a Vertical-Horizontal Splitting Concept.
 Creator

Zhu, Yan, Shi, Liangsheng, Wu, Jingwei, Ye, Ming, Cui, Lihong, Yang, Jinzhong
 Abstract/Description

Due to the high nonlinearity of the three-dimensional (3D) unsaturated-saturated water flow equation, using a fully 3D numerical model is computationally expensive for large-scale applications. A new unsaturated-saturated water flow model is developed in this paper based on the vertical/horizontal splitting (VHS) concept to split the 3D unsaturated-saturated Richards' equation into a two-dimensional (2D) horizontal equation and a one-dimensional (1D) vertical equation. The horizontal plane of average head gradient in the triangular prism element is derived to split the 3D equation into the 2D equation. The lateral flow in the horizontal plane of average head gradient represented by the 2D equation is then calculated by the water balance method. The 1D vertical equation is discretized by the finite difference method. The two equations are solved simultaneously by coupling them into a unified nonlinear system with a single matrix. Three synthetic cases are used to evaluate the developed model code by comparing the modeling results with those of Hydrus-1D, SWMS-2D, and FEFLOW. We further apply the model to regional-scale modeling to simulate groundwater table fluctuations for assessing the model applicability in complex conditions. The proposed modeling method is found to be accurate with respect to measurements.
Show less  Date Issued
 2016-05
 Identifier
 FSU_libsubv1_wos_000377984300026, 10.3390/w8050195
 Format
 Citation
 Title
 Extending the density functional embedding theory to finite temperature and an efficient iterative method for solving for embedding potentials.
 Creator

Huang, Chen
 Abstract/Description

A key element in the density functional embedding theory (DFET) is the embedding potential. We discuss two major issues related to the embedding potential: (1) its non-uniqueness and (2) the numerical difficulty of solving for it, especially for spin-polarized systems. To resolve the first issue, we extend DFET to finite temperature: all quantities, such as the subsystem densities and the total system's density, are calculated at a finite temperature. This is a physical extension, since materials work at finite temperatures. We show that the embedding potential is strictly unique at T > 0. To resolve the second issue, we introduce an efficient iterative embedding potential solver. We discuss how to relax the magnetic moments in subsystems and how to equilibrate the chemical potentials across subsystems. The solver is robust and efficient for several nontrivial examples, in all of which good-quality spin-polarized embedding potentials were obtained. We also demonstrate the solver on an extended periodic system: the iron body-centered cubic (110) surface, which is relevant to modeling heterogeneous catalysis involving iron, such as the Fischer-Tropsch and Haber processes. This work would make it efficient and accurate to perform embedding simulations of some challenging material problems, such as heterogeneous catalysis and defects of complicated spin configurations in electronic materials. (C) 2016 AIP Publishing LLC.
 Date Issued
 2016-03-28
 Identifier
 FSU_libsubv1_wos_000373644400009, 10.1063/1.4944464
 Format
 Citation
 Title
 A Landmark-Free Method for Three-Dimensional Shape Analysis.
 Creator

Pomidor, Benjamin J., Makedonska, Jana, Slice, Dennis E.
 Abstract/Description

Background: The tools and techniques used in morphometrics have always aimed to transform the physical shape of an object into a concise set of numerical data for mathematical analysis. The advent of landmark-based morphometrics opened new avenues of research, but these methods are not without drawbacks. The time investment required of trained individuals to accurately landmark a data set is significant, and the reliance on readily-identifiable physical features can hamper research efforts. This is especially true of those investigating smooth or featureless surfaces. Methods: In this paper, we present a new method to perform this transformation for data obtained from high-resolution scanning technology. This method uses surface scans, instead of landmarks, to calculate a shape difference metric analogous to Procrustes distance and perform superimposition. This is accomplished by building upon and extending the Iterative Closest Point algorithm. We also explore some new ways this data can be used; for example, we can calculate an averaged surface directly and visualize pointwise shape information over this surface. Finally, we briefly demonstrate this method on a set of primate skulls and compare the results of the new methodology with traditional geometric morphometric analysis.
 Date Issued
 2016-03-08
 Identifier
 FSU_libsubv1_wos_000371991300024, 10.1371/journal.pone.0150368
 Format
 Citation
 Title
 A Landmark-Free Method for Three-Dimensional Shape Analysis.
 Creator

Slice, Dennis, Pomidor, Benjamin J.
 Abstract/Description

Background: The tools and techniques used in morphometrics have always aimed to transform the physical shape of an object into a concise set of numerical data for mathematical analysis. The advent of landmark-based morphometrics opened new avenues of research, but these methods are not without drawbacks. The time investment required of trained individuals to accurately landmark a data set is significant, and the reliance on readily-identifiable physical features can hamper research efforts. This is especially true of those investigating smooth or featureless surfaces. Methods: In this paper, we present a new method to perform this transformation for data obtained from high-resolution scanning technology. This method uses surface scans, instead of landmarks, to calculate a shape difference metric analogous to Procrustes distance and perform superimposition. This is accomplished by building upon and extending the Iterative Closest Point algorithm. We also explore some new ways this data can be used; for example, we can calculate an averaged surface directly and visualize pointwise shape information over this surface. Finally, we briefly demonstrate this method on a set of primate skulls and compare the results of the new methodology with traditional geometric morphometric analysis.
 Date Issued
 2016-03-08
 Identifier
 FSU_libsubv1_scholarship_submission_1475156153, 10.1371/journal.pone.0150368
 Format
 Citation
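The Iterative Closest Point superimposition the record above builds on can be illustrated with a minimal sketch. This is not the authors' implementation: the function names, the plain Kabsch alignment step, and the brute-force nearest-neighbour search (only suitable for small point clouds) are our own assumptions.

```python
import numpy as np

def kabsch(A, B):
    """Best rigid motion (R, t) mapping 3D points A onto matched points B."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def icp(src, dst, n_iter=30):
    """Align src cloud to dst; return the aligned cloud and a mean
    closest-point distance (a rough analogue of a shape-difference metric)."""
    cur = src.copy()
    for _ in range(n_iter):
        # brute-force nearest neighbours in dst for every current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        match = dst[d2.argmin(1)]
        R, t = kabsch(cur, match)
        cur = cur @ R.T + t
    d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    return cur, np.sqrt(d2.min(1)).mean()
```

For two clouds that differ only by a small rigid motion, once the nearest-neighbour matches lock onto the true correspondences a single Kabsch step aligns them exactly, so the residual distance collapses to machine precision.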
 Title
 Evaluating marginal likelihood with thermodynamic integration method and comparison with several other numerical methods.
 Creator

Liu, Peigui, Elshall, Ahmed S., Ye, Ming, Beerli, Peter, Zeng, Xiankui, Lu, Dan, Tao, Yuezan
 Abstract/Description

Evaluating marginal likelihood is the most critical and computationally expensive task when conducting Bayesian model averaging to quantify parametric and model uncertainties. The evaluation is commonly done by using Laplace approximations to evaluate semi-analytical expressions of the marginal likelihood, or by using Monte Carlo (MC) methods to evaluate the arithmetic or harmonic mean of a joint likelihood function. This study introduces a new MC method, i.e., thermodynamic integration, which has not been attempted in environmental modeling. Instead of using samples only from the prior parameter space (as in arithmetic mean evaluation) or the posterior parameter space (as in harmonic mean evaluation), the thermodynamic integration method uses samples generated gradually from the prior to the posterior parameter space. This is done through path sampling that conducts Markov chain Monte Carlo simulation with different power coefficient values applied to the joint likelihood function. The thermodynamic integration method is evaluated using three analytical functions by comparing the method with two variants of the Laplace approximation method and three MC methods, including the nested sampling method that was recently introduced into environmental modeling. The thermodynamic integration method outperforms the other methods in terms of accuracy, convergence, and consistency. The thermodynamic integration method is also applied to a synthetic case of groundwater modeling with four alternative models. The application shows that model probabilities obtained using the thermodynamic integration method improve the predictive performance of Bayesian model averaging. The thermodynamic integration method is mathematically rigorous, and its MC implementation is computationally general for a wide range of environmental problems.
 Date Issued
 2016-02
 Identifier
 FSU_libsubv1_wos_000373117300008, 10.1002/2014WR016718
 Format
 Citation
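The power-posterior path underlying thermodynamic integration can be sketched on a toy conjugate model where the marginal likelihood is known in closed form. This is an illustrative reconstruction, not the authors' code: the model (one Gaussian datum with a standard normal prior), the beta grid, and all names are our assumptions, and direct sampling stands in for the MCMC runs a real application would need at each power coefficient.

```python
import numpy as np

def log_like(theta, y, sigma):
    # Gaussian log-likelihood of a single datum y given mean theta
    return -0.5 * np.log(2 * np.pi * sigma**2) - (y - theta)**2 / (2 * sigma**2)

def thermodynamic_integration(y, sigma=0.5, n_beta=50, n_samp=20000, seed=0):
    """Estimate ln p(y) = integral_0^1 E_beta[ln L] d(beta) via power posteriors.

    Prior: theta ~ N(0, 1). Each power posterior is proportional to
    N(theta | 0, 1) * L(theta)^beta, which stays Gaussian for this
    conjugate toy model, so we sample it directly instead of running
    a Markov chain at each beta.
    """
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_beta)
    e_ll = np.empty(n_beta)
    for i, b in enumerate(betas):
        prec = 1.0 + b / sigma**2            # power-posterior precision
        mean = (b * y / sigma**2) / prec     # power-posterior mean
        theta = rng.normal(mean, np.sqrt(1.0 / prec), size=n_samp)
        e_ll[i] = log_like(theta, y, sigma).mean()
    # trapezoidal rule along the beta path from prior (0) to posterior (1)
    return float(np.sum(0.5 * (e_ll[1:] + e_ll[:-1]) * np.diff(betas)))
```

For this model the exact answer is ln N(y | 0, 1 + sigma^2), so the estimate can be checked directly against the closed form.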
 Title
 On the origin of high ionic conductivity in Na-doped SrSiO3.
 Creator

Chien, PoHsiu, Jee, Youngseok, Huang, Chen, Dervisoglu, Riza, Hung, Ivan, Gan, Zhehong, Huang, Kevin, Hu, YanYan
 Abstract/Description

Understanding the local structure and ion dynamics is at the heart of ion conductor research. This paper reports on a high-resolution solid-state Si-29, Na-23, and O-17 NMR investigation of the structure, chemical composition, and ion dynamics of a newly discovered fast ion conductor, Na-doped SrSiO3, which exhibited a much higher ionic conductivity than most current oxide ion conductors. Quantitative analyses reveal that with a small dose (<10 mol%) of Na, the doped Na integrates into the SrSiO3 structure to form NaxSr1-xSiO3-0.5x, and with >10 mol% Na doping, phase separation occurs, leading to the formation of an amorphous phase, beta-Na2Si2O5, and a crystalline Sr-rich phase. Variable-temperature Na-23 and O-17 magic-angle-spinning NMR up to 618 degrees C has shown significant changes in Na ion dynamics at high temperatures but little oxide ion motion, suggesting that Na ions are responsible for the observed high ionic conductivity. In addition, beta-Na2Si2O5 starts to crystallize at temperatures higher than 480 degrees C with prolonged heating, resulting in a reduction in Na+ motion, and thus degradation of ionic conductivity. This study has contributed critical evidence to the understanding of ionic conduction in Na-doped SrSiO3 and demonstrated that multinuclear high-resolution and high-temperature solid-state NMR is a uniquely useful tool for investigating ion conductors at their operating conditions.
 Date Issued
 2016
 Identifier
 FSU_libsubv1_wos_000377262200023, 10.1039/c5sc04270d
 Format
 Citation
 Title
 A MULTISCALE IMPLEMENTATION BASED ON ADAPTIVE MESH REFINEMENT FOR THE NONLOCAL PERIDYNAMICS MODEL IN ONE DIMENSION.
 Creator

Xu, Feifei, Gunzburger, Max, Burkardt, John, Du, Qiang
 Abstract/Description

Peridynamics models for solid mechanics feature a horizon parameter delta that specifies the maximum extent of nonlocal interactions. In this paper, a multiscale implementation of peridynamics models is proposed. In regions in which the displacement field is smooth, grid sizes are large relative to delta, leading to a local behavior of the models, whereas in regions containing defects, e.g., cracks, delta is larger than the grid size. Discontinuous (continuous) Galerkin finite element discretizations are used in regions where defects do (do not) occur. Moreover, in regions where no defects occur, the multiscale implementation seamlessly transitions to the use of a standard finite element discretization of a corresponding PDE model. Here, we demonstrate the multiscale implementation in a simple one-dimensional setting. An adaptive strategy is incorporated to detect discontinuities and effect grid refinement, resulting in a highly accurate and efficient implementation of peridynamics.
 Date Issued
 2016
 Identifier
 FSU_libsubv1_wos_000373366500014, 10.1137/15M1010300
 Format
 Citation
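The role of the horizon parameter delta can be made concrete with a minimal bond-based 1D force computation. This is our own illustration of the classical bond-based model, not the paper's multiscale DG/FEM implementation: the function name, the constant micromodulus c, and the midpoint quadrature are assumptions, and boundary points simply see truncated neighborhoods.

```python
import numpy as np

def peridynamic_force_1d(u, dx, delta, c):
    """Bond-based 1D peridynamic internal force density on a uniform grid.

    f(x_i) = sum over neighbours x_j with 0 < |x_j - x_i| <= delta of
             c * (u_j - u_i) / |x_j - x_i| * dx
    i.e. a midpoint quadrature of the nonlocal integral; the horizon delta
    bounds how far interactions reach.
    """
    n = len(u)
    m = int(round(delta / dx))               # neighbour cells per side
    f = np.zeros(n)
    for i in range(n):
        for j in range(max(0, i - m), min(n, i + m + 1)):
            if j != i:
                xi = (j - i) * dx            # reference bond length (signed)
                f[i] += c * (u[j] - u[i]) / abs(xi) * dx
    return f
```

For a linear displacement field the symmetric bond contributions cancel pairwise, so the interior force density vanishes, mirroring the local limit in which force tracks the second derivative of displacement.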
 Title
 Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection.
 Creator

Zhang, Guannan, Webster, Clayton G., Gunzburger, Max, Burkardt, John
 Abstract/Description

This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.
 Date Issued
 2016
 Identifier
 FSU_libsubv1_wos_000381097400005, 10.1137/16M1071699
 Format
 Citation
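The hyperspherical coordinate representation the record above refers to can be sketched as a simple round-trip transform. The function names are ours, and this shows only the coordinate change itself, not the sparse-approximation machinery built on top of it.

```python
import numpy as np

def to_hyperspherical(x):
    """Cartesian -> hyperspherical (r, phi_1..phi_{n-1}) in n dimensions.

    phi_1..phi_{n-2} lie in [0, pi]; the last angle spans (-pi, pi].
    """
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    phis = []
    for k in range(len(x) - 2):
        # angle against axis k, measured from the norm of the remaining tail
        phis.append(np.arctan2(np.linalg.norm(x[k + 1:]), x[k]))
    phis.append(np.arctan2(x[-1], x[-2]))    # final, full-circle angle
    return r, np.array(phis)

def to_cartesian(r, phis):
    """Inverse map: rebuild the Cartesian point from (r, phis)."""
    n = len(phis) + 1
    x = np.empty(n)
    s = r                                    # running product r*sin(phi_1)*...
    for k in range(n - 1):
        x[k] = s * np.cos(phis[k])
        s *= np.sin(phis[k])
    x[-1] = s
    return x
```

The first n-2 angles are computed from a nonnegative tail norm, which is why they stay in [0, pi]; only the last angle needs the full circle, so the pair of maps round-trips any point exactly.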
 Title
 OPTIMAL MODEL MANAGEMENT FOR MULTIFIDELITY MONTE CARLO ESTIMATION.
 Creator

Peherstorfer, Benjamin, Willcox, Karen, Gunzburger, Max
 Abstract/Description

This work presents an optimal model management strategy that exploits multifidelity surrogate models to accelerate the estimation of statistics of outputs of computationally expensive high-fidelity models. Existing acceleration methods typically exploit a multilevel hierarchy of surrogate models that follow a known rate of error decay and computational costs; however, a general collection of surrogate models, which may include projection-based reduced models, data-fit models, support vector machines, and simplified-physics models, does not necessarily give rise to such a hierarchy. Our multifidelity approach provides a framework to combine an arbitrary number of surrogate models of any type. Instead of relying on error and cost rates, an optimization problem balances the number of model evaluations across the high-fidelity and surrogate models with respect to error and costs. We show that a unique analytic solution of the model management optimization problem exists under mild conditions on the models. Our multifidelity method makes occasional recourse to the high-fidelity model; in doing so it provides an unbiased estimator of the statistics of the high-fidelity model, even in the absence of error bounds and error estimators for the surrogate models. Numerical experiments with linear and nonlinear examples show that speedups by orders of magnitude are obtained compared to Monte Carlo estimation that invokes a single model only.
 Date Issued
 2016
 Identifier
 FSU_libsubv1_wos_000387347700070, 10.1137/15M1046472
 Format
 Citation
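The unbiasedness property described above can be sketched in its simplest two-model form as a control variate around a cheap surrogate. Everything here (the toy model pair, the sample sizes, the coefficient estimate) is our own assumption for illustration; the paper's contribution is the optimal evaluation allocation across an arbitrary collection of surrogates, which this sketch does not implement.

```python
import numpy as np

def f_hi(x):
    # "high-fidelity" model (assumed for illustration)
    return np.exp(np.sin(3 * x)) + 0.1 * x**2

def f_lo(x):
    # cheap correlated surrogate: a truncated expansion of f_hi
    return 1.0 + np.sin(3 * x)

def mfmc_estimate(n_hi=200, n_lo=20000, seed=0):
    """Two-model multifidelity Monte Carlo estimate of E[f_hi(X)], X ~ N(0,1).

    The estimator  mean_hi + a * (mean_lo_big - mean_lo_small)  is unbiased
    for any fixed coefficient a, because the correction has zero mean;
    estimating a from the shared samples adds only a small O(1/n) bias.
    """
    rng = np.random.default_rng(seed)
    x_small = rng.normal(size=n_hi)     # shared inputs: both models evaluated
    x_big = rng.normal(size=n_lo)       # extra inputs: surrogate only
    y_hi = f_hi(x_small)
    y_lo_small = f_lo(x_small)
    y_lo_big = f_lo(x_big)
    # control-variate coefficient a = cov(hi, lo) / var(lo) from the shared set
    a = np.cov(y_hi, y_lo_small)[0, 1] / y_lo_small.var(ddof=1)
    return y_hi.mean() + a * (y_lo_big.mean() - y_lo_small.mean())
```

The few high-fidelity evaluations keep the estimator centered on the high-fidelity statistic, while the many cheap surrogate evaluations absorb most of the variance, which is the basic mechanism the optimal allocation in the paper tunes.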