| Column   | Type   | Min length | Max length |
|----------|--------|------------|------------|
| url      | string | 14         | 1.76k      |
| text     | string | 100        | 1.02M      |
| metadata | string | 1.06k      | 1.1k       |
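Each row below pairs a source URL with its extracted text and a metadata JSON string. As a minimal, hypothetical sketch (the dataset's actual name, storage format, and loading API are not specified here), rows of this shape could be processed like this:

```python
import json

# Hypothetical row mirroring the schema above: url, text, and metadata as a JSON string
row = {
    "url": "https://example.org/some-page",
    "text": "Extracted page text ...",
    "metadata": '{"extraction_info": {"found_math": true, "math_score": 0.92}}',
}

# The metadata column stores serialized JSON; parse it to inspect extraction details
meta = json.loads(row["metadata"])
print(row["url"], meta["extraction_info"]["math_score"])
```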
https://nips.cc/Conferences/2021/ScheduleMultitrack?event=26994
Poster

From Optimality to Robustness: Adaptive Re-Sampling Strategies in Stochastic Bandits

Dorian Baudry · Patrick Saux · Odalric-Ambrym Maillard

Tue Dec 07 08:30 AM -- 10:00 AM (PST)

The stochastic multi-armed bandit problem has been extensively studied under standard assumptions on the arms' distributions (e.g., bounded with known support, exponential family, etc.). These assumptions are suitable for many real-world problems, but sometimes they require knowledge (on tails, for instance) that may not be precisely accessible to the practitioner, raising the question of the robustness of bandit algorithms to model misspecification. In this paper we study a generic *Dirichlet Sampling* (DS) algorithm, based on pairwise comparisons of empirical indices computed with *re-sampling* of the arms' observations and a data-dependent *exploration bonus*. We show that different variants of this strategy achieve provably optimal regret guarantees when the distributions are bounded, and logarithmic regret for semi-bounded distributions with a mild quantile condition. We also show that a simple tuning achieves robustness with respect to a large class of unbounded distributions, at the cost of slightly worse than logarithmic asymptotic regret. We finally provide numerical experiments showing the merits of DS in a decision-making problem on synthetic agriculture data.

#### Author Information

##### Patrick Saux (Inria)

PhD Student in Reinforcement Learning at Scool (ex SequeL).
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9200541973114014, "perplexity": 2307.5663180559777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358323.91/warc/CC-MAIN-20211127223710-20211128013710-00098.warc.gz"}
https://www.physicsforums.com/threads/divergence-of-newtons-law-of-gravitation.363717/
# Divergence of Newton's law of gravitation

1. Dec 15, 2009

### bitrex

I am studying vector calculus, and I saw the following result in a physics text: $$g = -\frac{m}{r^3}\vec{r}$$ $$r^2 = x^2 + y^2 + z^2$$ $$\vec{r} = ix + jy + kz$$ $$\nabla \cdot g = 0$$ I'm not sure how this was done. Is the product rule used somehow? What happened to the extra power of r? Thanks for any advice.

2. Dec 15, 2009

### Jolb

It helps to use the spherical version of divergence:

$$\nabla \cdot \vec{A} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 A_r\right) + \frac{1}{r\sin\theta}\frac{\partial}{\partial\theta}\left(A_\theta \sin\theta\right) + \frac{1}{r\sin\theta}\frac{\partial A_\phi}{\partial\phi}$$

I don't expect you to know the spherical version (few people know it by heart), but doing that problem in Cartesian coordinates is only for masochists. This is actually an interesting problem, though. That field is the gravitational field of a point mass (or electric field of a point charge) at the origin. Since there is a nonzero field everywhere, you'd expect there to be some source of the field, i.e., a point of nonzero divergence! You'll find that if you plug your field into that formula for the divergence, it is zero everywhere but has a value of 0/0 at the origin. In many texts, they give an argument for why that field's divergence should actually be a Dirac delta function (i.e., there is an infinitely small region of nonzero, finite divergence at the origin).

3. Jan 26, 2010

### coki2000

Okay, but why don't you use Cartesian coordinates instead of spherical coordinates? And why masochism?

4. Jan 26, 2010

### D H

Staff Emeritus

Computing $\nabla\cdot \vec g$ in spherical coordinates is easy. In spherical coordinates, the gravitational acceleration is $$\vec g= -\frac {mG}{r^2}\hat r$$ Thus with the caveat that $r\ne 0$, $$\nabla \cdot \vec g = \frac 1 {r^2} \frac {\partial}{\partial r}\left(r^2\left(-\frac {mG}{r^2}\right)\right) = \frac 1 {r^2} \frac {\partial }{\partial r}\left(-mG\right) = 0$$ The masochism arises in computing it in Cartesian coordinates. Try it.

5. Jan 26, 2010

### coki2000

But the divergence of the gravitational acceleration could be $$-4\pi GM$$, couldn't it?

6. Jan 26, 2010

### D H

Staff Emeritus

You are thinking of the integral of the divergence over some volume, $$\iiint_V \nabla \cdot \vec g \, dV = \oint_S \vec g \cdot d\vec S = -4\pi mG$$ This is why some write the divergence as a (three dimensional) delta function.

7. Jan 26, 2010

### coki2000

Thanks. Alright, what is the proof that $$\iiint_V \nabla \cdot \vec g \, dV = \oint_S \vec g \cdot d\vec S = -4\pi mG$$? Could you explain it to me?

8. Jan 26, 2010

### D H

Staff Emeritus

The first equality is the divergence theorem. The second equality results from evaluating the surface integral.
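A quick symbolic check of the Cartesian computation (not part of the original thread; the constant $mG$ is set to 1 purely for illustration) can be done with SymPy:

```python
import sympy as sp

# Cartesian coordinates and r = sqrt(x^2 + y^2 + z^2)
x, y, z = sp.symbols('x y z', real=True)
r = sp.sqrt(x**2 + y**2 + z**2)

# Gravitational field g = -(mG / r^3) * r_vec, with mG set to 1 for simplicity
g = [-x / r**3, -y / r**3, -z / r**3]

# Divergence in Cartesian coordinates: sum of the partial derivatives
div_g = sum(sp.diff(gi, xi) for gi, xi in zip(g, (x, y, z)))

print(sp.simplify(div_g))  # -> 0 (valid away from the origin)
```

Away from the origin the expression simplifies to zero, in line with the spherical-coordinate argument above; the singular behaviour at $r = 0$ is what the delta-function discussion addresses.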
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.940793514251709, "perplexity": 979.7353758448402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826306.47/warc/CC-MAIN-20181214184754-20181214210754-00525.warc.gz"}
https://space.stackexchange.com/questions/35980/at-what-altitude-does-a-spacecraft-slow-down
# At what altitude does a spacecraft slow down?

So as far as I understand, at 100 km the atmosphere still exists, although it is very faint. Could someone approximate the speed of a sphere at 50 km altitude after re-entry from a high orbit? It might sound like a weird question, but I am trying to calculate a factor and need an approximation of speed that would be realistic. An answer like 'hypersonic' would suffice with a minimum realistic value (educated guess). I am satisfying my curiosity by approximating the feasibility of something that's quite nuts.

• You would need to know the ballistic coefficient of your object (which you could calculate from the mass and diameter of the sphere) to calculate the terminal velocity at a given altitude. – user20636 May 7 '19 at 3:36

• I think the title needs a clean-up. Is this question asking about spaceships slowing down, or the behavior of a sphere at 50 km altitude? – Innovine May 7 '19 at 6:59

• Welcome to Space! Your title asks for altitude, but your post says speed. Which one? Also, do you ultimately want a theoretical answer, or what has actually occurred with past spacecraft? – DrSheldon May 7 '19 at 13:24

tl;dr: At 50 km altitude, about Mach 8-10 from LEO, and about Mach 15 returning from the Moon (which is coming in at roughly the same speed as it would from a high Earth orbit). Your mileage will vary depending on a lot of aerodynamic details!

See the question How does a Reentry Breakup Recorder survive reentry and then broadcast its data before impact? for the first image of the actual spacecraft, and then @Uwe's excellent answer for a lot more details on the spacecraft. For the data shown below, see the 2013 presentation by Andrew S. Feistel, Michael A. Weaver, and William H. Ailor, of the Vehicle Systems Division, The Aerospace Corporation: Comparison of Reentry Breakup Measurements for Three Atmospheric Reentries at the 6th IAASS Conference: Safety is Not an Option, May 2013. So for a small blunt reentry craft deorbited from LEO, it looks like about Mach 8 to 10.

Below: from page 10 of Comparison of Reentry Breakup Measurements for Three Atmospheric Reentries.

From NASA Technical Note TN D-6792: The Aerodynamic Environment of the Apollo Command Module during Superorbital Entry by Dorothy B. Lee and Winston D. Goodrich (1972), one of the examples shown gives about 16,000 feet/sec, or about 4850 m/s, or about Mach 15 when re-entering the atmosphere from cis-lunar space, i.e., at "superorbital" speed.

• Your answer suggests the speeds are dependent on the entry velocity, not the sizes of the objects. Was that intentional? – user20636 May 7 '19 at 14:34

Atmospheric effects during reentry usually start around 120 km. The atmosphere extends far beyond 100 km. Even at 400 km the ISS is constantly slowing down, enough to require periodic reboosts. The exosphere extends much further again, up to 10,000 km where it merges with the solar wind. The exosphere will also slow spacecraft down, but the amount only becomes significant over longer timescales. https://en.m.wikipedia.org/wiki/Exosphere

• There is no border at 120 km, the drag is gradually increasing from 400 km down to 100 km. – Uwe May 7 '19 at 7:45

• Obviously, but that's roughly the point when vehicles start taking an active interest in aerodynamics. The drag is actually increasing from thousands of km, but it only matters in years up there, days/weeks at hundreds of km, and in minutes/seconds as you approach 100 km. – Innovine May 7 '19 at 7:54

• +1 FYI I'm fine with adjusting the title (or other aspects) of the question. – uhoh May 7 '19 at 8:55

There's not an answer to that, at least not precisely. The compromises are:

• Slowing down higher up usually requires lift generation or a very high drag-to-mass ratio. If you're in a space shuttle this is a lot easier than in a capsule or just a chunk of rock. You also generate a bit of heat over a significant time, which favours non-ablative heat-shielding.

• Slowing down lower down requires surviving greater aerodynamic stresses. It also means higher temperatures, but they must be endured for less time. This favours ablative heat-shielding.

• The faster you are going to start with, the more speed you will still have left to get rid of lower down, even after scrubbing as much as you can in the upper atmosphere.

This graph: https://www.semanticscholar.org/paper/Radiation-Ablation-Coupling-for-Capsule-Reentry-via-Leyland-Morgan/471eb8a136f41006def99b65e8ccf3cc5592322e/figure/7 shows this in action for Saturn V capsules vs the Space Shuttle. These are pretty much opposite ends of the spectrum on all three counts, and as predicted the capsule does most of its slowing down at about half the altitude of the shuttle. I think these two should be good bounds, though. You would need quite exotic designs to extend the "slowing down" altitude considerably out of those ranges.
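To make the ballistic-coefficient comment above concrete, here is a rough sketch (not from the original answers) that estimates the speed at which drag balances weight for a sphere at 50 km, using a simple exponential model atmosphere; the mass, diameter, and drag coefficient are purely illustrative assumptions.

```python
import math

# Illustrative assumptions, not values from the original thread
mass = 500.0          # kg, sphere mass
diameter = 1.0        # m, sphere diameter
cd = 1.0              # rough hypersonic drag coefficient for a sphere
area = math.pi * (diameter / 2.0) ** 2

# Ballistic coefficient: mass / (Cd * A)
beta = mass / (cd * area)            # kg/m^2

# Simple exponential model atmosphere: rho = rho0 * exp(-h / H)
rho0 = 1.225                         # kg/m^3, sea-level density
H = 7200.0                           # m, approximate scale height
h = 50_000.0                         # m, altitude of interest
rho = rho0 * math.exp(-h / H)

# Speed at which drag equals weight ("terminal velocity"): v = sqrt(2 * beta * g / rho)
g = 9.81                             # m/s^2
v_term = math.sqrt(2.0 * beta * g / rho)

a_sound = 330.0                      # m/s, rough speed of sound near 50 km
print(f"ballistic coefficient: {beta:.0f} kg/m^2")
print(f"air density at 50 km:  {rho:.2e} kg/m^3")
print(f"terminal velocity:     {v_term:.0f} m/s (~Mach {v_term / a_sound:.1f})")
```

This only gives the equilibrium speed at that altitude; an object arriving from orbit may still be travelling above it at 50 km, which is why the accepted answer quotes Mach 8-15 from flight data rather than a terminal-velocity estimate.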
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7120583653450012, "perplexity": 1878.7212515310464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00650.warc.gz"}
https://www.physicsforums.com/threads/hamiltonian-formulation-of-qcd-and-nucleon-mass.743255/
# Hamiltonian formulation of QCD and nucleon mass

1. Mar 14, 2014

### tom.stoer

Hello, there are several papers on QCD in the Hamiltonian formulation, especially in Coulomb gauge. Unfortunately the Hamiltonian H is rather formal and highly complex. Question: is there a paper discussing the contribution of individual terms of H to the nucleon mass?

2. Mar 15, 2014

### Einj

The question about nucleon masses is rather complicated by itself, especially because it is a non-perturbative problem. I don't know if there is any simple way to relate any term in the QCD Hamiltonian to the nucleon mass. I would say that the most relevant terms are the quark-gluon interaction and the three-gluon or four-gluon interactions. This is because the nucleon (or in general hadron) masses are mainly given by binding energy rather than the actual quark masses. However, if you want a more phenomenological way of deriving (with pretty good accuracy) the mass of the hadrons, you can take a look at the "constituent quark and spin-spin interaction" model. I think that the original reference should be: http://journals.aps.org/prd/abstract/10.1103/PhysRevD.12.147 However, there is a pretty good explanation of it in section II of: http://arxiv.org/pdf/hep-ph/0412098v2.pdf I hope this is useful.

3. Mar 15, 2014

### tom.stoer

Yes, I know. What I am looking for is an analysis like http://arxiv-web3.library.cornell.edu/pdf/1310.1797v1.pdf for the nucleon mass.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9718974232673645, "perplexity": 1414.769922824456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171834.68/warc/CC-MAIN-20170219104611-00591-ip-10-171-10-108.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/31986-intergration-substitution.html
# Thread: Integration (Substitution)

1. ## Integration (Substitution)

Use the substitution u = 1 + sin x and integration to show that

$$\int \sin x\cos x\,(1+\sin x)^5\,dx = \frac{1}{42}(1+\sin x)^6(6\sin x-1)+C.$$

I have tried integrating this by substitution and then by parts twice, but it still doesn't give me the answer desired by the question, or maybe my working is wrong.

2. Did you try to read your post after submitting it?

3. Yes I did. I did what the LaTeX script page said to do to create an equation using LaTeX; it's just that this one wouldn't work.

4. Well, if you can't write the expression in LaTeX, write the expression without it, because the way it's written now is kind of hard to understand...

5. OK, I have written the question without using LaTeX as best I can; hope it's now understandable.

6. Well, by defining $u=1+\sin x\implies\frac{du}{dx}=\cos x.$ The rest is a matter of replacement.
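For completeness, the substitution carries through as follows (this worked derivation is not part of the original thread):

```latex
\begin{aligned}
u &= 1+\sin x, \qquad du = \cos x\,dx, \qquad \sin x = u - 1,\\[4pt]
\int \sin x\cos x\,(1+\sin x)^5\,dx
  &= \int (u-1)\,u^5\,du
   = \frac{u^7}{7}-\frac{u^6}{6}
   = \frac{u^6(6u-7)}{42}\\[4pt]
  &= \frac{(1+\sin x)^6\bigl(6(1+\sin x)-7\bigr)}{42} + C
   = \frac{1}{42}(1+\sin x)^6(6\sin x-1)+C.
\end{aligned}
```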
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9504342675209045, "perplexity": 2487.4618843522353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292330.57/warc/CC-MAIN-20160823195812-00009-ip-10-153-172-175.ec2.internal.warc.gz"}
http://support.sas.com/documentation/cdl/en/procstat/68142/HTML/default/procstat_freq_examples05.htm
# The FREQ Procedure

### Example 3.5 Analysis of a 2x2 Contingency Table

This example computes chi-square tests and Fisher's exact test to compare the probability of coronary heart disease for two types of diet. It also estimates the relative risks and computes exact confidence limits for the odds ratio.

The data set FatComp contains hypothetical data for a case-control study of high fat diet and the risk of coronary heart disease. The data are recorded as cell counts, where the variable Count contains the frequencies for each exposure and response combination. The data set is sorted in descending order by the variables Exposure and Response, so that the first cell of the table contains the frequency of positive exposure and positive response. The FORMAT procedure creates formats to identify the type of exposure and response with character values.

    proc format;
       value ExpFmt 1='High Cholesterol Diet'
                    0='Low Cholesterol Diet';
       value RspFmt 1='Yes'
                    0='No';
    run;

    data FatComp;
       input Exposure Response Count;
       label Response='Heart Disease';
       datalines;
    0 0 6
    0 1 2
    1 0 4
    1 1 11
    ;

    proc sort data=FatComp;
       by descending Exposure descending Response;
    run;

In the following PROC FREQ statements, the ORDER=DATA option orders the contingency table values by their order in the input data set. The TABLES statement requests a two-way table of Exposure by Response. The CHISQ option produces several chi-square tests, and the RELRISK option produces relative risk measures. The EXACT statement requests the exact Pearson chi-square test and exact confidence limits for the odds ratio.

    proc freq data=FatComp order=data;
       format Exposure ExpFmt. Response RspFmt.;
       tables Exposure*Response / chisq relrisk;
       exact pchi or;
       weight Count;
       title 'Case-Control Study of High Fat/Cholesterol Diet';
    run;

The contingency table in Output 3.5.1 displays the variable values so that the first table cell contains the frequency for the first cell in the data set (the frequency of positive exposure and positive response).

Output 3.5.1: Contingency Table (Case-Control Study of High Fat/Cholesterol Diet). Each cell lists Frequency / Percent / Row Pct / Col Pct.

| Exposure | Heart Disease: Yes | Heart Disease: No | Total |
|---|---|---|---|
| High Cholesterol Diet | 11 / 47.83 / 73.33 / 84.62 | 4 / 17.39 / 26.67 / 40 | 15 / 65.22 |
| Low Cholesterol Diet | 2 / 8.7 / 25 / 15.38 | 6 / 26.09 / 75 / 60 | 8 / 34.78 |
| Total | 13 / 56.52 | 10 / 43.48 | 23 / 100 |

Output 3.5.2 displays the chi-square statistics. Because the expected counts in some of the table cells are small, PROC FREQ gives a warning that the asymptotic chi-square tests might not be appropriate. In this case, the exact tests are appropriate. The alternative hypothesis for this analysis states that coronary heart disease is more likely to be associated with a high fat diet, and therefore a one-sided test is appropriate. Fisher's exact right-sided test analyzes whether the probability of heart disease in the high fat group exceeds the probability of heart disease in the low fat group; because this p-value is small, the alternative hypothesis is supported. The odds ratio, displayed in Output 3.5.3, provides an estimate of the relative risk when an event is rare. This estimate indicates that the odds of heart disease are 8.25 times higher in the high fat diet group; however, the wide confidence limits indicate that this estimate has low precision.

Output 3.5.2: Chi-Square Statistics

| Statistic | DF | Value | Prob |
|---|---|---|---|
| Chi-Square | 1 | 4.9597 | 0.0259 |
| Likelihood Ratio Chi-Square | 1 | 5.0975 | 0.0240 |
| Continuity Adj. Chi-Square | 1 | 3.1879 | 0.0742 |
| Mantel-Haenszel Chi-Square | 1 | 4.7441 | 0.0294 |
| Phi Coefficient | | 0.4644 | |
| Contingency Coefficient | | 0.4212 | |
| Cramer's V | | 0.4644 | |

Pearson Chi-Square Test

| | |
|---|---|
| Chi-Square | 4.9597 |
| DF | 1 |
| Asymptotic Pr > ChiSq | 0.0259 |
| Exact Pr >= ChiSq | 0.0393 |

Fisher's Exact Test

| | |
|---|---|
| Cell (1,1) Frequency (F) | 11 |
| Left-sided Pr <= F | 0.9967 |
| Right-sided Pr >= F | 0.0367 |
| Table Probability (P) | 0.0334 |
| Two-sided Pr <= P | 0.0393 |

Output 3.5.3: Relative Risk

Odds Ratio and Relative Risks

| Statistic | Value | 95% Confidence Limits |
|---|---|---|
| Odds Ratio | 8.2500 | 1.1535, 59.0029 |
| Relative Risk (Column 1) | 2.9333 | 0.8502, 10.1204 |
| Relative Risk (Column 2) | 0.3556 | 0.1403, 0.9009 |

Odds Ratio (asymptotic and exact confidence limits)

| | |
|---|---|
| Odds Ratio | 8.2500 |
| Asymptotic 95% Lower Conf Limit | 1.1535 |
| Asymptotic 95% Upper Conf Limit | 59.0029 |
| Exact 95% Lower Conf Limit | 0.8677 |
| Exact 95% Upper Conf Limit | 105.5488 |
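For readers who want to sanity-check these figures outside SAS, the same 2x2 table can be analyzed with Python's scipy.stats (this cross-check is not part of the SAS documentation; the variable names are illustrative):

```python
import numpy as np
from scipy import stats

# 2x2 table from the example: rows = High/Low Cholesterol Diet,
# columns = Heart Disease Yes/No
table = np.array([[11, 4],
                  [2,  6]])

# Pearson chi-square without continuity correction (compare: 4.9597, p = 0.0259)
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(f"chi-square = {chi2:.4f}, p = {p:.4f}, dof = {dof}")

# Sample odds ratio and right-sided Fisher exact test (compare: 8.25 and Pr >= F = 0.0367)
odds_ratio, p_right = stats.fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.4f}, right-sided p = {p_right:.4f}")
```

The uncorrected Pearson chi-square, the sample odds ratio, and the right-sided Fisher p-value should match the output above; the continuity-adjusted chi-square (3.1879) corresponds to calling chi2_contingency with correction=True.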
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7790100574493408, "perplexity": 5978.168124985732}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267157569.48/warc/CC-MAIN-20180921210113-20180921230513-00148.warc.gz"}
https://www.atmos-chem-phys.net/18/12845/2018/
Atmos. Chem. Phys., 18, 12845-12857, 2018
https://doi.org/10.5194/acp-18-12845-2018

Research article | 06 Sep 2018

Stratospheric aerosol radiative forcing simulated by the chemistry climate model EMAC using Aerosol CCI satellite data

Christoph Brühl1, Jennifer Schallock1, Klaus Klingmüller1, Charles Robert2, Christine Bingen2, Lieven Clarisse3, Andreas Heckel4, Peter North4, and Landon Rieger5

• 1Atmospheric Chemistry Department, Max Planck Institute for Chemistry, Mainz, Germany
• 2Royal Belgian Institute for Space Aeronomy (BIRA-IASB), Brussels, Belgium
• 3Faculty of Sciences, Université Libre de Bruxelles (ULB), Brussels, Belgium
• 4Department of Geography, Swansea University, Swansea, UK

Abstract

This paper presents decadal simulations of stratospheric and tropospheric aerosol and its radiative effects by the chemistry general circulation model EMAC constrained with satellite observations in the framework of the ESA Aerosol CCI project such as GOMOS (Global Ozone Monitoring by Occultation of Stars) and (A)ATSR ((Advanced) Along Track Scanning Radiometer) on the ENVISAT (European Environmental Satellite), IASI (Infrared Atmospheric Sounding Interferometer) on MetOp (Meteorological Operational Satellite), and, additionally, OSIRIS (Optical Spectrograph and InfraRed Imaging System). In contrast to most other studies, the extinctions and optical depths from the model are compared to the observations at the original wavelengths of the satellite instruments covering the range from the UV (ultraviolet) to terrestrial IR (infrared). This avoids conversion artifacts and provides additional constraints for model aerosol and interpretation of the observations. MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) SO2 limb measurements are used to identify plumes of more than 200 volcanic eruptions. These three-dimensional SO2 plumes are added to the model SO2 at the eruption times. The interannual variability in aerosol extinction in the lower stratosphere, and of stratospheric aerosol radiative forcing at the tropopause, is dominated by the volcanoes. To explain the seasonal cycle of the GOMOS and OSIRIS observations, desert dust simulated by a new approach and transported to the lowermost stratosphere by the Asian summer monsoon and tropical convection turns out to be essential. This also applies to the radiative heating by aerosol in the lowermost stratosphere. The existence of wet dust aerosol in the lowermost stratosphere is indicated by the patterns of the wavelength dependence of extinction in observations and simulations. Additional comparison with (A)ATSR total aerosol optical depth at different wavelengths and IASI dust optical depth demonstrates that the model is able to represent stratospheric as well as tropospheric aerosol consistently.

1 Introduction

Climate effects of stratospheric aerosols can be important, as analyzed for example by , and . Stratospheric aerosol exerts a negative radiative forcing on the troposphere because enhanced scattering by the particles reduces solar radiation reaching the surface and the lower atmosphere. In addition, changes in diffuse light fraction have shown their potential to enhance photosynthesis .
The aim of the present paper is to jointly use model simulations and satellite observations, taking into account the multiple spectral channels of the instruments to better understand the spatiotemporal evolution of the stratospheric aerosol burden and the contribution of the different aerosol types to the observed dynamical aerosol patterns at the different altitudes. Most earlier studies focus on the effects of major volcanic eruptions like Pinatubo (e.g., Aquila et al.2012; English et al.2013). For the post-Pinatubo period with only medium size eruptions present simulations with the chemistry climate model WACCM (Whole Atmosphere Community Model) with interactive aerosol, using estimates for volcanic injections mostly from nadir sounders. That and the present study contribute to the SPARC/SSIRC initiative (Stratosphere-troposphere Processes And their Role in Climate / Stratospheric Sulfur and Its Role in Climate, see for example ), aiming at a better understanding of the composition, microphysical and radiative properties characteristics of stratospheric aerosols and their impact on climate . In this work, we rely on the multiple instrument satellite dataset provided in the Climate Change Initiative (CCI) of the European Space Agency (ESA) , which was developed as a tool for evaluation and improvement of the treatment of stratospheric and tropospheric aerosols in global chemistry climate models, like the EMAC (ECHAM5/MESSy Atmospheric Chemistry) model . The datasets providing extinctions or total optical depth at wavelengths from the ultraviolet (UV) to terrestrial infrared (IR) are very useful to validate and optimize assumptions on the size distribution and on the composition of aerosol in the model, but also on aerosol sources. Some aspects of the stratospheric part of this study follow up . The ATSR and IASI datasets provide additional constraints on tropospheric aerosol, especially desert dust. We find in the present study that this latter aerosol compound can penetrate the tropopause via the Asian summer monsoon system and, to a smaller extent, via tropical convection. The present paper is organized as follows: in Sect. 2, we briefly present the satellite datasets used to evaluate the model, and to check for consistency of observations at different wavelengths: GOMOS, IASI, (A)ATSR and OSIRIS. In Sect. 3 we describe the EMAC model and the various versions and resolutions used in our work, including the use of MIPAS SO2 for input. In Sect. 4, we study the impact of the main aerosol sources on the upper tropospheric and lower stratospheric aerosol burden. The influence of volcanic sources derived from satellite data, but also of dust and organic aerosols, is analyzed. We present examples of the constraints by satellite observations in different spectral regions on different aerosol types with respect to particle size and composition. We discuss the evolution of the optical depth and radiative forcing by stratospheric aerosols, including uncertainties introduced be horizontal model resolution. Finally, we show that the findings concerning the importance of dust for the lower stratosphere are consistent with observations and simulations of tropospheric aerosol. Conclusions are drawn in Sect. 5. 2 Satellite data products from Aerosol CCI II 2.1 GOMOS (Global Ozone Monitoring by Occultation of Stars) GOMOS is an instrument based on the stellar occultation technique and provides atmospheric measurements in the UV-visible-IR range (248–690, 755–774 and 926–954 nm). 
The use of stellar occultation results in a high rate of occultation measurements, and, consequently, a very good spatial coverage compared to solar occultation. As a drawback, the signal-to-noise ratio of each measurement is much lower than in the solar case, and varies with the star characteristics (especially its magnitude and its temperature). The operational retrieval, IPF (Instrument Processing Facility), provides density profiles for trace gases such as ozone (O3), nitrogen dioxide (NO2) and nitrogen trioxide (NO3) , as well as aerosol extinction. However, the extinction shows a poor quality for the reference wavelength at 500 nm. For this reason an alternative inverse algorithm called AerGOM was specifically developed to optimize the aerosol retrieval . AerGOM provides vertical profiles of the same gas species, and the total extinction coefficient for the nongaseous species and its spectral dependence, currently over the range 250–750 nm. The nature of the total extinction fraction for nongaseous species is then inferred using simple criteria based on the geolocation, associated temperature value and extinction value, and each point of the vertical extinction profile is attributed to aerosols, cirrus clouds, polar stratospheric clouds or meteoritic dust. From the AerGOM extinction, climate data records (CDRs) were developed in the framework of the ESA Aerosol CCI project for different quantities including the aerosol extinction and the related aerosol optical depth at several wavelengths (355, 440, 470, 550 and 750 nm; Bingen et al.2017). Particular attention was paid to the grid choice, which should optimally render the information contained in the GOMOS measurement set. The most important conclusions of this optimization were that grid resolutions should be chosen to ensure a reasonable statistical sampling in most of the grid cells, and that it should optimally reflect the typical transport of volcanic plumes after an eruption reaching the upper troposphere or the lower stratosphere (UTLS). Therefore, the grid should represent, in a coherent way, the longitudinal and latitudinal air mass transport, and the time needed for this transport. Also, the temporal resolution should be short enough to enable the detection of volcanic signatures, taking into account the typical lifetime of the plume. In this respect, we could verify that time intervals of about 5 days are able to represent the signature of most of the eruptions injecting sulfuric gases in the UTLS, while such a signature is often diluted, underestimated or even disappears in the case of coarser grid cells. This is the case, for instance, for monthly zonal means, even though this representation is very commonly used in the field. The ability of the grid to reproduce the signature of volcanic plume in a satisfactory way is of particularly great importance when the CDRs are used to constrain climate models. More detail about the investigations of the optimal grid choice and all other aspects of the implementation of the CDRs can be found in . In their current version (version 3.0), these CDRs are defined on a grid with a resolution of 5 in latitude, 60 in longitude, 1 km in altitude and 5-day time period. The records cover the whole ENVISAT period (March 2002–April 2012) and include the total extinction of nongaseous species, but also the polar stratospheric cloud (PSC) fraction and the cloud-free aerosol fraction which is dominated by sulfate aerosols below an altitude of 32 km. 
It is important to mention that cloud detection is not yet optimal, and that cloud contamination of the aerosol fraction is possible in the UTLS region. This issue is still under investigation. 2.2 IASI (Infrared Atmospheric Sounding Interferometer) The IASI dust dataset of the Université Libre de Bruxelles (ULB) was generated in the context of ESA CCI's project . It is based on a statistical regression technique and the use of a neural network trained on synthetic IASI data. A similar scheme has already been applied for the retrieval of NH3 (ammonia; ). As input variables it uses the IASI L2 pressure, humidity and temperature information, as well as spectral information and a CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) derived dust altitude climatology. The main output variables are dust optical depth at 10 and 11 µm (and 550 nm). Initial results and validation performance are provided in . The ATSR (SU) algorithm has been developed at Swansea University for estimation of atmospheric aerosol and surface reflectance for the ATSR-2, AATSR sensors and SLSTR (Sea and Land Surface Temperature Radiometer) on Sentinel-3. Over land, the algorithm employs a parameterized model of the surface angular anisotropy (North2002), and uses the dual-view capability of the instrument to allow aerosol property estimation without a priori assumptions on surface spectral reflectance. Over ocean, the algorithm uses a simple a priori model of ocean surface reflectance at both nadir and along-track view angles. A climatology is used to constrain chemical composition of the aerosol components at $\mathrm{1}{}^{\circ }×\mathrm{1}{}^{\circ }$ latitude–longitude grid, while the method retrieves aerosol size and optical thickness on a 10 km grid. Both optical thickness and size are retrieved as vertical column values. Size is not resolved vertically, but is represented by fraction of fine and coarse mode aerosol in total. The algorithm has been developed from initial prototype under the Aerosol CCI program, and results and validation performance for version 4.21 are provided in . The version used here (v 4.3) differs from that summarized in by improvements in retrieval of coarse/fine mode fraction, and improved cloud screening over ocean in the region of dense plumes, resulting in approximately 10 % greater coverage, with small improvement in correlation against AERONET (AErosol RObotic NETwork) values. AERONET is recognized as a reference dataset for validation of satellite data products . 2.4 OSIRIS (Optical Spectrograph and InfraRed Imager) – an additional instrument OSIRIS was launched on board the Odin satellite, and has provided vertical profiles of limb scattered radiance between 280 and 810 nm since 2001 . The radiance profiles are inverted to provide aerosol extinction measurements at 750 nm at altitudes between 10 and 35 km with a vertical resolution of approximately 2 km . This technique provides high sampling rates with hundreds of measurements per day over the sunlit portion of the globe, enabling excellent spatial and temporal sampling of short-lived events. OSIRIS aerosol extinction retrievals agree well with coincident occultation measurements from Stratospheric Aerosol and Gas Experiments II and III during background periods but have known low biases above approximately 25 km, and will have some cloud contamination near and below the tropopause . 
Additionally, seasonal biases are possible due to the orbital geometry and changes in aerosol optical properties such as after volcanic eruptions may also bias the retrievals. These effects are described in more detail by . This work uses the OSIRIS version 5.10 aerosol retrieval averaged into daily, 5 latitude by 30 longitude bins for comparisons. 3 Model setup For the simulations of the radiative and chemical effects of stratospheric aerosol, the ECHAM5 (5th generation of European Centre Hamburg) general circulation model coupled to the Modular Earth Submodel System Atmospheric Chemistry (EMAC) was used , updated to the version of . In contrast to – who use stratospheric aerosol extinction climatologies derived from observations – our model setup aerosol and its optical properties are calculated from precursor gases and emissions. As dust reaching the UTLS region turned out to be sensitive to model resolution, we used different model resolutions: the T42 resolution (spectral, 2.75 in latitude and longitude) of the previous studies, T63 resolution (1.88), the standard resolution for the stratosphere used in this study and T106 resolution (1.1) for a 1-year sensitivity test. The vertical grid has 90 layers from the surface up to 0.01 hPa (80 km altitude, short L90) with finest resolution in the boundary layer and near the tropopause. For T106 only simulations with the low top model version with 31 levels up to 30 km altitude (L31), the setup used by , which is well tested regarding the representation of tropospheric aerosol, are discussed here in detail. In all simulations, except the T42L90 one of the previous studies, the meteorology below about the 100 hPa level is nudged to the reanalysis ERA-Interim . The simulations were performed for the ENVISAT time period from July 2002 to March 2012 to allow for the use of data from MIPAS for input, and GOMOS and ATSR for validation. The period from 1997 to 2002 using SAGE II (Stratospheric Aerosol and Gas Experiment) was simulated first to get consistent initial conditions. The applied aerosol module GMXE accounts for seven modes using lognormal size distributions (nucleation mode, soluble and insoluble Aitken, accumulation and coarse modes). The boundary between accumulation mode and coarse mode, a model parameter, is set at a dry particle radius of 1.6 µm to avoid too fast sedimentation of a too large coarse mode fraction in case of major volcanic eruptions. For dust sensitivity studies in T106 which focus on the troposphere, a boundary of 1.0 µm is also used. The mode parameters are used for every aerosol type and listed for convenience in Table S1 of the Supplement. Optical properties for the types sulfate, dust, organic carbon and black carbon (OC and BC), sea salt, and aerosol water are calculated using Mie-theory-based lookup tables consistent with the selected size distribution widths of the modes. The resulting optical depths, single scattering albedos and asymmetry factors are used in radiative transfer calculations which (except for the T106 low top sensitivity studies) feedback to atmospheric dynamics. The contribution of stratospheric aerosol to (instantaneous) radiative forcing and heating is calculated online via multiple calls of the radiation module. The mineral dust emissions are calculated online using the emission scheme of which builds on previous studies by , , , , and . 
The emission scheme parameterizes saltation bombardment and aggregate disintegration by sand blasting, combining the surface friction velocity with descriptions of land cover type, clay fraction of the soil and vegetation cover. For an improved representation of dust at higher resolution, we adopted the updates presented by in the T106L31 simulation. Aerosol module parameters, for example the composition of sea salt, were optimized on the basis of the satellite data. We apply the chemical speciation of the sea salt emission flux used by as listed in Table S2 of the Supplement. The sea salt composition affects the hygroscopic growth and thereby the AOD. The setting of , dominated by Na and Cl ions, which we initially applied in our simulations produced very high AOD levels over the North Pacific which are not consistent with the satellite observations. SO2 plumes (sulfur dioxide) from about 230 explosive volcanic eruptions into the stratosphere were derived from 3-dimensional data fields of MIPAS and, in case of data gaps, of GOMOS on ENVISAT with a temporal resolution of 5 days, and added as volume mixing ratio to the simulated SO2 at the time of the eruption. Each identified volcanic eruption (with names from the Smithsonian volcanic database, http://www.volcano.si.edu, last access: 31 August 2018) is listed in an emission inventory published recently , which provides an estimate of the altitude and the amount of SO2 injected into the atmosphere. The table and the 3-D fields of volcanic SO2 are available at https://doi.org/10.1594/WDCC/SSIRC_1. These data were derived from MIPAS within the uncertainty range but nearer the upper end for best results with the model resolution T42L90 and free running mode, which has some artifacts from the convection scheme and a dry bias at the tropical tropopause. For the nudged T63L90 simulation, the volcanic SO2 data of the inventory have to be downscaled by about a factor of 0.7 which is actually closer to the most likely MIPAS measurements. The actual values for each injection, which depend on the time span between the eruptions and on corrections for data gaps, are given in the Supplement (Table S3). Boundary conditions for background concentrations of SO2 from outgassing volcanoes into the troposphere are taken from the monthly climatology of truncated at 200 hPa to avoid double counting in the stratosphere. The sulfur source gas OCS (carbonyl sulfide) is constrained by observed monthly zonal average surface volume mixing ratios (update of the data by Montzka et al.2007). Marine DMS (dimethyl sulfide) as a natural sulfur source is also included in the model, using a module for exchange fluxes between seawater and atmosphere by and the climatology. For anthropogenic emissions of CO (carbon monoxide), NOx (nitrogen oxides), sulfur, OC and BC the DLR- MACCity emission inventory is used. Biomass burning is based on ACCMIP-MACCity and GFEDv2, OC-SOA (secondary organic aerosol) on AEROCOM_UMZ1. For details on these emission inventories selected for the Chemistry Climate Model Initiative (CCMI) see . 4 Stratospheric aerosol and its radiative effect 4.1 Volcanic eruptions Volcanic emissions have a large impact on the stratospheric aerosol burden. Even small and moderate eruptions contribute to the stratospheric aerosol load due to convective transport of SO2 and its gradual uplift to the upper troposphere and the lower stratosphere, and resulting accumulation of sulfate aerosol. 
Volcanic SO2 injections explain most of the interannual variability of stratospheric aerosol extinction (decadal logarithm) observed by GOMOS, as depicted in Fig. 1 at three wavelengths. For each wavelength (350 nm in Fig. 1a, b; 550 nm in Fig. 1c, d and 750 nm in Fig. 1e, f), the GOMOS time series (Fig. 1a, c, e) showing the altitude dependence in the tropics, is compared with the EMAC simulation in resolution T63L90 including the dust contribution (Fig. 1b, d, f; see Sect. 4.2 for more detail). Figure 1 shows, at all three wavelengths, that an enhancement of the extinction value is observed around 16–18 km, corresponding to the aerosol load resulting from a succession of volcanic eruptions during the whole period 2002–2012. The eruptions of Nabro in June 2011 and the successive eruptions of Soufriere Hills and Rabaul in 2006 have the largest effects on extinction in the lower stratosphere in the observations and the simulation. The best agreement between GOMOS and EMAC is found in the case of the extinction at 550 nm (Fig. 1c, d), where the quality of the GOMOS retrieval is the best. At 750 nm (Fig. 1e, f) also, GOMOS measurements agree well with EMAC for the aerosol layer (16–22 km) where measured extinction values exceed $\approx \mathrm{2}×{\mathrm{10}}^{-\mathrm{4}}$ km−1. At lower altitudes (14–16 km), rather unstructured patterns of enhanced extinction are found by GOMOS, probably corresponding to cloud contamination. At 350 nm, where a decrease in the GOMOS quality is expected due to a loss in signal-to-noise ratio obtained in the UV spectral region while using cold stars, still the volcanic events stick out. More details over these aspects can be found in references . also present the latitude dependence of 550 nm aerosol extinction at 17 km altitude as observed by GOMOS and simulated by EMAC in the coarse resolution T42L90 in their Fig. 10. Figure 1GOMOS and EMAC extinctions (log) in the tropics as a function of altitude for different wavelengths: (a, b) UV 350 nm, (c, d) visible 550 nm and (e, f) near-infrared 750 nm; resolution T63L90. Figure 2Observed (a, b) and simulated (c, d, EMAC T63L90) extinction in the Asian sector (60–120 E, 20 S–60 N) for 550 nm (a, c) and 750 nm (b, d). Contribution of wet dust (e, f) and wet sulfate (g, h) to extinction for 550 nm (e, g) and 750 nm (f, h). (i) Median wet radius in accumulation mode (for effective radius multiply by 1.4). 4.2 Dust and organics from the troposphere in the UTLS) Extinction in the lowermost stratosphere and upper troposphere is to a large fraction due to desert dust and organic carbon aerosol. These contributions were strongly underestimated in due to a crude parameterization in the used model version based on , but overestimated in . Both simulations were performed in the relatively coarse resolution T42L90. Dust reaching the UTLS is sensitive to model resolution, mostly via the convection parameterization (Tiedtke1989). In Fig. 1 the simulated extinction at resolution T63L90 fits well to the GOMOS observations which appear to have a seasonal contribution from the Asian summer monsoon. For more detailed analysis, Fig. 2 shows observed and simulated extinction in the Asian sector at 17 km in the visible and the near-IR. The largest extinction values are indeed found at the location and time of the Asian summer monsoon at the altitude of outflow. This feature is clearest in years not perturbed by medium strength volcanic eruptions, for example 2010. 
For a clear separation, the contributions of wet dust and wet sulfate to extinction are displayed separately (Fig. 2e–h). The wet dust particles in the monsoon region have a larger median wet radius than the volcanic sulfate particles (e.g., from Sarychev in 2009, Fig. 2i) which is consistent with a relatively larger extinction in the infrared compared to the visible in the monsoon region in observations and simulations. Figure 2a–d demonstrates that dust is essential to reproduce the observations. Total extinction without wet dust in T63L90 is shown in the Supplement. Comparing Fig. S1b with Fig. 2g shows a small contribution of organics from biomass burning in northern spring (for volume mixing ratios see Fig. S2). Figure S1 also contains results from the T42L90 simulation of , showing that for this resolution the contribution of wet dust to extinction has to be downscaled (i.e., divided) by a factor of 2 to get agreement (Fig. S1d, factor of 3 if only dry dust is considered). Observations by IASI and ATSR indicate a maximum in dust aerosol optical depth (DAOD) in early Northern Hemisphere summer over the Asian deserts located in the inflow regions of the monsoon (see Sect. 4.4). A similar feature is found in the simulations by EMAC. This supports our findings that desert dust is also important for the UTLS. Figure 3(a) Stratospheric aerosol radiative forcing, (b, c) stratospheric AOD for tropics and midlatitudes. Red lines and crosses: EMAC, resolution T63L90, current version; black: EMAC T42L90 ; blue: T63L90 without downscaling the SO2 injections for T42L90; green: from observations (crosses annual mean for forcing; ; SAGE II, CALIPSO, OSIRIS). Figure 4(a, b) Stratospheric AOD at 550 nm observed by GOMOS (green) and simulated by EMAC in resolutions T42L90 (black) and T63L90 (red). (c, d) Stratospheric AOD at 750 nm in the northern tropics and subtropics (SAOD above 15 km), additionally with OSIRIS observations (light blue). Figure 5Simulated aerosol radiative heating in the tropics (solar + infrared, T63L90). Figure 6Observed (left) and simulated (right). (a, b) 10 µm dust AOD (DAOD) for IASI and EMAC; (c, d) 0.55 µm DAOD from ATSR and EMAC; (e, f) fine mode AOD; (g, h) absorbing AOD (AAOD) and (i, j) total AOD for ATSR (SU) and EMAC in T63L90 resolution, annual mean 2011. 4.3 Stratospheric aerosol radiative forcing, stratospheric aerosol optical depth and radiative heating Desert dust transported to the UTLS mostly via the Asian summer monsoon contributes significantly to the seasonal cycle of total stratospheric aerosol optical depth (SAOD) in satellite observations and the EMAC simulations shown in Fig. 3b for the tropics (vertical integral of extinction above about 16 km) and in Fig. 3c for midlatitudes (above about 14 km). Global radiative forcing at the tropopause is depicted in Fig. 3a. The figure contains in black results from the T42L90 simulation of and in blue the T63L90 simulation with the high volcanic sulfur input derived for the coarse resolution. Green lines and symbols show estimates derived from satellite observations (SAGE II, OSIRIS and CALIPSO; Solomon et al.2011; Santer et al.2014; Bourassa et al.2012; Glantz et al.2014). Red shows results of the current model version in T63L90 with the dust scheme and corrected SO2 input (see Sect. 3 and Supplement). Concerning global radiative forcing, the volcanoes are the dominating effect with up to 0.13 W m−2 for Rabaul and Nabro compared to the volcanically quiet period in 2002. 
Here the use of the SO2 inventory for T42L90 in the T63L90 simulation (blue) causes an overestimate of up to 50 % in 2006 and 2007 due to accumulation effects of eruptions following in short sequence. This is visible in the overestimate of tropical SAOD depicted by the blue curve in Fig. 3b. Especially in northern midlatitude summer SAOD in T42L90 appears to be high because at that resolution the convective transport of dust to the UTLS in the Asian monsoon region is overestimated (Fig. 3c). This is clearly seen in Fig. 4 which shows in black the T42L90 simulation, in green the observations of 550 and 750 nm SAOD by GOMOS, and in light blue (Fig. 4c, d only) by OSIRIS in different latitude bands, including the monsoon region. For the narrow latitude bands in Fig. 4c and d, inclusion of OSIRIS data is important because GOMOS coverage is often too low. Nevertheless, for a lot of features the two satellite datasets agree well. Using the higher resolution T63L90, for which the convection parameterization was developed, the agreement with the satellite observations is much better (Figs. 3 and 4, red) than with T42L90, especially at midlatitudes and in the subtropics. In the subtropics (Fig. 4d), the simulation with low resolution (black) always overestimates the monsoon peaks in August compared to the ones seen in the observations. Comparing the model results with OSIRIS in the northern tropics (Fig. 4c) indicates that some volcanic events are still missing in the inventory, for example in spring 2007 and 2010. This would also explain the differences in radiative forcing (indicated by crosses in Fig. 3a) in these years. The simulated aerosol radiative heating, derived from radiation calls with and without aerosol, reflects the medium volcanic eruptions with the largest effects near 18 km (Fig. 5). There, desert dust causes additional heating at the time of the Asian summer monsoon. In the UTLS, below, every year in September, a clear signal from biomass burning organic aerosol – its volume mixing ratio is shown in Fig. S2 of the Supplement – is visible. Above, around 22 km, the dust below in Northern Hemisphere summer causes a reduction of absorption of terrestrial radiation by ozone. 4.4 Constraints from total aerosol optical depth in different spectral regions and for different aerosol subsets The first comparisons are carried out for EMAC in T63L90, the standard resolution used in the previous sections. Here AOD refers to the troposphere and stratosphere. The DAOD (dust AOD) in terrestrial infrared is most sensitive to the coarse mode of tropospheric dust. Figure 6a, b shows that the model reproduces most of the IASI features. DAOD in the visible spectral region (Fig. 6c, d) is too high over central Asia, pointing to an overestimate of dust in the accumulation mode near the Taklamakan Desert. The patterns in the IR and visible spectral range are different despite considering the factor 2 often applied by the AEROCOM/AEROSAT (Aerosol Comparison between Observations and Models) community for conversion in the color scales of Fig. 6a, b and c, d. This holds for model and observations. The fine mode AOD fraction, which is dominated by the accumulation mode, is slightly overestimated over Europe and underestimated in the biomass burning regions in Africa (Fig. 6e, f). In the model this is sensitive to the way the extinction of aerosol water is attributed to the soluble aerosol species, especially sea salt. 
Absorbing AOD, i.e., AOD × (1−ω) with ω representing single scattering albedo, agrees surprisingly well (Fig. 6g, h). In the total AOD (Fig. 6i, j) there appears to be too much sea salt in the model, or still suboptimal parameters for the sea salt composition which controls water uptake (see Sect. 3). Figure 7Annual mean for 2011 of the DAOD at 10 µm wavelength observed by IASI (b, IASI ULB dataset version 8) and simulated by EMAC (a) at T106L31 resolution. Figure 8Annual mean for 2011 of the AOD at (from left to right) 550, 670 and 870 nm wavelength observed by AATSR (d, e, f; SU-ATSR algorithm version 4.3) and simulated by EMAC (a, b, c) at T106L31 resolution. Figure 7 compares the annual average for 2011 of the 10 µm DAOD observed by IASI and simulated by EMAC in the low top version with high horizontal resolution (T106L31, about 1.1). The satellite retrievals are taken from version 8 of the ULB dataset. The simulation uses the dust emission scheme of which calculates the emissions online considering the meteorological conditions. To extract the DAOD from the total EMAC AOD at 10 µm, we apply a filter nullifying sea-salt-dominated AOD values. To identify the latter, we compare the AOD weighted with the volume of sea salt and dust. The observed and modeled global DAOD distributions shown in Fig. 7 agree remarkably well. The pixel values of each map are strongly correlated with a correlation coefficient of 0.91. The overall AOD level is consistent as well, so that a similar variance in the pixel values is obtained for the observed (0.00038) and the modeled (0.00041) DAOD distribution. Interestingly, the DAOD from the older version 7 of the ULB dataset yields a pixel by pixel correlation coefficient of only 0.89 and a pixel value variance of only 0.00029. We conclude that the agreement of EMAC and IASI has improved with the update from version 7 to version 8 of the IASI ULB dataset. The main disagreement of the two maps in Fig. 7 is the less pronounced maximum over the Taklamakan Desert in central Asia in the model result. This underestimation is related to the model surface friction velocity in mountainous regions like the surroundings of the Taklamakan Desert, which tends to be lower in simulations at higher horizontal resolution (e.g., T106) than at lower resolution (e.g., T63), possibly resulting in an underestimation of the dust emissions. Figure 8 compares results from the T106L31 EMAC simulation for the annual average of the total AOD at visible and near-infrared wavelengths with AASTR retrievals using the ATSR (SU) algorithm version 4.3. Generally good agreement is obtained at 550 nm which is consistent with the good agreement between the 550 nm MODIS (Moderate-resolution Imaging Spectroradiometer) AOD and model results based on the same EMAC version . As for the T63L90 simulation, the model yields higher sea-salt-related AOD levels over the oceans. In contrast, the model AOD over the Sahara is lower than the satellite retrieved values. This becomes even more evident at larger wavelengths (Fig. 8c, f): the model AOD over the Sahara, in contrast to most other regions, has a stronger wavelength dependence than the observed AOD, corresponding to a larger Ångström exponent. This discrepancy might be resolved by adjusting the dust particle size distribution in the model under the constraint of not sacrificing the good agreement of model and observed AOD at 550 nm and at 10 µm. 
Such an adjustment could involve modifying the parameters of the log-normal modes, i.e., their widths and boundaries, but also reassessing the parameterization of relevant processes such as emission, deposition, coagulation and hygroscopic growth, or even adding an extra mode for extremely coarse particles, which can be relevant close to dust sources. Over South America, the biomass burning regions of Africa, and India and China, the wavelength dependence of model and observed AOD is largely consistent.

5 Conclusions

Satellite data are important not only to constrain model parameters but also to guide model improvement. Comparing satellite data with model results at several wavelengths simultaneously provides additional information and is also valuable for the satellite community as a check of internal consistency, as in our case for GOMOS and OSIRIS. Sophisticated modeling of dust and organic aerosol as well as a detailed volcano dataset are necessary to reproduce the seasonal cycle and the interannual variability in extinction in the lowermost stratosphere observed by GOMOS at different wavelengths. From the wavelength dependence in observations and simulations, aerosol in the UTLS with enhanced particle size due to water uptake can be identified as aged dust in the Asian monsoon region. Convective transport of dust into the UTLS is resolution dependent because of differences in convection top height and overshooting convection. A resolution of T63L90 (1.88° in longitude and latitude, 90 vertical layers) fits the observations best. For the low resolution T42L90 (2.75°), dust SAOD (and stratospheric mixing ratio) has to be downscaled by a factor of about 0.33; for higher resolutions (e.g., T106L90), upscaling is required. The resolution-dependent differences in convection also modify the residence time of sulfur species in the lowermost stratosphere; especially at low latitudes, it appears to be too short at resolution T42L90. The total AOD in the visible spectral range is very sensitive to aerosol water and the composition of sea salt. In the modal model, the bulk fraction has to be increased relative to ions to reduce artifacts of too much water uptake by sea salt. The satellite data helped to identify a preferred parameter set for the sea salt emission composition. Our simulated dust and total aerosol optical depths agree with satellite data in the visible (ATSR SU) and the infrared (IASI ULB, version 8). The combined comparison at visible and infrared wavelengths provides strong constraints on the modeled particle size distribution. The direct comparison of observations and model reveals different structures in the extinction patterns in the two spectral ranges. From this we conclude that simply assuming a spatially constant factor of (about) 2 for conversion of DAOD from 10 µm to 550 nm, as commonly applied in the AEROCOM/AEROSAT community, is too crude. Satellite datasets identifying volcanic SO2, including its vertical distribution, or enhanced extinction by aged dust enable the model to get closer to observationally based estimates of radiative forcing, demonstrating the value of close interaction between modeling and observation research teams.

Data availability. The Aerosol CCI satellite data are available at ICARE, Lille. All model output of EMAC used here is stored at DKRZ, Hamburg, and is available on request. This includes 5-day averages and 10-hourly values. Volcanic SO2 input data are available at https://doi.org/10.1594/WDCC/SSIRC_1 (Brühl, 2018).
Supplement.

Author contributions. CBr wrote the paper and performed the stratospheric simulations, supported by JS. KK performed the tropospheric simulations and provided code for the stratospheric part. CBi and CR provided the GOMOS data and the corresponding text, LC the IASI data; PN and AH provided the ATSR data, and LR the OSIRIS data.

Competing interests. The authors declare that they have no conflict of interest.

Special issue statement.

Acknowledgements. This study was funded by the Aerosol CCI project, phase II, of the ESA Climate Change Initiative, as a user option, and by the EU FP7 project STRATOCLIM. Supporting work for the development of GOMOS datasets was performed in the framework of a Marie Curie Career Integration Grant within the 7th European Community Framework Programme under grant agreement no. 293560. The satellite data, except OSIRIS, were provided via the Aerosol CCI database at ICARE, Lille, France; the model simulations were performed at DKRZ, Hamburg, Germany, where the results are also stored. The article processing charges for this open-access publication were covered by the Max Planck Society.

Edited by: Farahnaz Khosrawi
Reviewed by: two anonymous referees

References

Abdelkader, M., Metzger, S., Mamouri, R. E., Astitha, M., Barrie, L., Levin, Z., and Lelieveld, J.: Dust–air pollution dynamics over the eastern Mediterranean, Atmos. Chem. Phys., 15, 9173–9189, https://doi.org/10.5194/acp-15-9173-2015, 2015. a
Aquila, V., Oman, L. D., Stolarski, R. S., Colarco, P. R., and Newman, P. A.: Dispersion of the volcanic sulfate cloud from a Mount Pinatubo-like eruption, J. Geophys. Res., 117, D06216, https://doi.org/10.1029/2011JD016968, 2012. a
Astitha, M., Lelieveld, J., Abdel Kader, M., Pozzer, A., and de Meij, A.: Parameterization of dust emissions in the global atmospheric chemistry-climate model EMAC: impact of nudging and soil properties, Atmos. Chem. Phys., 12, 11057–11083, https://doi.org/10.5194/acp-12-11057-2012, 2012. a, b
Bertaux, J. L., Kyrölä, E., Fussen, D., Hauchecorne, A., Dalaudier, F., Sofieva, V., Tamminen, J., Vanhellemont, F., Fanton d'Andon, O., Barrot, G., Mangin, A., Blanot, L., Lebrun, J. C., Pérot, K., Fehr, T., Saavedra, L., Leppelmeier, G. W., and Fraisse, R.: Global ozone monitoring by occultation of stars: an overview of GOMOS measurements on ENVISAT, Atmos. Chem. Phys., 10, 12091–12148, https://doi.org/10.5194/acp-10-12091-2010, 2010. a
Bevan, S., North, P., Los, S., and Grey, W.: A global dataset of atmospheric aerosol optical depth and surface reflectance from AATSR, Remote Sens. Environ., 116, 199–210, 2012. a
Bingen, C., Robert, C. E., Stebel, K., Brühl, C., Schallock, J., Vanhellemont, F., Mateshvili, N., Höpfner, M., Trickl, T., Barnes, J. E., Jumelet, J., Vernier, J.-P., Popp, T., de Leeuw, G., and Pinnock, S.: Stratospheric aerosol data records for the climate change initiative: Development, validation and application to chemistry-climate modelling, Remote Sens. Environ., 203, 296–321, 2017. a, b, c, d, e, f, g, h, i, j
Bourassa, A. E., Rieger, L. A., Lloyd, N. D., and Degenstein, D. A.: Odin-OSIRIS stratospheric aerosol data product and SAGE III intercomparison, Atmos. Chem. Phys., 12, 605–614, https://doi.org/10.5194/acp-12-605-2012, 2012. a, b, c
Bourassa, A. E., Roth, C. Z., Zawada, D. J., Rieger, L. A., McLinden, C. A., and Degenstein, D.
A.: Drift-corrected Odin-OSIRIS ozone product: algorithm and updated stratospheric ozone trends, Atmos. Meas. Tech., 11, 489–498, https://doi.org/10.5194/amt-11-489-2018, 2018. a Brühl, C.: Volcanic SO2 data derived from limb viewing satellites for the lower stratosphere from 1998 to 2012, and from nadir viewing satellites for the troposphere. World Data Center for Climate (WDCC) at DKRZ, https://doi.org/10.1594/WDCC/SSIRC_1, 2018. a Brühl, C., Lelieveld, J., Tost, H., Höpfner, M., and Glatthor, N.: Stratospheric sulphur and its implications for radiative forcing simulated by the chemistry climate model EMAC, J. Geophys. Res.-Atmos. 120, 2103–2118, https://doi.org/10.1002/2014JD022430, 2015. a, b, c Diehl, T., Heil, A., Chin, M., Pan, X., Streets, D., Schultz, M., and Kinne, S.: Anthropogenic, biomass burning, and volcanic emissions of black carbon, organic carbon, and SO2 from 1980 to 2010 for hindcast model experiments, Atmos. Chem. Phys. Discuss., 12, 24895–24954, https://doi.org/10.5194/acpd-12-24895-2012, 2012. a English, J. M., Toon, O. B., and Mills, M. J.: Microphysical simulations of large volcanic eruptions: Pinatubo and Toba, J. Geophys. Res.-Atmos, 118, 1880–1895, https://doi.org/10.1002/jgrd.50196, 2013. a Glantz, P., Bourassa, A., Herber, A., Iversen, T., Karlsson, J., and Kirkevåg, A.: Remote sensing of aerosols in the Arctic for an evaluation of evaluation of global climate model simulations, J. Geophys. Res.-Atmos., 119, 8169–8188, https://doi.org/10.1002/2013JD021279, 2014. a Gu, L. H., Baldocchi, D. D., Wofsy, S. C., Munger, J. W., Michalsky, J. J., Urbanski, S. P., and Boden, T. A.: Response of a deciduous forest to the mount Pinatubo eruption: Enhanced photosynthesis, Science, 299, 2035–2038, 2003. a Holben, B. N., Eck, T. F., Slutsker, I., Tanré, D., Buis, J. P., Setzer, A., Vermote, E., Reagan, J. A., Kaufman, Y. J., Nakajima, T., Lavenu, F., Jankowiak, I., and Smirnov, A.: AERONET – A Federated Instrument Network and Data Archive for Aerosol Characterization, Remote Sens. Environ., 66, 1–16, 1998. a Höpfner, M., Boone, C. D., Funke, B., Glatthor, N., Grabowski, U., Günther, A., Kellmann, S., Kiefer, M., Linden, A., Lossow, S., Pumphrey, H. C., Read, W. G., Roiger, A., Stiller, G., Schlager, H., von Clarmann, T., and Wissmüller, K.: Sulfur dioxide (SO2) from MIPAS in the upper troposphere and lower stratosphere 2002–2012, Atmos. Chem. Phys., 15, 7017–7037, https://doi.org/10.5194/acp-15-7017-2015, 2015. a Jöckel, P., Tost, H., Pozzer, A., Brühl, C., Buchholz, J., Ganzeveld, L., Hoor, P., Kerkweg, A., Lawrence, M. G., Sander, R., Steil, B., Stiller, G., Tanarhte, M., Taraborrelli, D., van Aardenne, J., and Lelieveld, J.: The atmospheric chemistry general circulation model ECHAM5/MESSy1: consistent simulation of ozone from the surface to the mesosphere, Atmos. Chem. Phys., 6, 5067–5104, https://doi.org/10.5194/acp-6-5067-2006, 2006. a, b Jöckel, P., Kerkweg, A., Pozzer, A., Sander, R., Tost, H., Riede, H., Baumgaertner, A., Gromov, S., and Kern, B.: Development cycle 2 of the Modular Earth Submodel System (MESSy2), Geosci. Model Dev., 3, 717–752, https://doi.org/10.5194/gmd-3-717-2010, 2010. a Jöckel, P., Tost, H., Pozzer, A., Kunze, M., Kirner, O., Brenninkmeijer, C. A. M., Brinkop, S., Cai, D. 
S., Dyroff, C., Eckstein, J., Frank, F., Garny, H., Gottschaldt, K.-D., Graf, P., Grewe, V., Kerkweg, A., Kern, B., Matthes, S., Mertens, M., Meul, S., Neumaier, M., Nützel, M., Oberländer-Hayn, S., Ruhnke, R., Runde, T., Sander, R., Scharffe, D., and Zahn, A.: Earth System Chemistry integrated Modelling (ESCiMo) with the Modular Earth Submodel System (MESSy) version 2.51, Geosci. Model Dev., 9, 1153–1200, https://doi.org/10.5194/gmd-9-1153-2016, 2016. a, b, c Kinne, S., Schulz, M., Textor, C., Guibert, S., Balkanski, Y., Bauer, S. E., Berntsen, T., Berglen, T. F., Boucher, O., Chin, M., Collins, W., Dentener, F., Diehl, T., Easter, R., Feichter, J., Fillmore, D., Ghan, S., Ginoux, P., Gong, S., Grini, A., Hendricks, J., Herzog, M., Horowitz, L., Isaksen, I., Iversen, T., Kirkevåg, A., Kloster, S., Koch, D., Kristjansson, J. E., Krol, M., Lauer, A., Lamarque, J. F., Lesins, G., Liu, X., Lohmann, U., Montanaro, V., Myhre, G., Penner, J., Pitari, G., Reddy, S., Seland, O., Stier, P., Takemura, T., and Tie, X.: An AeroCom initial assessment – optical properties in aerosol component modules of global models, Atmos. Chem. Phys., 6, 1815–1834, https://doi.org/10.5194/acp-6-1815-2006, 2006. a Klingmüller, K., Metzger, S., Abdelkader, M., Karydis, V. A., Stenchikov, G. L., Pozzer, A., and Lelieveld, J.: Revised mineral dust emissions in the atmospheric chemistry–climate model EMAC (MESSy 2.52 DU_Astitha1 KKDU2017 patch), Geosci. Model Dev., 11, 989–1008, https://doi.org/10.5194/gmd-11-989-2018, 2018. a, b, c, d Kremser, S., Thomason, L. W., von Hobe, M., Hermann, M., Deshler, T., Timmreck, C., Toohey, M., Stenke, A., Schwarz, J. P., Weigel, R., Fueglistaler, S., Prata, F. J., Vernier, J.-P., Schlager, H., Barnes, J. E., Antuña-Marrero, J.-C., Fairlie, D., Palm, M., Mahieu, E., Notholt, J., Rex, M., Bingen, C., Vanhellemont, F. Bourassa, A., Plane, J. M. C., Klocke, D., Carn, S. A., Clarisse, L., Trickl, T., Neely, R., James, A. D., Rieger, L., Wilson, J. C., and Meland, B.: Stratospheric aerosol – Observations, processes, and impact on climate, Rev. Geophys., 54, 278–335, https://doi.org/10.1002/2015RG000511, 2016. a Kyrölä, E., Tamminen, J., Sofieva, V., Bertaux, J. L., Hauchecorne, A., Dalaudier, F., Fussen, D., Vanhellemont, F., Fanton d'Andon, O., Barrot, G., Guirlet, M., Fehr, T., and Saavedra de Miguel, L.: GOMOS O3, NO2, and NO3 observations in 2002–2008, Atmos. Chem. Phys., 10, 7723–7738, https://doi.org/10.5194/acp-10-7723-2010, 2010. a Lana, A., Bell, T. G., Simó, R., Vallina, S. M., Ballabrera-Poy, J., Kettle, A. J., Dachs, J., Bopp, L., Saltzman, E. S., Stefels, J., Johnson, J. E., and Liss, P. S.: An updated climatology of surface dimethlysulfide concentrations and emission fluxes in the global ocean, Global Biogeochem. Cy., 25, GB1004, https://doi.org/10.1029/2010GB003850, 2011. a Laurent, B., Marticorena, B., Bergametti, G., Léon, J. F., and Mahowald, N. M.: Modeling mineral dust emissions from the Sahara desert using new surface properties and soil database, J. Geophys. Res.-Atmos., 113, D14218, https://doi.org/10.1029/2007JD009484, 2008. a Laurent, B., Tegen, I., Heinold, B., Schepanski, K., Weinzierl, B., and Esselborn, M.: A model study of Saharan dust emissions and distributions during the SAMUM-1 campaign, J. Geophys. Res.-Atmos., 115, D21210, https://doi.org/10.1029/2009JD012995, 2010. a Llewellyn, E. J., Lloyd, N. D., Degenstein, D. A., Gattinger, R. L., Petelina, S. V., Bourassa, A. E., Wiensz, J. T., Ivanov, E. V., McDade, I. C., Solheim, B. H., McConnell, J. C, Haley, C. 
S., von Savigny, C., Sioris, C. E., McLinden, C. A., Griffioen, E., Kaminski, J., Evans, W. F. J., Puckrin, E., Strong, K., Wehrle, V., Hum, R. H., Kendall, D. J. W., Matsushita, J., Murtagh, D. P., Brohede, S., Stegman, J., Witt, G., Barnes, G., Payne, W. F., Piché, L., Smith, K., Warshaw, G., Deslauniers, D.-L., Marchand, P., Richardson, E. H., King, R. A., Wevers, I., McCreath, W., Kyrölä, E., Oikarinen, L., Leppelmeier, G. W., Auvinen, H., Mégie, G., Hauchecorne, A., Lefèvre, F., de La Nöe, J., Ricaud, P., Frisk, U., Sjoberg, F., von Schéele, F., and Nordh, L.: The OSIRIS instrument on the Odin spacecraft, Can. J. Phys., 82, 411–422, 2004. a Marticorena, B., Bergametti, G., Aumont, B., Callot, Y., N'Doumé, C., and Legrand, M.: Modeling the atmospheric dust cycle: 2. Simulation of Saharan dust sources, J. Geophys. Res.-Atmos., 102, 4387–4404, https://doi.org/10.1029/96JD02964, 1997. a Mills, M. J., Schmidt, A., Easter, R., Solomon, S., Kinnison, D. E., Ghan, S. J., Neely III, R. R., Marsh, D. R., Conley, A., Bardeen, C. G., and Gettelman, A.: Global volcanic aerosol properties derived from emissions, 1990–2014, using CESM1(WACCM), J. Geophys. Res.-Atmos., 121, 2332–2348, https://doi.org/10.1002/2015JD024290, 2016. a Mills, M. J., Richter, J. H., Tilmes, S., Kravitz, B., MacMartin, D. G., Glanville, A. A., and Kinnison, D. E.: Radiative and chemical response to interactive stratospheric sulfate aerosols in fully coupled CESM1(WACCM), J. Geophys. Res.-Atmos., 122, 13061–13078, https://doi.org/10.1002/2017JD027006, 2017. a Montzka, S. A., Calvert, P., Hall, B. D., Elkins, J. W., Conway, T. J., Tans, P. P., and Sweeney, C.: On the global distribution, seasonality, and budget of atmospheric carbonyl sulfide and some similarities with CO2, J. Geophys. Res., 112, D09302, https://doi.org/10.1029/2006JD007665, 2007. a North, P.: Estimation of aerosol opacity and land surface bidirectional reflectance from ATSR-2 dual-angle imagery: Operational method and validation, J. Geophys. Res., 107, 4149, https://doi.org/10.1029/2000JD000207, 2002. a Pérez, C., Nickovic, S., Baldasano, J. M., Sicard, M., Rocadenbosch, F., and Cachorro, V. E.: A long Saharan dust event over the western Mediterranean: Lidar, Sun photometer observations, and regional dust modeling, J. Geophys. Res.-Atmos., 111, D15214, https://doi.org/10.1029/2005JD006579, 2006. a Popp, T., de Leeuw, G., Bingen, C., Brühl, C., Capelle, V., Chedin, A., Clarisse, L., Dubovik, O., Grainger, R., Griesfeller, J., Heckel, A., Kinne, S., Klüser, L., Kosmale, M., Kolmonen, P., Lelli, L., Litvinov, P., Mei, L., North, P., Pinnock, S., Povey, A., Robert, C., Schulz, M., Sogacheva, L., Stebel, K., Stein-Zweers, D., Thomas, G., Tilstra, L. G., Vandenbussche, S., Veefkind, P., Vountas, M., and Xue, Y.: Development, production and evaluation of aerosol climate data records from European satellite observations (Aerosol_cci), Remote Sens., 8, 421, https://doi.org/10.3390/rs8050421, 2016. a, b, c, d, e Pozzer, A., Jöckel, P., Sander, R., Williams, J., Ganzeveld, L., and Lelieveld, J.: Technical Note: The MESSy-submodel AIRSEA calculating the air-sea exchange of chemical species, Atmos. Chem. Phys., 6, 5435–5444, https://doi.org/10.5194/acp-6-5435-2006, 2006. a Pringle, K. J., Tost, H., Message, S., Steil, B., Giannadaki, D., Nenes, A., Fountoukis, C., Stier, P., Vignati, E., and Lelieveld, J.: Description and evaluation of GMXe: a new aerosol submodel for global simulations (v1), Geosci. Model Dev., 3, 391–412, https://doi.org/10.5194/gmd-3-391-2010, 2010. 
a Ridley, D. A., Solomon, S., Barnes, J.E ., Burlakov, V. D., Deshler, T., Dolgii, S. I., Herber, A. B., Nagai, T., Neely III, R. R., Nevzorov, A. V., Ritter, C., Sakai, T., Santer, B. D., Sato, M., Schmidt, A., Uchino, O., and Vernier, J. P.: Total volcanic stratospheric aerosol optical depths and implications for global climate change, Geophys. Res. Lett., 41, 7763–7769, https://doi.org/10.1002/2014GL061541, 2014. a Rieger, L. A., Bourassa, A. E., and Degenstein, D. A.: Stratospheric aerosol particle size information in Odin-OSIRIS limb scatter spectra, Atmos. Meas. Tech., 7, 507–522, https://doi.org/10.5194/amt-7-507-2014, 2014. a Rieger, L. A., Bourassa, A. E., and Degenstein, D. A.: Merging the OSIRIS and SAGE II stratospheric aerosol records, J. Geophys. Res.-Atmos., 120, 8890–8904, https://doi.org/10.1002/2015JD023133, 2015. a Rieger, L. A., Malinina, E. P., Rozanov, A. V., Burrows, J. P., Bourassa, A. E., and Degenstein, D. A.: A study of the approaches used to retrieve aerosol extinction, as applied to limb observations made by OSIRIS and SCIAMACHY, Atmos. Meas. Tech., 11, 3433–3445, https://doi.org/10.5194/amt-11-3433-2018, 2018. a Robert, C. É., Bingen, C., Vanhellemont, F., Mateshvili, N., Dekemper, E., Tétard, C., Fussen, D., Bourassa, A., and Zehner, C.: AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations – Part 2: Intercomparisons, Atmos. Meas. Tech., 9, 4701–4718, https://doi.org/10.5194/amt-9-4701-2016, 2016.  a, b Santer, B. D., Bonfils, C., Painter, J. F., Zelinka, M. D., Mears, C., Solomon, S., Schmidt, G. A., Fyfe, J. C., Cole, J. N. S., Nazarenko, L., Taylor, K. E., and Wentz, F. J.: Volcanic contribution to decadal changes in tropospheric temperature, Nat. Geosci., 7, 185–189, 2014. a, b Solomon, S., Daniel, J. S., Neely III, R. R., Vernier, J. P., Dutton, E. G., and Thomason, L. W.: The persistently variable “background” stratospheric aerosol layer and global climate change, Science, 333, 866–870, 2011. a, b, c Spyrou, C., Mitsakou, C., Kallos, G., Louka, P., and Vlastou, G.: An improved limited area model for describing the dust cycle in the atmosphere, J. Geophys. Res.-Atmos., 115, D17211, https://doi.org/10.1029/2009JD013682, 2010. a Tegen, I.: Impact of vegetation and preferential source areas on global dust aerosol: Results from a model study, J. Geophys. Res., 107, 4576, https://doi.org/10.1029/2001JD000963, 2002. a Tiedtke, M.: A comprehensive mass flux scheme for cumulus parameterization in large-scale models, Mon. Weather Rev., 117, 1779–1800, 1989. a Timmreck, C., Mann, G. W., Aquila, V., Hommel, R., Lee, L. A., Schmidt, A., Brühl, C., Carn, S., Chin, M., Dhomse, S. S., Diehl, T., English, J. M., Mills, M. J., Neely, R., Sheng, J., Toohey, M., and Weisenstein, D.: The Interactive Stratospheric Aerosol Model Intercomparison Project (ISA-MIP): motivation and experimental design, Geosci. Model Dev., 11, 2581–2608, https://doi.org/10.5194/gmd-11-2581-2018, 2018. a Van Damme, M., Whitburn, S., Clarisse, L., Clerbaux, C., Hurtmans, D., and Coheur, P.-F.: Version 2 of the IASI NH3 neural network retrieval algorithm: near-real-time and reanalysed datasets, Atmos. Meas. Tech., 10, 4905–4914, https://doi.org/10.5194/amt-10-4905-2017, 2017. a Vanhellemont, F., Mateshvili, N., Blanot, L., Robert, C. 
É., Bingen, C., Sofieva, V., Dalaudier, F., Tétard, C., Fussen, D., Dekemper, E., Kyrölä, E., Laine, M., Tamminen, J., and Zehner, C.: AerGOM, an improved algorithm for stratospheric aerosol extinction retrieval from GOMOS observations – Part 1: Algorithm description, Atmos. Meas. Tech., 9, 4687–4700, https://doi.org/10.5194/amt-9-4687-2016, 2016. a Whitburn, S., Van Damme, M., Clarisse, L., Bauduin, S., Heald, C. L., Hadji-Lazaro, J., Hurtmans, D., Zondlo, M. A., Clerbaux, C., and Coheur, P.-F.: A flexible and robust neural network IASI-NH3 retrieval algorithm, J. Geophys. Res.-Atmos., 121, 6581–6599, https://doi.org/10.1002/2016JD024828, 2016. a Zender, C. S., Bian, H., and Newman, D.: Mineral Dust Entrainment and Deposition (DEAD) model: Description and 1990s dust climatology, J. Geophys. Res.-Atmos., 108, 4416, https://doi.org/10.1029/2002JD002775, 2003. a
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7968822717666626, "perplexity": 10555.09117613012}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00415.warc.gz"}
http://rullf2.xs4all.nl/msct/footnode.html
Footnotes

3.1 (...function): In [Grassberger, 1986, Broggi, 1988], is kept constant so that the correction function is a constant. It should be investigated how the correction function can be incorporated.

3.2 (...unclear): Somorjai [Somorjai, 1986] suggests to use expressions derived by Fukunaga and Hostetler [Fukunaga and Hostetler, 1973] for the choice of and . However, here we do not use these expressions since for , and the dimension of the Rössler attractor is very close to 2.

5.1 (...found): With data from an analog-to-digital converter (ADC) one could use .

7.1 (...spectrum): The power spectrum was computed using NAG [Numerical Algorithms Group, 1983] routine G13CAF, with mean correction, split cosine bell taper coefficient 0.1 (as in the NAG example), and the Parzen window chosen such that the spectral density estimates have approximately 28 degrees of freedom, which is close to the recommended value of 32 in [Beauchamp and Yuen, 1979].

7.2 (...conditions): This problem may be avoided by using eq. (5.61) instead of eq. (5.86).

7.3 (...matrix): We used the Euclidean norm here in accordance with [Albano et al., 1988].

7.4 (...themselves): Note that the number of distances is too large compared to the length of the time series (see section 5.6.1). This problem can be alleviated somewhat by multiplying the confidence intervals by a factor , where is the number of distances drawn and is the number of distances one is allowed to draw (see section 5.6.2). This will result in conservative confidence intervals.
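Footnote 7.1 describes a tapered, windowed spectral estimate. As a rough modern analogue (an illustration only; it does not reproduce the NAG G13CAF routine, its Parzen lag-window smoothing, or the original data), a split cosine bell taper with coefficient 0.1 corresponds to a Tukey window with alpha = 0.1:

```python
import numpy as np
from scipy.signal import periodogram

# Synthetic stand-in for the original time series (assumption for illustration).
rng = np.random.default_rng(0)
t = np.arange(4096)
x = np.sin(2 * np.pi * 0.07 * t) + rng.normal(scale=0.5, size=t.size)

# Mean correction and a split cosine bell taper with coefficient 0.1
# (a Tukey window with alpha = 0.1); the additional Parzen-window smoothing
# to a target number of degrees of freedom, as in G13CAF, is omitted here.
freqs, psd = periodogram(x, window=('tukey', 0.1), detrend='constant')
print(freqs[np.argmax(psd)])  # should be close to the input frequency 0.07
```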
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964562177658081, "perplexity": 1209.1537249975258}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999210.22/warc/CC-MAIN-20190620105329-20190620131329-00330.warc.gz"}
http://tex.stackexchange.com/questions/13071/option-cmyk-for-xcolor-package-does-not-produce-a-cmyk-pdf
# Option cmyk for xcolor package does not produce a CMYK PDF I am trying to produce a CMYK PDF file to pass onto a printshop. When I use the [cmyk] option for the xcolor package, the colours in the resulting PDF do look muted and less vibrant as is the case with the CMYK colorspace. For example, \documentclass{minimal} \usepackage[cmyk]{xcolor} \begin{document} \textcolor{blue}{\fontsize{24}{28}\selectfont A} \end{document} when processed by pdflatex produces a PDF file that appears onscreen like a muted, darkish CMYK colorspace document, but when I run ImageMagick's identify command on the PDF using identify -verbose cmyk.pdf | grep Colorspace I get Colorspace: RGB Surely the PDF uses only one of CMYK or RGB for colour. How can I reliably tell which? Also, if it is indeed an RGB PDF why does it appear so different from the version I get if the line \usepackage[cmyk]{xcolor} is replaced by \usepackage[rgb]{xcolor} - tex.stackexchange.com/questions/9961/pdf-colour-model-and-latex has a lot to say about color and PDF. – Christian Lindig Mar 9 '11 at 18:45 With using \pdfcompresslevel=0 I'll get the pdf with no compressed streams and can see what happens inside the pdf. With the cmyk option I get: stream 0 0 0 1 k 0 0 0 1 K 0 g 0 G 0 0 0 1 k 0 0 0 1 K 1 1 0 0 k 1 1 0 0 K BT /F15 24.7871 Tf 91.925 752.955 Td [(A)]TJ 0 0 0 1 k 0 0 0 1 K 0 0 0 1 k 0 0 0 1 K 0 0 0 1 k 0 0 0 1 K ET endstream and with the rgb option: stream 0 0 0 rg 0 0 0 RG 0 g 0 G 0 0 0 rg 0 0 0 RG 0 0 1 rg 0 0 1 RG BT /F15 24.7871 Tf 91.925 752.955 Td [(A)]TJ 0 0 0 rg 0 0 0 RG 0 0 0 rg 0 0 0 RG 0 0 0 rg 0 0 0 RG ET endstream which is what I would expect, a correct color setting. But printing a RGB color is not the same as printing a CMYK color ... - How did you get that? – Yiannis Lazarides Mar 9 '11 at 21:40 \pdfcompresslevel=0 – Herbert Jan 18 at 7:19 There's not really such a thing as a 'CMYK PDF' or an 'RGB PDF'. PDFs can contain objects coloured in RGB and CMYK (and many other) colour spaces. See my answer here for some details. So your statement "Surely the PDF uses only one of CMYK or RGB for colour." is wrong, and it's unclear to me on what basis "identify -verbose" is deciding that it is RGB. Maybe the colorspace just defaults to RGB, even for formats where that doesn't make sense? As for your question: "How can I reliably tell which?", in addition to @Herbert's suggestion to look at the uncompressed PDF stream (if that means anything to you) you can use Adobe Acrobat Professional has various tools to see which colour spaces are being used and where. An "Output preview" tool; a "Preflight" tool; a "convert colors" tool; etc. - Thank you for setting me right. Is there a Linux-friendly software package to do what Adobe Acrobat Professional does above? Thanks. – chandra Mar 14 '11 at 14:47 +1---I just ran identify on a PDF created with ConTeXt using only CMYK colors and got Colorspace: RGB as well. Makes me suspect ImageMagick isn't telling the whole story. – Sharpie May 9 '11 at 4:56
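A rough way to count which colour operators end up in the page content streams, without Acrobat: if the PDF was written with uncompressed streams (e.g. with \pdfcompresslevel=0 as shown above), the operators can be scanned directly. The sketch below assumes exactly that and is only a heuristic; the operator letters (rg/RG for RGB, k/K for CMYK, g/G for gray) are the ones visible in the streams quoted above.

```python
import re
import sys

# Usage: python colorscan.py file.pdf   (the file must have uncompressed streams)
data = open(sys.argv[1], 'rb').read().decode('latin-1')

# Crude extraction of content between "stream" ... "endstream" markers.
streams = re.findall(r'\bstream\b(.*?)\bendstream\b', data, re.S)
tokens = ' '.join(streams).split()

rgb  = sum(tok in ('rg', 'RG') for tok in tokens)
cmyk = sum(tok in ('k', 'K') for tok in tokens)
gray = sum(tok in ('g', 'G') for tok in tokens)
print(f'RGB ops: {rgb}, CMYK ops: {cmyk}, Gray ops: {gray}')
```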
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7572903633117676, "perplexity": 3078.3613347453634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698203920/warc/CC-MAIN-20130516095643-00030-ip-10-60-113-184.ec2.internal.warc.gz"}
http://spmphysics.onlinetuition.com.my/2008/06/application-of-force-on-current.html
# Application of the Force on a Current-Carrying Conductor in a Magnetic Field - Moving Coil Meter

### Light Indicator

A light indicator, which has a lower inertia, is used to increase the sensitivity of the meter.

### Linear Scale

1. Due to the radial magnetic field and the cylindrical soft-iron core, a linear scale is produced.
2. A linear scale is more accurate and easier to read.

### Mirror

1. A mirror is used to prevent parallax error.
2. When the observer's eye is exactly above the indicator, the indicator will cover its own image on the mirror.
3. Taking the reading from this position prevents parallax error.

### Curved Permanent Magnet

1. A curved permanent magnet is used to produce a radial field.
2. A radial field is a magnetic field whose field lines point either away from or towards the centre of the field.
3. A radial field can be focused by a cylindrical soft-iron core.

### Rectangular Coils

1. When a current flows through the coil, a force is generated on the coil due to the interaction between the magnetic field of the permanent magnet and the current in the coil.
2. The force turns the coil, which in turn moves the indicator.

### Cylindrical Soft-Iron Core

1. A cylindrical soft-iron core is placed inside the radial field produced by the curved magnet.
2. A soft-iron core can focus the magnetic field of the permanent magnet.

### Hair Spring

1. The deflection of the coil and the indicator stops when the force is balanced by the opposing force from the hair spring.
2. The angle of deflection is directly proportional to the magnitude of the current in the coil, as shown in the relation below.
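For reference, the proportionality in the last point follows from balancing the magnetic torque on a coil of N turns and area A carrying current I in a radial field of flux density B against the restoring torque of the hair spring with spring constant k (a standard textbook relation, stated here for completeness):

```latex
% Torque balance for a moving coil meter in a radial field
\[
  NBIA = k\theta
  \quad\Longrightarrow\quad
  \theta = \frac{NBA}{k}\, I ,
\]
% so the deflection angle \theta is directly proportional to the current I.
```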
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.815639853477478, "perplexity": 674.8677042491863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320476.39/warc/CC-MAIN-20170625083108-20170625103108-00446.warc.gz"}
http://www.latexmattress.org/types-of-latex/natural-latex.php
## Natural Latex

Natural latex is secreted as part of the natural biological processes of various plants, and most prolifically by Hevea brasiliensis, the rubber tree plant. Natural latex is a mixture of organic chemical compounds produced by specialized cells called lacticifers, which are basically modified phloem cells. The composition of this liquid botanical latex varies slightly from plant to plant, but mature rubber trees produce latex on a daily basis. Botanists believe latex to be a natural plant adaptation that protects certain species from insect predation.

## Harvesting Natural Latex

Natural latex has been grown on rubber estates for commercial production since the 1800s. Liquid latex is harvested from rubber trees without damaging the plant, so a single rubber tree can produce latex for as many as 50 years. According to the College of Agriculture, Biotechnology, and Natural Resources at the University of Nevada, Reno, the commercial genetic lines of Hevea used today were crafted from hundreds of years of seed saving and specialized breeding programs. Rubber trees can be tapped at around five to eight years of age, according to Purdue University. The liquid is harvested by hand each day by workers, known as tappers, who insert a metal tap into each rubber tree, following time-tested harvesting methods. Each day, the tappers harvest the latex from collection cups hung below the taps. An experienced tapper can process 200 or 300 trees over the course of about three hours.

## Latex Manufacturing

The secret to the versatility of latex lies in a cell structure that's adaptable to a wide range of manufacturing processes. Almost every rubber manufacturing process uses some form of vulcanization, which is a means of altering the chemical structure of latex by adding sulfur or other curatives to create cross-links, or bridges, between polymer chains. The result is a stronger, more stable, and usually solid rubber structure. The physical properties of the finished rubber depend on the types of additives, the exact vulcanization or curing processes, and any molds or blowing agents used to manipulate the final shape. Read about the different types of latex to find out more concerning the characteristics and end result of each. Some simple rubber curing processes, like the Dunlop latex foam technique, can take place right on the estate. Botanical latex is treated with anticoagulation compounds (usually ammonia) and transported in its liquid form to dedicated laboratories for more complicated chemical processes, like the cold-dipped vulcanization technique used for stretchy medical-grade rubber, or the patented Talalay latex foam method.

## Uses of Natural Latex

The unique properties of botanical latex are adaptable to a wide range of applications. More than a century of practice with creative manufacturing techniques has led to a phenomenal diversity of rubber uses. From pencil erasers to medical-grade rubber gloves to foam mattresses, latex is a major component in everyday items that we take for granted.

## Natural Latex vs. Blended Latex

A form of latex can be synthesized using man-made components in laboratory settings. Synthetic latex is a passable imitation of real botanical latex and can be used in many of the same vulcanization processes. "Blended latex" mattresses contain a mix of synthetic and natural latex. In most cases, this involves a synthetic core to keep down costs, with a botanical latex sleep surface. 
Natural latex tends to be used for the sleep surface since it has superior loft and support compared to synthetic versions. ## Natural Latex Pros • Superior loft and bouncy support. • Renewable resource obtained through sustainable farming practices. • Natural product that minimizes the use of synthetic chemicals.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8275648951530457, "perplexity": 5213.831767103088}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860121090.75/warc/CC-MAIN-20160428161521-00091-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1152/2/d/b/
# Properties Label 1152.2.d.b Level $1152$ Weight $2$ Character orbit 1152.d Analytic conductor $9.199$ Analytic rank $0$ Dimension $2$ CM discriminant -4 Inner twists $4$ # Related objects ## Newspace parameters Level: $$N$$ $$=$$ $$1152 = 2^{7} \cdot 3^{2}$$ Weight: $$k$$ $$=$$ $$2$$ Character orbit: $$[\chi]$$ $$=$$ 1152.d (of order $$2$$, degree $$1$$, minimal) ## Newform invariants Self dual: no Analytic conductor: $$9.19876631285$$ Analytic rank: $$0$$ Dimension: $$2$$ Coefficient field: $$\Q(\sqrt{-1})$$ Defining polynomial: $$x^{2} + 1$$ Coefficient ring: $$\Z[a_1, \ldots, a_{5}]$$ Coefficient ring index: $$2$$ Twist minimal: yes Sato-Tate group: $\mathrm{U}(1)[D_{2}]$ ## $q$-expansion Coefficients of the $$q$$-expansion are expressed in terms of $$i = \sqrt{-1}$$. We also show the integral $$q$$-expansion of the trace form. $$f(q)$$ $$=$$ $$q + 2 i q^{5} +O(q^{10})$$ $$q + 2 i q^{5} + 4 i q^{13} -8 q^{17} + q^{25} + 10 i q^{29} -12 i q^{37} -8 q^{41} -7 q^{49} + 14 i q^{53} + 12 i q^{61} -8 q^{65} -6 q^{73} -16 i q^{85} -16 q^{89} + 18 q^{97} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q + O(q^{10})$$ $$2q - 16q^{17} + 2q^{25} - 16q^{41} - 14q^{49} - 16q^{65} - 12q^{73} - 32q^{89} + 36q^{97} + O(q^{100})$$ ## Character values We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/1152\mathbb{Z}\right)^\times$$. $$n$$ $$127$$ $$641$$ $$901$$ $$\chi(n)$$ $$1$$ $$1$$ $$-1$$ ## Embeddings For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below. For more information on an embedded modular form you can click on its label. Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$ 577.1 − 1.00000i 1.00000i 0 0 0 2.00000i 0 0 0 0 0 577.2 0 0 0 2.00000i 0 0 0 0 0 $$n$$: e.g. 
2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles ## Inner twists Char Parity Ord Mult Type 1.a even 1 1 trivial 4.b odd 2 1 CM by $$\Q(\sqrt{-1})$$ 8.b even 2 1 inner 8.d odd 2 1 inner ## Twists By twisting character orbit Char Parity Ord Mult Type Twist Min Dim 1.a even 1 1 trivial 1152.2.d.b 2 3.b odd 2 1 1152.2.d.e yes 2 4.b odd 2 1 CM 1152.2.d.b 2 8.b even 2 1 inner 1152.2.d.b 2 8.d odd 2 1 inner 1152.2.d.b 2 12.b even 2 1 1152.2.d.e yes 2 16.e even 4 1 2304.2.a.d 1 16.e even 4 1 2304.2.a.m 1 16.f odd 4 1 2304.2.a.d 1 16.f odd 4 1 2304.2.a.m 1 24.f even 2 1 1152.2.d.e yes 2 24.h odd 2 1 1152.2.d.e yes 2 48.i odd 4 1 2304.2.a.c 1 48.i odd 4 1 2304.2.a.n 1 48.k even 4 1 2304.2.a.c 1 48.k even 4 1 2304.2.a.n 1 By twisted newform orbit Twist Min Dim Char Parity Ord Mult Type 1152.2.d.b 2 1.a even 1 1 trivial 1152.2.d.b 2 4.b odd 2 1 CM 1152.2.d.b 2 8.b even 2 1 inner 1152.2.d.b 2 8.d odd 2 1 inner 1152.2.d.e yes 2 3.b odd 2 1 1152.2.d.e yes 2 12.b even 2 1 1152.2.d.e yes 2 24.f even 2 1 1152.2.d.e yes 2 24.h odd 2 1 2304.2.a.c 1 48.i odd 4 1 2304.2.a.c 1 48.k even 4 1 2304.2.a.d 1 16.e even 4 1 2304.2.a.d 1 16.f odd 4 1 2304.2.a.m 1 16.e even 4 1 2304.2.a.m 1 16.f odd 4 1 2304.2.a.n 1 48.i odd 4 1 2304.2.a.n 1 48.k even 4 1 ## Hecke kernels This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(1152, [\chi])$$: $$T_{5}^{2} + 4$$ $$T_{7}$$ $$T_{17} + 8$$ ## Hecke characteristic polynomials $p$ $F_p(T)$ $2$ $$T^{2}$$ $3$ $$T^{2}$$ $5$ $$4 + T^{2}$$ $7$ $$T^{2}$$ $11$ $$T^{2}$$ $13$ $$16 + T^{2}$$ $17$ $$( 8 + T )^{2}$$ $19$ $$T^{2}$$ $23$ $$T^{2}$$ $29$ $$100 + T^{2}$$ $31$ $$T^{2}$$ $37$ $$144 + T^{2}$$ $41$ $$( 8 + T )^{2}$$ $43$ $$T^{2}$$ $47$ $$T^{2}$$ $53$ $$196 + T^{2}$$ $59$ $$T^{2}$$ $61$ $$144 + T^{2}$$ $67$ $$T^{2}$$ $71$ $$T^{2}$$ $73$ $$( 6 + T )^{2}$$ $79$ $$T^{2}$$ $83$ $$T^{2}$$ $89$ $$( 16 + T )^{2}$$ $97$ $$( -18 + T )^{2}$$
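As a quick consistency check of the data above, each displayed Hecke characteristic polynomial is the product of (T − a_p) over the two embeddings of the coefficient field. A small sketch with sympy, using a few coefficients read off the q-expansion (this is only a verification of the listed values, not how the LMFDB computes them):

```python
from sympy import symbols, I, expand

T = symbols('T')

# Coefficients from the q-expansion above, as pairs over the two embeddings
# (the generator i maps to +i and -i respectively).
coeffs = {
    5:  (2*I, -2*I),
    13: (4*I, -4*I),
    17: (-8, -8),
    29: (10*I, -10*I),
    97: (18, 18),
}

for p, (a1, a2) in coeffs.items():
    charpoly = expand((T - a1) * (T - a2))
    print(p, charpoly)   # e.g. 5 -> T**2 + 4, 17 -> T**2 + 16*T + 64
```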
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9655739665031433, "perplexity": 7759.249135106587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154471.78/warc/CC-MAIN-20210803191307-20210803221307-00530.warc.gz"}
http://math.stackexchange.com/questions/441407/lim-limits-n-to-infty-lim-limits-x-to-0fnx
# $\lim\limits_{n\to\infty}\lim\limits_{x\to\ 0}f^{(n)}(x)$ Let $f(x)=\exp(\sqrt{x})+\exp(-\sqrt{x})=2\cosh(\sqrt{x})$. How to calculate $\lim\limits_{n\to\infty}\lim\limits_{x\to\ 0}f^{(n)}(x)$ Using power series, we have $$f(x)=2\sum\limits_{k=0}^{\infty}\frac{x^k}{(2k)!}$$ so the $n$th derivative is: $$f^{(n)}(x)=2\sum\limits_{k=n}^{\infty}\frac{k!}{(k-n)!(2k)!}x^{k-n}$$ so $$\lim\limits_{x\to 0}f^{(n)}(x)=\frac{2n!}{(2n)!}$$ and hence $$\lim\limits_{n\to\infty}\lim\limits_{x\to\ 0}f^{(n)}(x)=0$$ Can one do it by finding a closed form expression for $f^{(n)}(x)$? - Are you allowed to use the Taylor series? –  Michael Jul 11 '13 at 16:36 @Michael This isn't a homework, you can use anything. You gave me an idea... –  metacompactness Jul 11 '13 at 16:46 "Can one do it by finding a closed form expression...?" But that is usually very tortuous! –  Pedro Tamaroff Jul 11 '13 at 17:20 @ThomasAndrews $$f(x^2)\longrightarrow 2xf'(x^2)\longrightarrow 2f'(x^2)+4x^2f''(x^2)$$ and the expression becomes more and more complicated. –  metacompactness Jul 11 '13 at 17:20 Yeah, I was talking about an inductive method, @metacompactness. –  Thomas Andrews Jul 11 '13 at 17:21 Maple does this in terms of a Bessel function $$2\,\sum _{k=n}^{\infty }{\frac {{x}^{k-n}k!}{ \left( k-n \right) !\, \left( 2\,k \right) !}}={\frac {n!\, {{\rm I}_{-1/2+n}\left(\sqrt {x}\right)}\Gamma \left( 1/2+n \right) { 2}^{1/2+n}}{ \left( 2\,n \right) !\,{x}^{-1/4+1/2\,n}}}$$ edit To do this by hand, recall $${\it I_q} \left( y \right) =\sum _{k=0}^{\infty }{\frac { \left( y/2 \right) ^{2 k+q}}{k!\left( k+q \right) !}}$$ where non-integer factorial is to be expressed in terms of the Gamma function. - Wow, a product of a Bessel function and a gamma function, that's something you don't see everyday; these CAS gives unexpected results sometimes. I think it's hard to prove it without the power series but can we prove it (without Maple's help) from the power series $2\sum\limits_{k=n}^{\infty}\frac{k!}{(k-n)!(2k)!}x^{k-n}$ ? –  metacompactness Jul 11 '13 at 19:46 Two factorials in the denominator suggests a Bessel series. Recall the Bessel function series for $I_q(x)$ and try to put our series into this form. –  GEdgar Jul 11 '13 at 21:03 What about the series of the gamma function? –  metacompactness Jul 11 '13 at 21:06 That Gamma function, together with the factorials, are just to get the gamma functions needed in the Bessel series. –  GEdgar Jul 11 '13 at 21:09 Using the fact that $\bigl(\sqrt x\bigr)^{(j)}=\frac{(-1)^{j-1}}{2^j}\,(2j-3)!!x^{-(2j-1)/2}$ for all $j\geq1$ (here $!!$ denotes the double factorial; in particular $(-1)!!=1$), together with Faà di Bruno's formula, we obtain, for all $n\geq1$, a formula for the $n$-th derivative of the function $g(x)=\exp(\sqrt x)$. In the formula below, the tuple $\mathbf m=(m_1,\dots,m_n)$ ranges over the tuples in $\mathbb N^n$ such that $\sum_{j=1} ^njm_j=n$ (partitions of the number $n$), and $|\mathbf m|=\sum_{j=1}^nm_j$: \begin{align*} g^{(n)}(x)=&\,g(x)\sum_{\mathbf m}\binom n{m_1,\dots,m_n}\prod_{j=1}^n\Biggl[\frac{(-1)^{j-1}(2j-3)!!x^{-(2j-1)/2}}{2^jj!}\Biggr]^{m_j}\\ =&\,g(x)\sum_{\mathbf m}a_{\mathbf m}\,\frac{x^{|\mathbf m|/2}} {x^n}\,. \end{align*} Since the value of $|\mathbf m|$ varies between $1$ and $n$ as $\mathbf m$ varies over all the partitions on $n$, it follows that for $x>0$ we can write $$g^{(n)}(x^2)=\frac{P(x)e^x}{x^{2n-1}}\,,$$ where $P(T)=P_n(T)$ is a polynomial in $T$ of degree $n-1$. 
On the other hand, the $n$-th derivative of the function $h(x)=\exp(-\sqrt x)$ is very similar, because of the factor $-1$ that multiplies the inner square root: $$h^{(n)}(x)=h(x)\sum_{\mathbf m}(-1)^{|\mathbf m|}\,a_{\mathbf m}\frac{x^{|\mathbf m|/2}}{x^n}\,,$$ which implies, for $x>0$: \begin{align*} h^{(n)}(x^2)=&\,e^{-x}\sum_{\mathbf m}a_{\mathbf m}\frac{(-x)^{|\mathbf m|}}{(-x)^{2n}}\\ =&\,-\frac{P(-x)e^{-x}}{x^{2n-1}}\,. \end{align*} Since we are interested at the limit $\lim_{x\to0^+}\bigl[g^{(n)}(x)+h^{(n)}(x)\bigr]$, we can change $x$ by $x^2$ (with $x>0$), and so the desired limit is equal to $$L=\lim_{x\to0^+}\frac{Q(x)-Q(-x)}{x^{2n-1}}=\lim_{x\to0^+}\frac{H(x)}{x^{2n-1}}\,,$$ where $Q(x)=P(x)e^x$ and $H(x)=Q(x)-Q(-x)$. Since $H(0)=0$ and because of the term $x^{2n-1}$, we are led to use L'Hôpital's rule, hopefully $2n-1$ times (well, not hopefully but instead certainly, because we already know the result). We have $$H^{(r)}(0)=\bigl[Q^{(r)}(x)-(-1)^rQ^{(r)}(-x)\bigr]\Bigl|_{x=0}=\begin{cases} 0,&\ \text{if}\ r\ \text{is even};\\ 2Q^{(r)}(0),&\,\ \text{if}\ r\ \text{is odd}. \end{cases}$$ Finally, if $P(T)=\sum_{k=0}^{n-1}b_kT^k$, then by general Leibniz rule we have \begin{align*} Q^{(r)}(0)=&\,\biggr[\sum_{k=0}^r\binom rkP^{(k)}(x)\ \frac{d^{r-k}\ e^x}{dx^{r-k}}\biggr]\Biggl|_{x=0}=\sum_{k=0}^r\binom rkP^{(k)}(0)\\ =&\,\sum_{k=0}^r\binom rk\,k!b_k\,.\tag{\boldsymbol\ast} \end{align*} At this point it is necessary to determine the coefficients of the polynomial $P(T)$. Remember that actually $P(T)$ is a polynomial that depends on $n$, and that for all $n\geq1$ we have $$g^{(n)}(x^2)=\frac{P_n(x)e^x}{x^{2n-1}}\,.$$ Defining $R_n(T)=2^nP_n(T)$ and using the equality $2xg^{(n)}(x^2)=\bigl[g^{(n-1)}(x^2)\bigr]^\prime$ for $n\geq2$ we obtain (exercise) the recurrence $$R_n(T)=(T-2n+3)R_{n-1}(T)+TR_{n-1}^\prime(T),\ \text{for all}\ n\geq2\,,$$ and initial value $R_1(T)=1$ (exercise). Writing $R_n(T)=\sum_{k=0}^{n-1}r_{n,k}T^k$, the recurrence becomes (exercise) \begin{align*} r_{n,n-1}=r_{n-1,n-2},&\quad\text{for}\ n\geq2;\\ r_{n,k}=(3-2n+k)r_{n-1,k}+r_{n-1,k-1},&\quad\text{for}\ n\geq2\ \text{and}\ k=1,\dots,n-2;\\ r_{n,0}=(3-2n)r_{n-1,0},&\quad\text{for}\ n\geq2\,. \end{align*} From this we see that $r_{n,n-1}=1$ for all $n\geq1$ and $r_{n,0}=(-1)^{n-1}(2n-3)!!$ for all $n\geq1$. Moreover, for all $m\geq1$ and all $t\geq2$ we get $$r_{m+t,m}=r_{t,0}+\sum_{k=1}^m(r_{k+t,k}-r_{(k-1)+t,k-1})=(-1)^{t-1}(2t-3)!!+\sum_{k=1}^m(3-2t-k)r_{k+(t-1),k}\,.$$ With this we will be able to iteratively determine the values $r_{m+t,m}$, starting with $t=2$. I used Mathematica to do this, and after some trials I discovered the following formula: $$r_{m+t,m}=\frac{(-1)^{t-1}}{(2t-2)!!}\,(m+1)\cdots(m+2t-2)=\frac{(-1)^{t-1}(m+2t-2)!}{2^{t-1}(t-1)!m!}\,,$$ which can be rewritten as \begin{align*} r_{n,k}=&\,\frac{(-1)^{n-k-1}(2n-k-2)!}{2^{n-k-1}(n-k-1)!k!}\\ =&\,(n-1)!\frac{(-1)^{n-k-1}}{2^{n-k-1}k!}\binom{2n-2-k}{n-1}\,,\ \text{for}\ n\geq2\ \text{and}\ k=1,\dots,n-2\,. \end{align*} Actually, the formula above continue to hold at the remaining cases. Therefore we have $$P_n(T)=(n-1)!\sum_{k=0}^{n-1}\frac{(-1)^{n-k-1}}{2^{2n-k-1}k!}\binom{2n-2-k}{n-1}\,T^k\,,$$ and we would like to show directly (see $(\boldsymbol\ast)$) that for all $r$ odd we have $$\sum_{k=0}^r\binom rk\,\frac{(-1)^{n-k-1}}{2^{2n-k-1}}\binom{2n-2-k}{n-1}=\,\begin{cases} 0,&\ \text{if}\ r<2n-1;\\ 1/2,&\ \text{if}\ r=2n-1. 
\end{cases}\tag{\boldsymbol{\ast\ast}}$$ I don't have any idea about how to prove equality $(\boldsymbol{\ast\ast})$ above. SUMMARY AND MORAL Your desired, explicit formula (that is, without using power series) for $f^{(n)}(x)$ is as follows: \begin{align*} f^{(n)}(x)=&\,g^{(n)}(x)+h^{(n)}(x)\\ =&\,\frac{(n-1)!}{x^{n-\frac12}}\sum_{k=0}^{n-1}\frac{(-1)^{n-k-1}}{2^{2n-k-1}k!}\binom{2n-2-k}{n-1}\,x^{k/2}\bigl[e^{\sqrt x}-(-1)^ke^{-\sqrt x}\,\bigr]\,. \end{align*} The moral of the story is: it is a lot better to use power series!!!!! -
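A quick symbolic check of the power-series result from the question, $\lim_{x\to 0}f^{(n)}(x)=\frac{2\,n!}{(2n)!}$, for small $n$ (a verification sketch with sympy, independent of the closed-form derivation above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = 2 * sp.cosh(sp.sqrt(x))   # = exp(sqrt(x)) + exp(-sqrt(x))

for n in range(1, 6):
    lim = sp.limit(sp.diff(f, x, n), x, 0, '+')
    expected = 2 * sp.factorial(n) / sp.factorial(2 * n)
    print(n, lim, expected)   # the two values agree: 1, 1/6, 1/60, ...
```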
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.982136070728302, "perplexity": 350.92153284579877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659449.65/warc/CC-MAIN-20150417045739-00049-ip-10-235-10-82.ec2.internal.warc.gz"}
https://codeforces.com/blog/entry/6416
### HolkinPV's blog By HolkinPV, 7 years ago, translation, , Hello everybody) Today is coming regular Codeforces round #161 for Div.2 participants. Traditionally the others can take part out of the competition. The problems were prepared by authors: Pavel Kholkin (HolkinPV), Nikolay Kuznetsov (NALP) and Gerald Agapov (Gerald). Traditionally thanks to Michael Mirzayanov (MikeMirzayanov) for Codeforces system and Mary Belova (Delinur) for translating the problems. Also thanks to Rakhov Artem (RAD) and Vitaly Aksenov (Aksenov239) for their help. UPD: Today it is decided to use dynamic scoring system. But the problems will be sorted from low difficulty to high by authors' opinion! We wish everyone good luck, successful hacks and high rating! UPD2: the contest is over) hope you enoy it Congratulations to winners: UPD3: the tutorial is published, you can find it here • +86 » 7 years ago, # |   +15 Hey, I believe I just saw a post where blogger and PavelKunyavskiy are the writers of the contest! • » » 7 years ago, # ^ |   +18 That was a stupid joke. • » » » 7 years ago, # ^ |   0 Strangely enough, it appeared on the front page. • » » 7 years ago, # ^ |   +5 me too • » » 7 years ago, # ^ |   +8 I don't know anything about it. » 7 years ago, # |   -8 What will be the Rating Of problems? You have not mentioned i think you have to mention it in your blog if you are going to prepare the problems for any contest. » 7 years ago, # | ← Rev. 3 →   +6 I think is time to have some more information about the score and difficulty distributions :)UPD:Thank you. » 7 years ago, # |   +23 Wish epic failures to everyone! ^.^ » 7 years ago, # | ← Rev. 2 →   0 :/ » 7 years ago, # |   +5 Can somebody explain how solve problem D in O(N)? • » » 7 years ago, # ^ |   +9 It is guaranteed that each node of the graph is connected by the edges with at least k other nodes of the graph.Therefore it's possible to form cycles from any point (I think) So I do a DFS on node 1 • » » 7 years ago, # ^ | ← Rev. 2 →   +16 Let's go from vertex 1 and build a chain. At each iteration, if we are in vertex v, if exists some vertex u, that is in our chain at distance at least k, then we go to it and end the cycle. Else some vertex u exists such that it hasn't been visited yet, so we go to it. At some moment we'll end the cycle, because it's only finite number of vertices at all. • » » 7 years ago, # ^ |   +7 Build up the cycle as a path, until the last vertex of the path is the same as first one. Start with an arbitrary vertex. For the first K+1 vertices of the cycle, use a greedy approach — K times choose one vertex which is not in the path and connected to the last vertex of the path. There will always be one (because at most K-1 vertices apart from itself are in the path, so there must be at least 1 neighbour vertex left). Now, augment this path in the same way, until you can't do it. Then, take the last K vertices of the path; the last of them will have at least one neighbour other than those, but all his neighbours are in the path, so there must be a neighbour X of the last vertex of the path; add it to the end of the path. This way, there's a simple path in the graph, which starts and ends at X, and contains at least K vertices. BTW the time complexity is O(N+M), and it's optimal. • » » » 7 years ago, # ^ |   0 Thanks everyone for explain :) » 7 years ago, # |   +8 Does the dynamic scoring system take into account unofficial participants from div 1? • » » 7 years ago, # ^ |   +5 No. 
• » » 7 years ago, # ^ |   0 Look here: http://codeforces.ru/blog/entry/4172 » 7 years ago, # |   0 How to solve problem C • » » 7 years ago, # ^ |   +7 Case N=5 is clear. Bruteforce N=6. For N > 6, there are vertices a,b,d,e, all connected to vertex 1, and connected in the order in which they appear on the cycle: a-b-1-d-e; among them, only b and d are also connected. So you can find b and d — the only points which have 2 common neighbors with 1. You now have a part of the cycle: b-1-d. If you have a part A-B-C, then you can find the next vertex D on the cycle (A-B-C-D-...), because it's the only common neighbor of B and C other than A. Complexity: O(N). » 7 years ago, # |   0 Very fast System Testing :P » 7 years ago, # |   +6 well done everyone and what a brilliant problem C. :) can anyone explain any easy to code in C++ :) solution for problem C? I think this question is truly common between all the contestants :) thanks everyone. » 7 years ago, # |   +13 Currently most of the fastest solutions to the problem E are written in Java, you don't see it often :D » 7 years ago, # |   +1 Thanks for a good contest! » 7 years ago, # |   +3 There are some straight-forward backtracking codes for D which are getting ACed. Really curious how this is possible :/ • » » 7 years ago, # ^ |   +2 One dfs is enough to find a solution for this problem. That's why backtracking needs only one branch to find the answer so it works in O(N + M). • » » 7 years ago, # ^ | ← Rev. 3 →   +48 At least one AC solution fails at this test: 7 8 2 1 2 2 3 3 4 4 2 1 5 5 6 6 7 7 5 My guess is than in all tests vertex number 1 is on the needed cycle. • » » » 7 years ago, # ^ |   +8 I'm guessing a lot of cases can be made where the backtracking fails. I don't know how they made their testcases :/ • » » » 7 years ago, # ^ | ← Rev. 3 →   0 we could visiting and put time stamp on it, suppose you are visiting node i, it must connect with k nodes, if one of them not visited, continue visit this node, if all the k nodes are visited, now we have the solution, the start position is the one in these k nodes which have the smallest time stamp, the length from it to node i must larger than k. • » » » 7 years ago, # ^ |   +13 I've also found a code that fails in the case gen has given above. This case definitely should be added and rejudged to maintain fairness. (no matter how many minus i get :) ) • » » » 7 years ago, # ^ |   +1 I guess it would be good if you send this testcase with the failing code(which got AC) to Gerald or contest writers.Anyway I suppose test number 18 is something like this. since I saw some of the solutions which used maximal path that were trying to create the cycle with first node of the path(instead of the last). These solutions got WA in test 18. In the testcase answer shown to us there is no "1" in the answer so I suppose in this test first node wasn't actually in the cycle. » 7 years ago, # |   +11 First 10 Minutes in the contest i was happy that i solved A and B problems, but i shocked after that :) » 7 years ago, # |   +8 persianpars did impossible. congratulation persianpars. • » » 7 years ago, # ^ |   +8 thanks i still don't believe that i finished second » 7 years ago, # |   +9 ehhh... This is my first time that I think I can solve problem C in contest. Although I fail, but it gives me a good time. thanks !!!! » 7 years ago, # |   0 Nice contest! » 7 years ago, # |   0 Problem set was really good. I really enjoyed the contest. Best round for me so far. » 7 years ago, # | ← Rev. 3 →   0 Yeaaaaahhhh !! 
This is the first time I solved problem E — I guess it wasn't that hard :)) Also, it seems that in certain problems (like this one) the title is a hint (good to know :). My time complexity was O(n*m*k); actually it was O((n-2*k)*(m-2*k)*k). Can it be done faster than this?
• » » 7 years ago, # ^ | +3 I see something strange in your contest stats, it seems that the solution for B is still in the queue o.O
• » » » 7 years ago, # ^ | 0 I know. I got AC and then the status changed. I have no idea what the problem might be ... :-??
• » » 7 years ago, # ^ | 0 But the limits aren't good for this problem... if k = n/4 the complexity is n/2 * n/2 * n/4 = O(n^3), which should not work in 3 sec. I don't think there are faster solutions, because all the sources are like this.
» 7 years ago, # | 0 I solved 1 problem in this round (in the running contest) but my rating went down from 906 to 872. I also solved a problem in the running contest of Round #156 (Div. 2) but my rating wasn't increased. Why????
• » » 7 years ago, # ^ | ← Rev. 2 → +1 Your rating depends on how well other people did in the contest as well. Probably almost everyone solved at least 1 problem, so that's why your rating did not increase. I think if you want to increase your rating, you should learn a lot of things before taking the next contest.
• » » 7 years ago, # ^ | +1 Maybe because you need to solve more than just the first problem to increase the rating :D
» 7 years ago, # | +1 The editorial is published here
» 7 years ago, # | 0 For problem B, can somebody tell me whether this is a valid input? If yes, what is the correct output for it? 4 3 3 3 3 3
• » » 7 years ago, # ^ | +2 It's incorrect input because "It is guaranteed that all given squares are distinct."
• » » » 7 years ago, # ^ | 0 Ok, thanks.
» 6 years ago, # | -6 I posted my solution in Chinese, anyone can view it. Here is my blog.
» 6 years ago, # | ← Rev. 2 → 0 Hi, I am a noob and would like some help with understanding why I am getting a different result for problem B than what the server throws up (as erroneous output). This is happening for the 2nd time to my notice. Submission ID: 2961857, at test #7. My gcc gives as output exactly what is stated as the answer. If this is not the place to put this sort of question, please direct me to the appropriate area. Thanks and cheers!
• » » 6 years ago, # ^ | 0 Which gcc are you using?
• » » » 6 years ago, # ^ | 0 Version 4.6.3.
• » » » » 6 years ago, # ^ | 0 I am also using the same version of gcc but mine got accepted :P
• » » » » » 6 years ago, # ^ | 0 With the code from my submission? I've seen another submission of mine being rejected like this once before. Could this be a bug? Or am I doing something wrong?
• » » » » » » 6 years ago, # ^ | 0 Not a bug!! Maybe you were doing something wrong.
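The greedy path-extension idea for problem D sketched in the comments above can be illustrated in a few lines. The snippet below is written in R purely as an illustration of the idea (the function name and the toy adjacency list are made up); it is not a contest-ready solution and ignores input parsing and the original time limits.

```r
# Greedy path extension: every vertex has degree >= k, so the path can be
# extended until all neighbours of the current endpoint already lie on it;
# the earliest such neighbour then closes a cycle of length >= k + 1.
find_long_cycle <- function(adj) {
  n <- length(adj)
  pos <- rep(NA_integer_, n)          # position of each vertex on the path
  path <- integer(0)
  v <- 1L                             # start anywhere, e.g. vertex 1
  repeat {
    path <- c(path, v)
    pos[v] <- length(path)
    unvisited <- adj[[v]][is.na(pos[adj[[v]]])]
    if (length(unvisited) == 0) break # endpoint's neighbours are all on the path
    v <- unvisited[1]
  }
  u <- adj[[v]][which.min(pos[adj[[v]]])]  # neighbour that joined the path first
  path[pos[u]:length(path)]                # the cycle u -> ... -> v (-> u)
}

# Toy example: the 5-cycle 1-2-3-4-5 with minimum degree 2
adj <- list(c(2L, 5L), c(1L, 3L), c(2L, 4L), c(3L, 5L), c(4L, 1L))
find_long_cycle(adj)   # returns 1 2 3 4 5
```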
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46275389194488525, "perplexity": 1272.8522459158808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526517.67/warc/CC-MAIN-20190720132039-20190720154039-00358.warc.gz"}
https://www.physicsforums.com/threads/coherence-in-density-matrix-formalism.788407/
# Coherence in density matrix formalism

1. Dec 19, 2014 ### HJ Farnsworth Hello, In the density matrix formalism I have read in numerous places that coherence is identified with the off-diagonal components of the density matrix. The motivation usually given for this is that if a state interacts with the environment in such a way that the basis-state amplitudes are phase-shifted through this interaction, then the off-diagonal components will average to $0$ if the degrees to which the individual basis amplitudes are phase-shifted are uncorrelated. However, writing $\rho = \sum_{\psi}P(\psi)|\psi\rangle\langle\psi|$ with, in the $\{ |m\rangle \}$ representation, $|\psi\rangle = \sum_{m}|m\rangle\langle m|\psi\rangle$, it's easy to think of trivial examples where the coherence is $0$ for pure states, which to me seems absurd. It is also easy to think of examples where the coherence is basis-dependent, which also seems strange to me. For instance a pure spin-up state as written in the basis $\{ |x\uparrow \rangle , |x \downarrow \rangle \}$ versus the basis $\{ |z \uparrow \rangle , |z \downarrow \rangle \}$ gives, as expected, two different representations of the density matrix: $\rho = \begin{pmatrix}1 & 0\\0 & 0\end{pmatrix}$ in the $z$-basis and $\rho = \begin{pmatrix}1/2 & 1/2\\1/2 & 1/2\end{pmatrix}$ in the $x$-basis. So here we have a pure state which in one basis has a coherence of $0$ and in another has a coherence of $1/2$ - it seems to me that a good definition of coherence would identify the coherence of a pure state as $1$ regardless of basis. Furthermore, every time I have seen this definition of coherence of a state, the only examples given are for $2\times 2$ density matrices. How does the definition extend to density matrices written for larger bases - do we only define the coherence for one pair of basis amplitudes at a time?

I have a guess at the answers to the above questions, but haven't been able to verify it. The coherence could be a comparative property between substates only, not a property of the state as a whole. In this case, a change of basis giving different coherences makes a lot more sense, and the extension of the definition to larger density matrices becomes trivial. Furthermore, the spin-$z$ case having a coherence of $0$ above would also make more sense, since if the energy of the spin-up state is nonzero, then the complex amplitude of $| \uparrow \rangle$ will oscillate, while that of $| \downarrow \rangle$ won't, and instead will always remain $0$. So, the phases of the two complex amplitudes won't be correlated at all. With this, the value of coherence being zero versus nonzero has an interpretation in terms of complex amplitude correlation between substates. However, it's still hard to see an interpretation of different nonzero values for coherence - e.g., what does a coherence of $1/2$ versus $1/3$ tell you? Anyway, I think I've said plenty to illustrate my confusion. Any help on the topic would be very much appreciated. Thanks very much. -HJ Farnsworth

2. Dec 20, 2014 ### kith Talking about coherence can be a bit confusing because the term has many slightly different meanings and is often used informally. Mathematically well-defined are the terms "coherences" for the off-diagonal elements of the density matrix and "purity" for the trace of the density matrix squared. The purity of a density matrix doesn't depend on the basis. It is one for pure states and minimal for a maximally mixed state (which is represented by a density matrix that is proportional to the identity matrix).
So if you want to talk about the "coherence" of a (not necessarily pure) state, you may want to use the purity. As you already suspected, the off-diagonal elements or coherences refer to the coherence between basis states. A non-zero coherence between two states means that you cannot decompose your ensemble into a merely statistical mixture of sub-ensembles such that no sub-ensemble contains both basis states. Sakurai has a number of nice examples on this in his chapter on angular momentum. He probably doesn't use modern terminology though. Last edited: Dec 20, 2014

3. Dec 22, 2014 ### HJ Farnsworth Thanks kith, that's great. So to make sure I understand you correctly, a simple interpretation that follows from the definition of coherence in the original post would be something like, "coherence between two basis states is a measure of the degree to which those two basis states must be regarded as part of the same, additively inseparable ensemble". Other concepts of coherence that I have seen relate coherence between two waves to the relative frequency and phase difference of those waves - i.e., "two waves are coherent iff they have the same frequency and constant phase difference". I think that I can relate the interpretation above to this interpretation. First, the additive inseparability interpretation motivates this guess: "The statement that the coherence between two basis states $|a \rangle$ and $|b \rangle$ in a density matrix is $0$ is equivalent to the statement that all of the quantum states that were classically weighted in writing the density matrix had either the amplitude for $|a \rangle$ equal to $0$, or that for $|b \rangle$ equal to $0$ - none of them had the amplitudes for both $|a \rangle$ and $|b \rangle$ nonzero." The reason the additive inseparability interpretation motivates this guess is that, if the states composing the ensemble have this property, then it is obvious that the coherence will be $0$ and the density matrix will be additively separable, so it is at least a sufficient condition for incoherence.

Testing this guess, let $|\psi_{1}\rangle =a_{1}|a\rangle + b_{1}|b\rangle$, and write $\rho = \sum_{i}P_{i}|\psi_{i}\rangle \langle \psi_{i} |=\begin{pmatrix} \cdot & a_{1}b_{1}^{*}+\cdots\\a_{1}^{*}b_{1}+\cdots & \cdot \end{pmatrix},$ showing only the off-diagonal entries. It is immediately obvious that the guess was wrong - e.g., if $|\psi_{2}\rangle =-a_{1}|a\rangle + b_{1}|b\rangle$, then even if both $a_{1}$ and $b_{1}$ are nonzero, we could have off-diagonal matrix elements of $0$. However, note that if this is the case, then the phase difference between the two amplitudes in $\psi_{1}$ is exactly cancelled out by that in $\psi_{2}$: $|\psi_{2}\rangle =-a_{1}|a\rangle + b_{1}|b\rangle=e^{i\pi}a_{1}|a\rangle + b_{1}|b\rangle$, vs. $|\psi_{1}\rangle =e^{i0}a_{1}|a\rangle + b_{1}|b\rangle$. The off-diagonal terms from $|\psi_{1} \rangle$ could also be cancelled out by multiple other states in the ensemble - the only requirement for this to happen is that the phase differences between $|a\rangle$ and $|b\rangle$ in each state of the ensemble combine so as to cancel each other out (I am ignoring the different relative amplitudes between $|a\rangle$ and $|b\rangle$ among states; different amplitudes will just put a sort of weight on different relative phase differences between the two basis states).
Based on this, I can replace the above guess with, "The statement that the coherence between two basis states $|a \rangle$ and $|b \rangle$ in a density matrix is $0$ is equivalent to the statement that all of the quantum states that were classically weighted in writing the density matrix had relative phases between the two basis states which ultimately cancelled each other out." Relating this to the "common" coherence definition two paragraphs above, I could say something like "two basis states are [completely] coherent, in the density matrix sense, if their phase difference is constant throughout the states of the ensemble". (I brushed frequency under the rug since I'm considering everything at a single time, say $t=0\implies \omega t=0$.) This is in pretty good analogy to, e.g., temporal coherence in classical optics, where two waves are coherent if they keep a constant phase difference at a given point in space as the waves propagate: temporal coherence between two waves throughout time is analogous to density matrix coherence between two basis states among the states in the ensemble.

Any thoughts on this thought process? Decent analysis, complete BS, or am I attempting to go too far in getting an intuition on what is simply a mathematical definition?

A half-follow-up and half-new-question: Regarding the first sentence in kith's response, the other two definitions of coherence that I have frequently come across are spatial/temporal coherence, defined in terms of the autocorrelation function in classical optics, and coherent states, defined as eigenstates of the harmonic oscillator (HO) annihilation operator in quantum mechanics. I think I have begun to answer this for myself in the above paragraph, but does anyone know the degree to which these three definitions of coherence/coherent states are related, or of a good source explaining, mathematically and interpretively, the relations among these concepts? Thanks again. Last edited: Dec 22, 2014

4. Dec 25, 2014 ### HJ Farnsworth Sorry for bumping this, but I do want to know whether people who have a bit more experience than me with this topic think that my conceptual understanding in the previous post holds water. Any thoughts? Thanks.
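As a quick worked check of the basis-independence of the purity mentioned by kith above, using the two representations of the pure spin-up state from the opening post:

$$\mathrm{Tr}\left(\rho_z^2\right) = \mathrm{Tr}\begin{pmatrix}1 & 0\\0 & 0\end{pmatrix} = 1, \qquad \rho_x^2 = \begin{pmatrix}1/2 & 1/2\\1/2 & 1/2\end{pmatrix}^2 = \begin{pmatrix}1/2 & 1/2\\1/2 & 1/2\end{pmatrix}, \qquad \mathrm{Tr}\left(\rho_x^2\right) = 1.$$

Both bases give purity $1$, as expected for a pure state, even though the off-diagonal "coherences" differ between the two representations. For comparison, the maximally mixed state $\rho = \tfrac{1}{2}I$ has purity $\mathrm{Tr}(\rho^2) = \tfrac{1}{2}$, the minimum possible for a two-level system.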
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9465137720108032, "perplexity": 325.62955816101226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948588072.75/warc/CC-MAIN-20171216123525-20171216145525-00329.warc.gz"}
https://www.quantstart.com/articles/Serial-Correlation-in-Time-Series-Analysis/
Serial Correlation in Time Series Analysis In last week's article we looked at Time Series Analysis as a means of helping us create trading strategies. In this article we are going to look at one of the most important aspects of time series, namely serial correlation (also known as autocorrelation). Before we dive into the definition of serial correlation we will discuss the broad purpose of time series modelling and why we're interested in serial correlation. When we are given one or more financial time series we are primarily interested in forecasting or simulating data. It is relatively straightforward to identify deterministic trends as well as seasonal variation and decompose a series into these components. However, once such a time series has been decomposed we are left with a random component. Sometimes such a time series can be well modelled by independent random variables. However, there are many situations, particularly in finance, where consecutive elements of this random component time series will possess correlation. That is, sequential points in the remaining series affect each other in a dependent manner. One major example occurs in mean-reverting pairs trading. Mean-reversion shows up as correlation between sequential variables in time series. Our task as quantitative modellers is to try and identify the structure of these correlations, as they will allow us to markedly improve our forecasts and thus the potential profitability of a strategy. In addition, identifying the correlation structure will improve the realism of any simulated time series based on the model. This is extremely useful for improving the effectiveness of the risk management components of the strategy implementation. When sequential observations of a time series are correlated in the manner described above we say that serial correlation (or autocorrelation) exists in the time series. Now that we have outlined the usefulness of studying serial correlation we need to define it in a rigorous mathematical manner. Before we can do that we must build on simpler concepts, including expectation and variance. ## Expectation, Variance and Covariance Many of these definitions will be familiar to most QuantStart readers, but I am going to outline them specifically for purposes of consistent notation. The first definition is that of the expected value or expectation: #### Expectation The expected value or expectation, $E(x)$, of a random variable $x$ is its mean average value in the population. We denote the expectation of $x$ by $\mu$, such that $E(x) = \mu$. Now that we have the definition of expectation we can define the variance, which characterises the "spread" of a random variable: #### Variance The variance of a random variable is the expectation of the squared deviations of the variable from the mean, denoted by $\sigma^2 (x) = E[(x-\mu)^2]$. Notice that the variance is always non-negative. This allows us to define the standard deviation: #### Standard Deviation The standard deviation of a random variable $x$, $\sigma (x)$, is the square root of the variance of $x$. Now that we've outlined these elementary statistical definitions we can generalise the variance to the concept of covariance between two random variables. Covariance tells us how linearly related these two variables are: #### Covariance The covariance of two random variables $x$ and $y$, each having respective expectations $\mu_x$ and $\mu_y$, is given by $\sigma(x,y) = E[(x-\mu_x)(y-\mu_y)]$.
Covariance tells us how two variables move together. However, since we are in a statistical situation we do not have access to the population means $\mu_x$ and $\mu_y$. Instead we must estimate the covariance from a sample. For this we use the respective sample means $\bar{x}$ and $\bar{y}$. If we consider a set of $n$ pairs of elements of random variables from $x$ and $y$, given by $(x_i, y_i)$, the sample covariance, $\text{Cov}(x,y)$ (also sometimes denoted by $q(x,y)$) is given by: \begin{eqnarray} \text{Cov}(x,y) = \frac{1}{n-1}\sum^n_{i=1} (x_i - \bar{x})(y_i - \bar{y}) \end{eqnarray} Note: Some of you may be wondering why we divide by $n-1$ in the denominator, rather than $n$. This is a valid question! The reason we choose $n-1$ is that it makes $\text{Cov}(x,y)$ an unbiased estimator. ### Example: Sample Covariance in R This is actually our first usage of R on QuantStart. I am not going to discuss the installation procedure of R here, but I will do so in later articles. Thankfully it is much more straightforward than installing Python! Assuming you have R installed you can open up the R terminal. In the following commands we are going to simulate two vectors of length 100, each with a linearly increasing sequence of integers with some normally distributed noise added. Thus we are constructing linearly associated variables by design. We will firstly construct a scatter plot and then calculate the sample covariance using the cov function. In order to ensure you see exactly the same data as I do, we will set a random seed of 1 and 2 respectively for each variable: > set.seed(1) > x <- seq(1,100) + 20.0*rnorm(1:100) > set.seed(2) > y <- seq(1,100) + 20.0*rnorm(1:100) > plot(x,y) The plot is as follows: Scatter plot of two linearly increasing variables with normally distributed noise There is a relatively clear association between the two variables. We can now calculate the sample covariance: > cov(x,y) 681.6859 The sample covariance is given as 681.6859. One drawback of using the covariance to estimate linear association between two random variables is that it is a dimensional measure. That is, it isn't normalised by the spread of the data and thus it is hard to draw comparisons between datasets with large differences in spread. This motivates another concept, namely correlation. ## Correlation Correlation is a dimensionless measure of how two variables vary together, or "co-vary". In essence, it is the covariance of two random variables normalised by their respective spreads. The (population) correlation between two variables is often denoted by $\rho(x,y)$: \begin{eqnarray} \rho(x,y)= \frac{E[(x-\mu_x)(y-\mu_y)]}{\sigma_x \sigma_y} = \frac{\sigma(x,y)}{\sigma_x \sigma_y} \end{eqnarray} The denominator product of the two spreads will constrain the correlation to lie within the interval $[-1,1]$: • A correlation of $\rho(x,y) = +1$ indicates exact positive linear association • A correlation of $\rho(x,y) = 0$ indicates no linear association at all • A correlation of $\rho(x,y) = -1$ indicates exact negative linear association As with the covariance, we can define the sample correlation, $\text{Cor}(x,y)$: \begin{eqnarray} \text{Cor}(x,y) = \frac{\text{Cov}(x,y)}{\text{sd}(x)\, \text{sd}(y)} \end{eqnarray} where $\text{Cov}(x,y)$ is the sample covariance of $x$ and $y$, while $\text{sd}(x)$ is the sample standard deviation of $x$. ### Example: Sample Correlation in R We will use the same $x$ and $y$ vectors of the previous example.
The following R code will calculate the sample correlation: > cor(x,y) 0.5796604 The sample correlation is given as 0.5796604 showing a reasonably strong positive linear association between the two vectors, as expected. ## Stationarity in Time Series Now that we outlined the general definitions of expectation, variance, standard deviation, covariance and correlation we are in a position to discuss how they apply to time series data. Firstly, we will discuss a concept known as stationarity. This is an extremely important aspect of time series and much of the analysis carried out on financial time series data will concern stationarity. Once we have discussed stationarity we are in a position to talk about serial correlation and construct some correlogram plots. We will begin by trying to apply the above definitions to time series data, starting with the mean/expectation: #### Mean of a Time Series The mean of a time series $x_t$, $\mu(t)$, is given as the expectation $E(x_t)=\mu(t)$. • $\mu = \mu(t)$, i.e. the mean (in general) is a function of time. • This expectation is taken across the ensemble population of all the possible time series that could have been generated under the time series model. In particular, it is NOT the expression $(x_1 + x_2 + ... + x_k)/k$ (more on this below). This definition is useful when we are able to generate many realisations of a time series model. However in real life this is usually not the case! We are "stuck" with only one past history and as such we will often only have access to a single historical time series for a particular asset or situation. So how do we proceed if we wish to estimate the mean, given that we don't have access to these hypothetical realisations from the ensemble? Well, there are two options: • Simply estimate the mean at each point using the observed value. • Decompose the time series to remove any deterministic trends or seasonality effects, giving a residual series. Once we have this series we can make the assumption that the residual series is stationary in the mean, i.e. that $\mu(t) = \mu$, a fixed value independent of time. It then becomes possible to estimate this constant population mean using the sample mean $\bar{x} = \sum^{n}_{t=1} \frac{x_t}{n}$. #### Stationary in the Mean A time series is stationary in the mean if $\mu(t)=\mu$, a constant. Now that we've seen how we can discuss expectation values we can use this to flesh out the definition of variance. Once again we make the simplifying assumption that the time series under consideration is stationary in the mean. With that assumption we can define the variance: #### Variance of a Time Series The variance $\sigma^2 (t)$ of a time series model that is stationary in the mean is given by $\sigma^2 (t) = E[(x_t-\mu)^2]$. This is a straightforward extension of the variance defined above for random variables, except that $\sigma^2 (t)$ is a function of time. Importantly, you can see how the definition strongly relies on the fact that the time series is stationary in the mean (i.e. that $\mu$ is not time-dependent). You might notice that this definition leads to a tricky situation. If the variance itself varies with time how are we supposed to estimate it from a single time series? As before, the presence of $E(..)$ requires an ensemble of time series and yet we will often only have one! Once again, we simplify the situation by making an assumption. 
In particular, and as with the mean, we assume a constant population variance, denoted $\sigma^2$, which is not a function of time. Once we have made this assumption we are in a position to estimate its value using the sample variance definition above: \begin{eqnarray} \text{Var(x)} = \frac{\sum(x_t - \bar{x})^2}{n-1} \end{eqnarray} Note for this to work we need to be able to estimate the sample mean, $\bar{x}$. In addition, as with the sample covariance defined above, we must use $n-1$ in the denominator in order to make the sample variance an unbiased estimator. #### Stationary in the Variance A time series is stationary in the variance if $\sigma^2 (t)=\sigma^2$, a constant. This is where we need to be careful! With time series we are in a situation where sequential observations may be correlated. This will have the effect of biasing the estimator, i.e. over- or under-estimating the true population variance. This will be particularly problematic in time series where we are short on data and thus only have a small number of observations. In a high correlation series, such observations will be close to each other and thus will lead to bias. In practice, and particularly in high-frequency finance, we are often in a situation of having a substantial number of observations. The drawback is that we often cannot assume that financial series are truly stationary in the mean or stationary in the variance. As we progress with this article series and develop more sophisticated models, we will address these issues in order to improve our forecasts and simulations. We are now in a position to apply our time series definitions of mean and variance to that of serial correlation. ## Serial Correlation The essence of serial correlation is that we wish to see how sequential observations in a time series affect each other. If we can find structure in these observations then it will likely help us improve our forecasts and simulation accuracy. This will lead to greater profitability in our trading strategies or better risk management approaches. Firstly, another definition. If we assume, as above, that we have a time series that is stationary in the mean and stationary in the variance then we can talk about second order stationarity: #### Second Order Stationary A time series is second order stationary if the correlation between sequential observations is only a function of the lag, that is, the number of time steps separating each sequential observation. Finally, we are in a position to define serial covariance and serial correlation! #### Autocovariance of a Time Series If a time series model is second order stationary then the (population) serial covariance or autocovariance, of lag $k$, $C_k = E[(x_t-\mu)(x_{t+k}-\mu)]$. The autocovariance $C_k$ is not a function of time. This is because it involves an expectation $E(..)$, which, as before, is taken across the population ensemble of possible time series realisations. This means it is the same for all times $t$. As before this motivates the definition of serial correlation or autocorrelation, simply by dividing through by the square of the spread of the series. This is possible because the time series is stationary in the variance and thus $\sigma^2 (t) = \sigma^2$: #### Autocorrelation of a Time Series The serial correlation or autocorrelation of lag $k$, $\rho_k$, of a second order stationary time series is given by the autocovariance of the series normalised by the product of the spread. That is, $\rho_k = \frac{C_k}{\sigma^2}$. 
Note that $\rho_0 = \frac{C_0}{\sigma^2} = \frac{E[(x_t - \mu)^2]}{\sigma^2} = \frac{\sigma^2}{\sigma^2} = 1$. That is, the first lag of $k=0$ will always give a value of unity. As with the above definitions of covariance and correlation, we can define the sample autocovariance and sample autocorrelation. In particular, we denote the sample autocovariance with a lower-case $c$ to differentiate between the population value given by an upper-case $C$. The sample autocovariance function $c_k$ is given by: \begin{eqnarray} c_k = \frac{1}{n} \sum^{n-k}_{t=1} (x_t - \bar{x})(x_{t+k} - \bar{x}) \end{eqnarray} The sample autocorrelation function $r_k$ is given by: \begin{eqnarray} r_k = \frac{c_k}{c_0} \end{eqnarray} Now that we have defined the sample autocorrelation function we are in a position to define and plot the correlogram, an essential tool in time series analysis. ## The Correlogram A correlogram is simply a plot of the autocorrelation function for sequential values of lag $k=0,1,...,n$. It allows us to see the correlation structure in each lag. The main usage of correlograms is to detect any autocorrelation subsequent to the removal of any deterministic trends or seasonality effects. If we have fitted a time series model then the correlogram helps us justify that this model is well fitted or whether we need to further refine it to remove any additional autocorrelation. Here is an example correlogram, plotted in R using the acf function, for a sequence of normally distributed random variables. The full R code is as follows: > set.seed(1) > w <- rnorm(100) > acf(w) Correlogram plotted in R of a sequence of normally distributed random variables There are a few notable features of the correlogram plot in R: • Firstly, since the sample correlation of lag $k=0$ is given by $r_0 = \frac{c_0}{c_0} = 1$ we will always have a line of height equal to unity at lag $k=0$ on the plot. In fact, this provides us with a reference point upon which to judge the remaining autocorrelations at subsequent lags. Note also that the y-axis ACF is dimensionless, since correlation is itself dimensionless. • The dotted blue lines represent boundaries upon which if values fall outside of these, we have evidence against the null hypothesis that our correlation at lag $k$, $r_k$, is equal to zero at the 5% level. However we must take care because we should expect 5% of these lags to exceed these values anyway! Further we are displaying correlated values and hence if one lag falls outside of these boundaries then proximate sequential values are more likely to do so as well. In practice we are looking for lags that may have some underlying reason for exceeding the 5% level. For instance, in a commodity time series we may be seeing unanticipated seasonality effects at certain lags (possibly monthly, quarterly or yearly intervals). Here are a couple of examples of correlograms for sequences of data. ### Example 1 - Fixed Linear Trend The following R code generates a sequence of integers from 1 to 100 and then plots the autocorrelation: > w <- seq(1, 100) > acf(w) The plot is as follows: Correlogram plotted in R of a sequence of integers from 1 to 100 Notice that the ACF plot decreases in an almost linear fashion as the lags increase. Hence a correlogram of this type is clear indication of a trend. 
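Before moving on, it is easy to check what acf is reporting here by evaluating the sample autocorrelation directly from the formulas for $c_k$ and $r_k$ given above. The short sketch below does this for the trended series from Example 1 (the helper name sample_acf is my own, not part of R):

```r
# Sample autocorrelation at lag k, computed directly from the definitions
# of c_k and r_k above
sample_acf <- function(x, k) {
  n <- length(x)
  xbar <- mean(x)
  c_k <- sum((x[1:(n - k)] - xbar) * (x[(1 + k):n] - xbar)) / n
  c_0 <- sum((x - xbar)^2) / n
  c_k / c_0
}

w <- seq(1, 100)
sample_acf(w, 1)                        # manual value at lag 1
acf(w, lag.max = 1, plot = FALSE)$acf   # should agree with R's built-in acf
```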
### Example 2 - Repeated Sequence The following R code generates a repeated sequence of numbers with period $p=10$ and then plots the autocorrelation: > w <- rep(1:10, 10) > acf(w) The plot is as follows: Correlogram plotted in R of a sequence of integers from 1 to 10, repeated 10 times We can see that at lag 10 and 20 there are significant peaks. This makes sense, since the sequences are repeating with a period of 10. Interestingly, note that there is a negative correlation at lags 5 and 15 of exactly -0.5. This is very characteristic of seasonal time series and behaviour of this sort in a correlogram is usually indicative that seasonality/periodic effects have not fully been accounted for in a model. ## Next Steps Now that we've discussed autocorrelation and correlograms in some depth, in future articles we will be moving on to linear models and begin the process of forecasting. While linear models are far from the state of the art in time series analysis, we need to develop the theory on simpler cases before we can apply it to the more interesting non-linear models that are in use today.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9269242286682129, "perplexity": 398.2583008104195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362879.45/warc/CC-MAIN-20211203121459-20211203151459-00573.warc.gz"}
http://www.physicsforums.com/showthread.php?t=190807
# Apparent & Actual Depth by nblu Tags: actual, apparent, depth

P: 56 http://physicsforums.com/archive/index.php/t-77612.html Here's a link to the same question, which has already been asked by someone else. The question asks to calculate the "actual" depth, and once I had read all the replies on that page, the author seemed to calculate the answer. I've tried it myself and got the same answer. However, I'm a little confused by the given value "1.0 m" in the question. Is it just there to confuse the reader? Because it has not been included in any of the calculations. Sorry for posting too many questions :( and thank you in advance.

Mentor P: 41,477 I'm not quite understanding your concern. All the distances in the problem are given in meters.

P: 56 Quote by Doc Al: "I'm not quite understanding your concern. All the distances in the problem are given in meters." Oh, what I meant was: was that 1.0 m necessary for calculating the final answer?

Mentor P: 41,477 Ah... Are you referring to the statement: "She estimates that her eyes are about 1.0 m above the water's surface"? If so, then no, that fact seems irrelevant to the problem.

P: 56 I'm very sorry to hold you up, Doc. I've been trying to figure out a diagram for that question; would you be able to take a look? Thank you very much and sorry T_T

Mentor P: 41,477 Your diagram looks good to me.

P: 56 Quote by Doc Al: "Your diagram looks good to me." Thank you, Doc, I really appreciate your help! :) Have a good weekend.

Mentor P: 41,477 My pleasure.
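For reference, the relation this kind of problem relies on (in the usual looking-nearly-straight-down, small-angle approximation, for an observer in air looking into water) is

$$d_{\text{apparent}} = \frac{n_{\text{air}}}{n_{\text{water}}}\, d_{\text{actual}} \qquad\Longrightarrow\qquad d_{\text{actual}} = \frac{n_{\text{water}}}{n_{\text{air}}}\, d_{\text{apparent}} \approx 1.33\, d_{\text{apparent}}.$$

Only the apparent depth and the two refractive indices enter, which is why the observer's eye height above the surface (the 1.0 m in the question) drops out of the calculation.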
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9133210778236389, "perplexity": 894.6041176983372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921550.2/warc/CC-MAIN-20140901014521-00051-ip-10-180-136-8.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1007/s10802-021-00885-y
Callous-unemotional (CU) traits have been found to be closely related to the affective dimension of psychopathy (Hare & Neumann, 2008; Kimonis et al., 2015). Different authors argue that CU traits are useful for identifying a high-risk group (CU +) within children with Conduct Disorder (CD; Frick & White, 2008). This group is characterized by marked differences in neurocognitive, emotional, and behavioral functioning, including lower autonomic responsiveness to empathy-inducing stimuli (Frick, & Viding, 2009; de Wied et al., 2012), disturbances in affective theory of mind (Sebastian et al., 2012), lower sensitivity to punishment (Frick et al., 2014), and changes in brain regions involved in emotion and learning (e.g., amygdala, Blair et al., 2014). Collectively, these characteristics are assumed to contribute to the more violent, chronic, and recidivist pattern of antisocial behavior exhibited by youth with high CU traits and are an important target for intervention (Cecil et al., 2018). Furthermore, it has been demonstrated that treatment non-responders have significantly higher CU levels than responders (e.g., Falkenbach et al., 2003; Gretton et al., 2001; Hawes & Dadds, 2005; Masi et al., 2013; O'Neill et al., 2003; Spain et al., 2004; Waschbusch et al., 2007). Moreover, CU traits in school-aged children predict later criminal and antisocial behavior in adulthood, even after controlling for CD severity and onset (McMahon et al., 2010). For these reasons, the DSM-5 revisions (American Psychiatric Association, 2013) added the possibility of additional coding (descriptive features specifier) "with Limited Prosocial Emotions" (LPE) to the CD diagnosis. To warrant this additional coding, at least two of the four specifiers must occur within the same time period and across different relationships and situations: (a) lack of remorse or guilt; (b) callous lack of empathy; (c) unconcerned about performance; (d) shallow or deficient affect. These four criteria closely approximate the affective dimension of psychopathy in adult samples (Hare & Neumann, 2008). Accordingly, the DSM-5 follows a categorical conceptualization (with specifier [high risk] vs. without specifier) of CU traits. ## Latent Structure of CU Traits In addition to conceptualizing CU traits as categorical (i.e., identifying high-risk individuals who score above a certain cut-off value), they can also be understood as forming a latent continuum. A number of studies have been conducted to investigate if certain psychiatric disorders consist of discrete categories of behaviors, or if they rather form a continuum connecting extreme forms of behavioral traits on a single dimension (Haslam et al., 2020). The issue of whether a phenomenon (e.g., a mental disorder) is appropriately conceptualized as "dimensional" (i.e., as manifestations along a continuum of behavioral characteristics) or as discrete categorical entities has important implications for research, theory, and practice (Ruscio & Ruscio, 2004). For example, the latent status of a construct is important for the classification of individuals. If the underlying construct is continuous, the convention for classification into dichotomous groups (e.g., treatment vs. no treatment) must be derived based on certain criteria that are not part of the diagnosis (external validation criteria). If, on the other hand, a true categorical latent structure is present, providing clinically relevant cut-off values to differentiate the corresponding groups appears to be an essential target. 
Furthermore, identifying the latent structure of a phenomenon is also important to guide research into its etiopathogenesis (i.e., the cause and development of an atypical condition or disease). It can be argued that a dimensional structure is rather generated by a multitude of different risk factors through addition and interaction. On the other hand, existence of categorical latent structure can result from a specific etiology or developmental bifurcation (see Meehl, 1995; Ruscio et al., 2006). Moreover, in the context of prognostic studies (i.e., using CU traits as an explanatory factor) or studies on etiological factors (i.e., CU traits as the outcome), the latent status of a phenomenon seems to be of particular importance and should influence the selection of the appropriate statistical procedures. To determine whether the latent structure of a construct is best conceptualized as dimensional or categorical, taxometric methods are often used. Taxometric techniques were originally discussed by Paul E. Meehl to test his conjecture that a discrete latent variable ("taxon") underlies vulnerability to schizophrenia (Golden & Meehl, 1979). Meehl (1995) introduced a fundamental feature into modern taxometric analyses. Several nonredundant data-analytic procedures (see Ruscio et al., 2011 for a detailed description) are applied and the final interpretation of the latent structure of the construct are based on the convergence among these procedures. A significant methodological development in taxometric analysis represents the introduction of a systematized approach to taxometric inference by Ruscio et al. (2007). These authors developed a procedure in which taxometric plots based on observed data are compared with plots from parallel analyses of matched (e.g., sample size, marginal distributions, correlation matrix) simulated comparison datasets generated from a population of data using a taxonomic or dimensional latent structural model. In addition, the authors developed an index (Comparison Curve Fit Index, CCFI) which quantifies the similarity of the observed curves from the simulated curves. A CCFI value < 0.45 indicates a dimensional structure, a CCFI value of > 0.55 indicates a categorical structure. Values between 0.45 and 0.55 are considered ambiguous. The CCFI value can be calculated independently for the different taxometric procedures. A final interpretation is then usually based on a mean CCFI value (Ruscio et al., 2018). This method of simulated comparative data set and the use of CCFI have become almost universally accepted (Haslam et al., 2020). A number of previous taxometric studies consistently support the dimensionality of psychopathy in adolescents (Edens et al., 2011; Murrie et al., 2007; Vasey et al., 2005; Walters, 2014). However, to the authors’ best knowledge only one study has examined the latent structure of CU traits so far. Herpers et al. (2017) analyzed the data of N = 979 Dutch children and adolescents using taxometric analysis. The results of their study, namely the Comparison Curve Fit Index (CCFI; Ruscio et al., 2007), point to a dimensional latent structure of CU traits. However, a number of limitations apply to the Herpers et al. study. The authors did not provide any information on what type of indicator they used and whether the requirements for the analysis were met (i.e., within-group correlations, indicator validity, number of indicators, number of ordered categories). 
In addition, the estimated baseline prevalence of the possible taxon subgroup, the estimation method and subgroup analyses (e. g. regarding gender) were not provided. In the present study, we replicate the taxometric analysis of CU traits, avoiding the limitations of the study by Herpers and colleagues. Data for this analysis were obtained from a representative sample of ninth graders in Germany. Following recent developments in the methodology of taxometric analysis, we will use a new taxometric approach developed by Ruscio et al. (2018), the CCFI profile method. Rather than estimating a putative taxon base rate and using that estimate to generate the taxon comparative data, the CCFI profile method replicates the analysis with each base rate estimate between 0.025 and 0.975 in increments of 0.025. ## Method ### Sampling Method The following analysis uses child-report data from ninth grade students in Germany originating from a periodically conducted representative survey (see Kliem et al., 2020), carried out by the Criminological Research Institute of Lower Saxony in spring 2015. The Ministry of Education of Lower Saxony (this constitutes the state’s educational authority) approved the survey and provided ethics auditing. The survey was strictly anonymized – neither names, nor private or school addresses were obtained. The study was conducted in accordance with the World Medical Association’s (WMA) Declaration of Helsinki. The survey was carried out by trained test administrators within a classroom setting and was completed in a time frame of two school lessons (90 min). The students’ parents received an information leaflet beforehand, which included a request for written consent for the participation of their child and provided them with information about the institution conducting the study, as well as aims, methods and financing of the study. Furthermore, the students themselves could also independently refuse to participate, despite the existing consent given by their parents. Students were informed that their participation in the survey is entirely voluntary and anonymous and that they could withdraw their participation consent at any time without any negative consequences. Furthermore, they were informed of their right to skip individual questions within the survey and were encouraged to speak to a counsellor, school psychologist or an anonymous crisis hotline if they were to feel negatively affected by partaking in the survey. Of the N = 3,878 students who participated, 51.4% are female (n = 1,992 individuals). The mean age is M = 14.9 years (SD = 0.71), with an age range of 13 to 19 years. N = 926 (23.9%) of the respondents have a migration background (i.e., students or at least one of their parents were not born in Germany or do not have German citizenship). ### Measures The Inventory of Callous-Unemotional Traits (ICU) by Frick (2004) can be considered the current standard for assessing CU traits (e. g. Cardinale & Marsh, 2020; Frick & Ray, 2015; Frick et al., 2014; Ray & Frick, 2020). The ICU is based on four items of the CU scale of Frick and Hare’s Antisocial Process Screening Device (APSD; Frick & Hare, 2001). These four original APSD items formed the basis of the four subscales Uncaring, Unemotional, Callous and Careless. These subscales correspond to the LPE dimensions of the DSM-5 (see Kimonis et al., 2015). A German version of the ICU (Frick, 2004; German version by Essau et al., 2006) was used to record child-reported insensitive, insidious, and hard-hearted properties. 
On the ICU, the young people indicate how accurately each item describes their own behavior (from 0 = "not at all true" to 3 = "definitely true"). ### Statistical Methods #### Missing Values Missing values (all included items < 5% missing data) were estimated using Chained Equation Modelling (see White et al., 2011). To avoid the imputation of item values, which do not correspond to the possible characteristics of the items, estimated values are in turn replaced by the “nearest natural neighbor” (Predictive Mean Matching Method, Little, 1988). Imputation was carried out using the R package mice (Multivariate Imputation by Chained Equations in R; van Buuren & Groothuis-Oudshoorn, 2011). #### Indicator Selection We tested two different three-indicator sets based on the work of Essau et al. (2006) [Uncaring (#3, #5, #13, #15, #16, #17, #23, #24), Unemotional (#1, #6, #14, #19, #22) Callous (#2, #4, #7, #8, #9, #10, #11, #12, #18, #20, #21)] and Kimonis et al. (2015) (excluding item #2 and #10). Furthermore, we analyzed two four-indicator sets on the original model of the APSD [Uncaring (#4, #8, #12, #17, #21, #24), Unemotional (#1, #6, #10, #14, #19, #22), Callous (#2, #5, #9, #13, #16, #18), and Careless (#3, #7, #11, #15, #20, #23)] and the work of Kliem et al. (2020) (excluding item #2, #10, and #13). #### Taxometric Analysis As recommended by Ruscio et al. (2010), we applied three non-redundant taxometric procedures: Mean above minus below a cut (MAMBAC Meehl & Yonce, 1994;), maximum eigenvalue (MAXEIG; Waller & Meehl, 1998), and latent-mode factor analysis (L-MODE; Waller & Meehl, 1998). Following the suggestion by Ruscio et al., (2007; see Ruscio et al. (2011) for a comprehensive introduction), two comparison populations (each N = 100,000) using (a) the categorical model and (b) the dimensional model were generated for each of the taxometric procedures. Relevant aspects of the empirical data, such as skewness, inter-correlations, and non-normality were held constant. In a second step, random samples (K = 100; with the same sample size of the empirical data set) were drawn from both populations. The R package RTaxometrics by Ruscio and Wang (2017) was used for these simulations. All samples were then analyzed using the three different taxometric procedures (MAMBAC, MAXEIG, L-MODE). The root-mean-square distance between empirical data points on curves and data points on simulated categorical (FitCat) as well as simulated dimensional (FitDim) reference curves were calculated (smaller values indicating that both curves resemble one another more closely). Next, the comparison curve fit index (CCFI = FitDim / (FitDim + FitCat)) was calculated for each taxometric procedure. In accordance with Ruscio et al. (2010), the mean CCFI of the MAMBAC, MAXEIG, and L-MODE procedure was used to interpret the latent status of CU traits. Rather than estimating a putative taxon base rate and using that estimate to generate the taxon comparative data, we used the CCFI profile method developed by Ruscio et al. (2018). This method replicates the analysis with each base rate estimate between 2.5% and 97.5% in increments of 2.5%. If the construct is taxonic, the CCFI value should be greatest at the most accurate base rate estimation (Ruscio et al., 2018). 
In Monte Carlo simulations, this method provided a more accurate base rate estimation (in the case of categorical structure) as well as a particularly adequate estimate of latent structure on the basis of a CCFI profile value, whereby a CCFI profile value above 0.50 denotes a better fit for a categorical latent structure and a value below 0.50 denotes a better fit for a dimensional latent structure (Ruscio et al., 2018). We used Ruscio’s and Wang’s R package RTaxometrics (Ruscio & Wang, 2017) for the analysis. We performed CCFI profile analyses for the total sample as well as for males and females separately. #### Suitability of Data for Taxometric Analysis To check the prerequisites for taxometric analysis, assigning cases to putative groups is necessary. Based on Ruscio's, Ruscio's, and Carney's recommendations, case classification should be based on a meaningful diagnostic algorithm or valid assessment tool. It should be noted that any of these classification procedures is necessarily based on the assumption of a categorical latent structure. If taxometric results indicate a dimensional structure, this classification must however be questioned. Also, the determined base rates (see below) should then not be interpreted further. We used a group variable (taxon vs. complement) based on an algorithm presented by Kimonis et al. (2015). Four CU items (#3, #5, #6, and #8) were dichotomized (coded as present if rated 3 “definitely true”; see Kimonis et al., 2015). The following two groups were formed: Those reporting no symptoms or one symptom (i.e., not meeting CU specifier criteria) and those reporting ≥ 2 symptoms (i.e., meeting specifier criteria), reflecting the DSM-5 symptom threshold (APA, 2013). Based on this threshold, we found a base rate for the putative taxon group of 8.1% (n = 313) for the total sample, of 10.9% (n = 205) for the male sample as well as of 5.4% (n = 108) for the female sample, respectively. Taxometric analysis requires all standardized mean differences between the hypothetical categorical groups to be larger than Cohen’s d = 1.25. Furthermore, all indicators should correlate substantially with each other (mean r > 0.30), but the correlation should be substantially smaller within the hypothetical categorical groups (rwg ≤ 0.30) (Ruscio et al., 2011). ## Results ### Taxometric Analyses of CU traits #### Three-Indicator Sets The overwhelming majority of all standardized mean differences exceeded the required cut-off of d = 1.25 (see Table 1). We observed an average correlation between r = 0.28 and r = 0.35 and smaller correlation coefficients in the hypothetical categorical groups (Essau et al., 2006: between r = 0.12 and r = 0.16 [taxon], between r = 0.24 and r = 0.29 [complement]; Kimonis et al. (2015): between r = 0.11 and r = 0.17 [taxon], between r = 0.25 and r = 0.30 [complement]). Figure 1 depicts the graphical taxometric results for the CCFI profile analyses of both three-indicator sets (Essau et al., 2006; Kimonis et al., 2015). Strong support for the superiority of a dimensional model was detected regarding the total sample (Essau et al.: CCFI mean profile = 0.316; Kimonis et al.: CCFI mean profile = 0.316), the male sample (CCFI mean profile = 0.328 / 0.376), and the female sample (CCFI mean profile = 0.322 / 0.248). #### Four-Indicator Sets The majority of all standardized mean differences exceeded the required cut-off of d = 1.25 (see Table 1). 
We observed an average correlation between r = 0.39 and r = 0.39, and smaller correlations in the hypothetical categorical groups (APSD: between r = 0.25 and r = 0.28 [taxon], between r = 0.32 and r = 0.35 [complement]; Kliem et al., 2020: between r = 0.22 to 0.25 [taxon], between r = 0.32 and 0.35 [complement]). Figure 2 depicts the graphical taxometric results for the CCFI profile analyses of both four-indicator sets. Strong support for the superiority of a dimensional model was detected regarding the total sample (APSD: CCFI mean profile = 0.292; Kliem et al., 2020: CCFI mean profile = 0.285), the male sample (CCFI mean profile = 0.313 / 0.318), and the female sample (CCFI mean profile = 0.359 / 0.332). ## Discussion The present study evaluated the latent nature of CU traits in a large sample of German ninth graders. Results of different indicator sets clearly suggested a dimensional solution. This finding is consistent with previous studies showing the dimensionality of psychopathy in adolescents (Edens et al., 2011; Murrie et al., 2007; Walters, 2014) as well as of early disruptive behavior in preschoolers (Kliem et al., 2018). However, further studies are necessary to substantiate this result in different samples (especially in samples of adolescents with Conduct Disorder) using different measurement approaches (e.g., teacher reports, parent reports). However, a dimensional structure of CU traits has important theoretical and practical implications: First, results indicate that the process of classifying individuals in dichotomous groups (CU + risk group) needs to be considered very carefully. Second, it must be noted that people’s perceptions are affected when a construct is communicated as categorical (e.g., Prentice & Miller, 2007). For example, the term “high-risk group” implies that the condition is more enduring than a dimensional construct. Therefore, the present analysis should give reason for researchers to avoid labeling individuals in order to decrease the associated risk of stigmatization in both scientific communication and therapeutic contexts. Our finding appears to be of particular importance in the context of CU traits, since this clinical picture is generally associated with a poor prognosis (e.g., Frick & White, 2008), a negative linguistic connotation with so-called “psychopathic traits” or “evil or dark personality” (Murrie et al., 2005), as well as treatment non-response (Falkenbach et al., 2003; Gretton et al., 2001; Hawes & Dadds, 2005; Kolk & Pardini, 2010; Masi et al., 2013; O’Neill et al., 2003; Spain et al., 2004; Waschbusch et al., 2007). Furthermore, labeling juveniles may also have a punishment-enhancing effect in legal settings, especially since the term ‘psychopath’ is associated with attributes such as cold-bloodedness, evilness, a pronounced lack of remorse and particularly high risk of recidivism (e.g., Berryessa, & Wohlstetter, 2019; Petrila & Skeem, 2003). Third, for future research on CU traits, relevant implications can be drawn from the dimensionality of the construct. It seems particularly relevant that meaningful insights into the phenomenon can be derived from the study of subclinical samples. Furthermore, a dimensional structure suggests that a variety of risk factors affect the CU traits phenomenon (through addition and interaction). 
In this context, the polygenic nature of most psychiatric disorders should not be neglected: such disorders are influenced by hundreds to thousands of genetic variations, each with very small (and interacting) effects (Moore et al., 2019). ### Limitations There are many strengths of this study, including the very large and representative sample. However, the study has some limitations. Firstly, self-reports were the only data source used, so it is possible that the results are subject to monomethod bias (e.g., Kliem et al., 2015, 2016). When attempting to replicate our findings in future studies, investigators should ensure that other data sources are used, such as other self-report measures, teacher/parent reports, clinical interviews, and/or observational measures. Secondly, the data presented here are limited to the age group of ninth graders with a mean age of 15 years. Although the data are considered suitable for taxometric analysis, some within-group indicator correlations lie above the threshold of r = 0.30. According to Ruscio et al. (2006), difficulties in selecting appropriate indicators might themselves be indirect evidence of dimensionality. According to Meehl (1995), the estimated taxon base rate (the proportion of taxon members in the sample) should be ≥ 10%. Our results fell below this value in some of the analyses. This is a limitation; however, it can also be pointed out that in the total sample the putative taxon group contains a very large number of cases (Ntaxon > 300). It should be remembered that it is not only the base rate but also the absolute size of each group that determines the validity of a taxometric analysis. Furthermore, it may be noted that although the rate of ambiguous results for categorical data may increase slightly at base rates between 5 and 10%, erroneous results (e.g., an incorrectly determined solution) are rarely generated (see Ruscio et al., 2011). Thus, based on the findings of our study, which very clearly support a dimensional structure, there appears to be a relatively low risk that a dimensional solution was erroneously determined. ## Conclusion In summary, the results of this study point to the need for critical reflection in defining a high-risk group (CU+) in the context of CU traits. Although this classification may seem helpful to a clinician, it is possible that these classification systems impose clinical limitations that are not empirically defensible (see Haslam et al., 2006). With respect to the DSM-5 specifiers, the present results indicate that any classification into dichotomous groups needs to be considered very carefully. Furthermore, comparing prevalence rates across different groups (e.g., boys vs. girls, healthy vs. diseased, etc.) seems problematic. Indeed, the whole process of classifying individuals based on a sum score might be questionable (see Kliem et al., 2014).
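To make the case-classification step from the Methods concrete, the short Python sketch below reproduces the dichotomization and base-rate computation described there (an item counts as a symptom only when rated 3 "definitely true"; cases with at least 2 of the four CU items form the putative taxon). This is an editor's illustration under assumed column names and a pandas data layout, not the authors' code; the reported analyses were run in R with the RTaxometrics package.

```python
# Sketch of the CU-specifier case classification described in the Methods.
# Column names and the toy data are hypothetical; the study used R/RTaxometrics.
import pandas as pd

def classify_cu(df: pd.DataFrame, items=("cu3", "cu5", "cu6", "cu8"), threshold=2) -> pd.Series:
    """True = putative taxon member (at least `threshold` items rated 3)."""
    n_symptoms = (df[list(items)] == 3).sum(axis=1)   # symptom counts as present only if rated 3
    return n_symptoms >= threshold

# Toy example (three fictitious respondents, ratings 0-3 per item):
toy = pd.DataFrame({"cu3": [3, 1, 3], "cu5": [3, 0, 2], "cu6": [0, 1, 3], "cu8": [1, 2, 0]})
taxon = classify_cu(toy)
print(taxon.tolist())           # [True, False, True]
print(round(taxon.mean(), 3))   # base rate of the putative taxon group: 0.667
```

The resulting group variable would then feed the suitability checks quoted above (between-group Cohen's d, overall and within-group indicator correlations) before the CCFI profile itself is computed.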
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8136882185935974, "perplexity": 2874.148593512179}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00031.warc.gz"}
https://www.physicsforums.com/threads/the-simplist-concept-in-mathematics.9049/
# The simplest concept in mathematics

1. Nov 16, 2003

### Loren Booda

What is the simplest concept in mathematics?

2. Nov 16, 2003

### Ambitwistor

$$\emptyset$$

3. Nov 16, 2003

### HallsofIvy

Staff Emeritus

One can always get an argument between set theorists and logicians as to whether set theory is based on logic or logic is based on set theory. That is why I would say the simplest concept is either "set" or "True-False".

4. Nov 17, 2003

### Loren Booda

Logic/illogic?

5. Nov 20, 2003

### kai0ty

Um, I'd pick addition as the simplest. The 4 basic functions (addition, subtraction, division, and multiplication) are all different forms of adding: subtraction is adding a negative number, multiplication is adding a number a certain amount of times, and division is the inverse of multiplication. All functions can be done this way, so my choice is addition.

6. Nov 20, 2003

### brum

Multiplying by 0.

7. Nov 20, 2003

### ranyart

I would actually say that dividing by zero involves a lot less working out!

8. Nov 21, 2003

### kai0ty

I believe he said concept, not function. I would agree that multiplying by 0 is the easiest function, but I like my answer better.

9. Nov 22, 2003

### Organic

Our ability to recognize and associate between opposite concepts; in my opinion, this is the heart of the language of math.

10. Nov 22, 2003

### Unit_Zer0

That there is only 1 correct answer to a given problem.

11. Nov 23, 2003

### kai0ty

Unit_Zer0, there is not always only 1 answer to a problem. For instance, sqrt(9) could be either -3 or 3, couldn't it?

12. Nov 26, 2003

### suyver

No! sqrt(9) is always 3. Maybe you're confused with the equation x^2 = 9, which indeed has solutions x = 3 and x = -3. In answer to the original question: I'd say that 1 (or the n x n identity matrix) would be the simplest concept available...

Last edited: Nov 26, 2003

13. Nov 28, 2003

### StarkyDee

1 or 0, something or nothing... binary
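As a side note on the sqrt(9) exchange above, the distinction suyver draws between the square-root function and the equation x^2 = 9 is easy to check; the snippet below is an editor's illustration in Python/SymPy and is not part of the original thread.

```python
# Illustrating the difference between sqrt(9) and the solutions of x^2 = 9.
import math
from sympy import symbols, Eq, solve

print(math.sqrt(9))            # 3.0 -- the square-root function returns one non-negative value
x = symbols('x')
print(solve(Eq(x**2, 9), x))   # [-3, 3] -- the equation has two solutions
```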
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278912901878357, "perplexity": 3853.0596169010278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825174.90/warc/CC-MAIN-20171022094207-20171022114207-00842.warc.gz"}
http://arxiv.org/abs/0709.1946
Title: Cross sections and beam asymmetries for $\vec{e}p \to e n\pi^+$ in the nucleon resonance region for $1.7 \le Q^2 \le 4.5~(\mathrm{GeV})^2$

Abstract: The exclusive electroproduction process $\vec{e}p \to e^\prime n \pi^+$ was measured in the range of the photon virtuality $Q^2 = 1.7 - 4.5 \rm{GeV^2}$, and the invariant mass range for the $n\pi^+$ system of $W = 1.15 - 1.7 \rm{GeV}$ using the CEBAF Large Acceptance Spectrometer. For the first time, these kinematics are probed in exclusive $\pi^+$ production from protons with nearly full coverage in the azimuthal and polar angles of the $n\pi^+$ center-of-mass system. The $n\pi^+$ channel has particular sensitivity to the isospin 1/2 excited nucleon states, and together with the $p\pi^0$ final state will serve to determine the transition form factors of a large number of resonances. The largest discrepancy between these results and present models was seen in the $\sigma_{LT'}$ structure function. In this experiment, 31,295 cross section and 4,184 asymmetry data points were measured. Because of the large volume of data, only a reduced set of structure functions and Legendre polynomial moments can be presented that are obtained in model-independent fits to the differential cross sections.

Subjects: Nuclear Experiment (nucl-ex); High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph); Nuclear Theory (nucl-th)
Journal reference: Phys.Rev.C77:015208,2008
DOI: 10.1103/PhysRevC.77.015208
Cite as: arXiv:0709.1946 [nucl-ex] (or arXiv:0709.1946v2 [nucl-ex] for this version)
Submission history
From: Kijun Park
[v1] Wed, 12 Sep 2007 14:37:46 GMT (918kb,D)
[v2] Mon, 24 Sep 2007 16:53:27 GMT (874kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7948282361030579, "perplexity": 2382.3322481418445}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738660957.45/warc/CC-MAIN-20160924173740-00151-ip-10-143-35-109.ec2.internal.warc.gz"}
https://ccssmathanswers.com/eureka-math-geometry-module-5-end-of-module-assessment/
# Eureka Math Geometry Module 5 End of Module Assessment Answer Key ## Engage NY Eureka Math Geometry Module 5 End of Module Assessment Answer Key ### Eureka Math Geometry Module 5 End of Module Assessment Task Answer Key Question 1. Let C be the circle in the coordinate plane that passes though the points (0,0), (0,6), and (8,0). a. What are the coordinates of the center of the circle? Since the angle formed by the points (0,6) , (0,0) , and (8,0) is a right angle, the line segment connecting (0,6) to (8,0) must be the diameter of the circle. Therefore, the center of the circle is (4,3) , the midpoint of this diameter. b. What is the area of the portion of the interior of the circle that lies in the first quadrant? (Give an exact answer in terms of π.) The distance between (0,6) and (8,0) is 10: d=$$\sqrt{6^{2}+8^{2}}$$=10 So, the circle has radius 5. The area in question is composed of half a circle and a right triangle. Area=($$\frac{1}{2}$$ ∙ 8 ∙ 6 ) + ($$\frac{1}{2}$$ π52) = $$\frac{25\pi}{2}$$ + 24 Therefore, its area is $$\frac{25\pi}{2}$$ + 24 square units. c. What is the area of the portion of the interior of the circle that lies in the second quadrant? (Give an approximate answer correct to one decimal place.) We seek the area of the region shown. We have a chord of length 6 in a circle of radius 5. Label the angle x as shown and distance d. By the Pythagorean theorem, d = 4. We also know that sin(x) = $$\frac{3}{5}$$ , so x ≈ 36.9° . The shaded area is the difference of the area of a sector and of a triangle. We have area=($$\frac{2x}{360}$$ π 52)-($$\frac{1}{2}$$ ∙ 6∙ 4 ) ≈ ($$\frac{73.8}{360}$$ ∙ 25π ) – 12 ≈ 4.1 The area is 4.1 units2. d. What is the length of the arc of the circle that lies in the first quadrant with endpoints on the axes? (Give an exact answer in terms of π.) Since this arc is a semicircle, it is half the circumference of the circle in length: $$\frac{1}{2}$$ ∙ 2π ∙ 5 = 5π The length is 5π units. e. What is the length of the arc of the circle that lies in the second quadrant with endpoints on the axes? (Give an approximate answer correct to one decimal place.) Using the notation of part (c), this length is calculated as follows: $$\frac{2x}{360}$$ ⋅ 2π ⋅ 5 ≈ $$\frac{73.8}{360}$$ ⋅10π ≈ 6.4 The length of the arc is approximately 6.4 units . f. A line of slope – 1 is tangent to the circle with point of contact in the first quadrant. What are the coordinates of that point of contact? Draw a radius from the center of the circle, (4,3) , to the point of contact, which we will denote (x,y) . This radius is perpendicular to the tangent line and has slope 1. Consequently, $$\frac{y-3}{x-4}$$ = 1 ; that is, y – 3 = x – 4 . Also, since (x,y) lies on the circle, we have (x – 4 )2 + (y – 3 )2 = 25. For both equations to hold, we must have (x – 4 )2 + (x – 4 )2 = 25 , giving x = 4 + $$\frac{5}{\sqrt{2}}$$ , or x = 4 – $$\frac{5}{\sqrt{2}}$$ . It is clear from the diagram that the point of contact we seek has its x-coordinate to the right of the x-coordinate of the center of the circle. So, choose x = 4 + $$\frac{5}{\sqrt{2}}$$ . The matching y-coordinate is y = x – 4 + 3 = x – 1 = 3 + $$\frac{5}{\sqrt{2}}$$ , so the point of contact has coordinates (4 + $$\frac{5}{\sqrt{2}}$$, 3 + $$\frac{5}{\sqrt{2}}$$ ). g. Describe a sequence of transformations that show circle C is similar to a circle with radius one centered at the origin. Circle C has center (4, 3 ) and radius 5. First, translate the circle four units to the left and three units downward. 
This gives a congruent circle with the origin as its center. (The radius is still 5.) Perform a dilation from the origin with scale factor $$\frac{1}{5}$$ . This will produce a similar circle centered at the origin with radius 1. h. If the same sequence of transformations is applied to the tangent line described in part (f), will the image of that line also be a line tangent to the circle of radius one centered about the origin? If so, what are the coordinates of the point of contact of this image line and this circle? Translations and dilations map straight lines to straight lines. Thus, the tangent line will still be mapped to a straight line. The mappings will not alter the fact that the circle and the line touch at one point. Thus, the image will again be a line tangent to the circle. Under the translation described in part (g), the point of contact, (4 + $$\frac{5}{\sqrt{2}}$$, 3 + $$\frac{5}{\sqrt{2}}$$), is mapped to ($$\frac{5}{\sqrt{2}}$$, $$\frac{5}{\sqrt{2}}$$). Under the dilation described, this is then mapped to ($$\frac{1}{\sqrt{2}}$$, $$\frac{1}{\sqrt{2}}$$). Question 2. In the figure below, the circle with center O circumscribes △ABC. Points A, B, and P are collinear, and the line through P and C is tangent to the circle at C. The center of the circle lies inside △ABC. a. Find two angles in the diagram that are congruent, and explain why they are congruent. Draw two radii as shown. Let m∠BAC = x . Then by the inscribed/central angle theorem, we have m∠BOC = 2x . Since △BOC is isosceles, it follows that m∠OCB = $$\frac{1}{2}$$ (180° – 2x ) = 90°- x . By the radius/tangent theorem, m∠OCP = 90° , so m∠BCP = x . We have ∠BAC ≅ ∠BCP because they intercept the same arc and have the same measure. b. If B is the midpoint of $$\overline{A P}$$ and PC = 7, what is the length of $$\overline{P B}$$? By the previous question, △ ACP and △ CBP each have an angle of measure x and share the angle at P. Thus, they are similar triangles. Since △ ACP and △ CBP are similar, matching sides come in the same ratio. Thus, $$\frac{PB}{PC}$$ =$$\frac{PC}{AP}$$. Now, AP = 2 ⋅ PB , and PC = 7 , so $$\frac{PB}{7}$$ = $$\frac{7}{2PB}$$. This gives PB = $$\frac{7}{\sqrt{2}}$$. c. If m∠BAC = 50°, and the measure of the arc AC is 130°, what is m∠P? By the inscribed/central angle theorem, arc BC has measure 100°. By the secant/tangent angle theorem, m∠P = $$\frac{130^{\circ}-100^{\circ}}{2}$$ = 15°. (One can also draw in radii and base angles in triangles to obtain the same result.) Question 3. The circumscribing circle and the inscribed circle of a triangle have the same center. a. By drawing three radii of the circumscribing circle, explain why the triangle must be equiangular and, hence, equilateral. Draw the three radii as directed, and label six angles a, b, c, d, e, and f as shown. We have a = f because they are base angles of an isosceles triangle. (We have congruent radii.) In the same way, b = c and d = e. From the construction of an inscribed circle, we know that each radius drawn is an angle bisector of the triangle. Thus, we have a = b, c = d, and e = f. It now follows that a = b = c = d = e = f. In particular, a + b = c + d = e + f, and the triangle is equiangular. Therefore, the triangle is equilateral. b. Prove again that the triangle must be equilateral, but this time by drawing three radii of the inscribed circle. By the construction of the circumscribing circle of a triangle, each radius in this picture is the perpendicular bisector of a side of the triangle. 
If we label the lengths a, b, c, d, e, and f as shown, it follows that b = c, d = e, and a = f. By the two-tangents theorem, we also have a = b, c = d, and e = f. Thus, a = b = c = d = e = f, and in particular, b + c = d + e = a + f; therefore, the triangle is equilateral. c. Describe a sequence of straightedge and compass constructions that allows you to draw a circle inscribed in a given equilateral triangle. The center of an inscribed circle lies at the point of intersection of any two angle bisectors of the equilateral triangle. To construct an angle bisector: 1. Draw a circle with center at one vertex P of the triangle intersecting two sides of the triangle. Call those two points of intersection A and B. 2. Setting the compass at a fixed width, draw two congruent intersecting circles, one centered at A and one centered at B. Call a point of intersection of these two circles Q. (We can assume Q is different from P.) 3. The line through P and Q is an angle bisector of the triangle. Next, construct two such angle bisectors and call their point of intersection O. This is the center of the inscribed circle. Finally, draw a line through O perpendicular to one side of the triangle. To do this: 1. Draw a circle centered at O that intersects one side of the triangle at two points. Call those points C and D. 2. Draw two congruent intersecting circles, one with center C and one with center D. 3. Draw the line through the points of intersection of those two congruent circles. This is a line through O perpendicular to the side of the triangle. Suppose this perpendicular line through O intersects the side of the triangle at the point R. Set the compass to have width equal to OR. This is the radius of the inscribed circle; so, drawing a circle of this radius with center O produces the inscribed circle. Question 4. Show that (x – 2)(x – 6) + (y – 5)(y + 11) = 0 is the equation of a circle. What is the center of this circle? What is the radius of this circle? We have (x – 2 )(x – 6 ) + (y – 5 )(y + 11 ) = 0 x2 – 8x + 12 + y2 + 6y – 55 = 0 x2 – 8x + 16 + y2 + 6y + 9 = 4 + 64 (x – 4 )2 + (y + 3 )2 = 68. This is the equation of a circle with center (4, -3 ) and radius $$\sqrt{68}$$. b. A circle has diameter with endpoints (a, b) and (c, d). Show that the equation of this circle can be written as (x – a)(x – c) + (y – b)(y – d) = 0. The midpoint of the diameter, which is ($$\frac{a+c}{2}$$, $$\frac{b+d}{2}$$), is the center of the circle; half the distance between the endpoints, which is $$\sqrt{(c-a)^{2}+(d-b)^{2}}$$ $$\frac{1}{2}$$, is the radius of the circle. Thus, the equation of the circle is (x – $$\frac{a+c}{2}$$)2 + (y – $$\frac{b-d}{2}$$)2 = $$\frac{1}{4}$$ ((c -a )2 + (d -b )2 ). Multiplying through by 4 gives (2x – a -c )2 + (2y – b -d )2 = (c – a )2 + (d – b )2 . This becomes 4 x2 + a2 + c2-4xa-4xc + 2ac + 4 y2 + b2 + d2 – 4yb – 4yd + 2bd = c2 + a2– 2ac + d2 + b2 – 2bd. That is, 4 x2 – 4xa – 4xc + 4ac + 4 y2 – 4yb – 4yd + 4bd = 0. Dividing through by 4 gives x2 – xa – xc + ac + y2 – yb – yd + bd = 0. That is, (x – a )(x – c ) + (y – b )(y – d ) = 0, as desired. Question 5. Prove that opposite angles of a cyclic quadrilateral are supplementary. x + y = $$\frac{360^{\circ}}{2}$$ = 180° .
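As a quick numerical cross-check of the arithmetic in Questions 1 and 4, the short Python sketch below (an editor's illustration, not part of the answer key) reproduces the center, radius, areas, and tangent point of Question 1 and the completed square of Question 4.

```python
# Numerical check of selected answers from Questions 1 and 4 (illustrative only).
import math

# Question 1: the circle through (0,0), (0,6), (8,0) has the segment from (0,6) to (8,0) as diameter.
center = ((0 + 8) / 2, (6 + 0) / 2)                 # (4, 3)
radius = math.hypot(8 - 0, 0 - 6) / 2               # 10 / 2 = 5

# 1(b): first-quadrant area = half disk + right triangle
area_b = 0.5 * math.pi * radius**2 + 0.5 * 8 * 6    # 25*pi/2 + 24, about 63.27

# 1(c): second-quadrant area = circular segment cut off by the chord along the y-axis
half_angle = math.asin(3 / 5)                       # chord half-length 3, radius 5
area_c = half_angle * radius**2 - 0.5 * 6 * 4       # sector minus triangle, about 4.09

# 1(f): the tangent of slope -1 touches where the slope-1 radius meets the circle
contact = (4 + 5 / math.sqrt(2), 3 + 5 / math.sqrt(2))   # about (7.54, 6.54)

# Question 4: (x-2)(x-6) + (y-5)(y+11) = 0 completes the square to (x-4)^2 + (y+3)^2 = 68
q4_center, q4_radius = (4, -3), math.sqrt(68)

print(center, radius, round(area_b, 2), round(area_c, 2))
print(tuple(round(v, 2) for v in contact), q4_center, round(q4_radius, 3))
```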
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8707501888275146, "perplexity": 409.48882524141516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00130.warc.gz"}
http://www.texttutoring.com/car-rolling-cliff/
# A car rolling off a cliff

Q: A car is parked on a cliff overlooking the ocean on an incline that makes an angle of 24.0° below the horizontal. The negligent driver leaves the car in neutral, and the emergency brakes are defective. The car rolls from rest down the incline with a constant acceleration of 4.00 m/s^2 for a distance of 50.0 m to the edge of the cliff. The cliff is 30.0 m above the ocean. Find (a) the car's position relative to the base of the cliff when the car lands in the ocean, and (b) the length of time the car is in the air.

A: You can use this equation to find the speed $V_f$ at which the car leaves the cliff and begins to fall:

$2ad=V_f^2 - V_i^2$

$2(4)(50) = V_f^2 - 0$

$V_f = 20 \text{ m/s}$

Now you can split this velocity up into its components. As the car leaves the cliff, it forms a triangle of velocities with 20 on the hypotenuse, and Vx = 20*cos(24°), Vy = 20*sin(24°), with Vy directed below the horizontal.

These are the initial velocities in the horizontal and vertical directions, respectively. Use

$d = V_i t + \frac{1}{2} at^2$

in the vertical direction to solve for time:

$-30 = -20\sin(24^\circ)\, t + \frac{1}{2}(-9.8) t^2$

Rearrange and use the quadratic formula, with a = -4.9, b = -20*sin(24°) and c = 30.

You should find that t = 1.78 seconds. You can then use this time in D = VT in the horizontal direction to solve for D.

D = VT
D = 20*cos(24°) * 1.78
D = 32.5 m

Hope that makes sense!
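For readers who want to verify the numbers, here is a small Python sketch (added for illustration, not part of the original answer) that reproduces the launch speed, time of flight, and landing distance.

```python
# Check of the car-off-a-cliff numbers (illustrative; not from the original answer).
import math

g, angle = 9.8, math.radians(24.0)
a_incline, d_incline, h_cliff = 4.00, 50.0, 30.0

v = math.sqrt(2 * a_incline * d_incline)            # speed at the cliff edge: 20 m/s
vx, vy = v * math.cos(angle), v * math.sin(angle)   # vy is directed downward

# Solve h_cliff = vy*t + (g/2)*t^2 for t (taking "down" as positive)
t = (-vy + math.sqrt(vy**2 + 2 * g * h_cliff)) / g
x = vx * t                                          # horizontal distance from the cliff base

print(round(v, 2), round(t, 2), round(x, 1))        # 20.0 1.78 32.5
```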
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6661300659179688, "perplexity": 800.7418956870993}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912205597.84/warc/CC-MAIN-20190326160044-20190326182044-00011.warc.gz"}
https://www.dnasir.com/2013/10/09/display-mode-detection-for-responsive-websites-using-angularjs/
# Display Mode Detection for Responsive Websites using AngularJS

I've recently started using AngularJS in a project I'm currently working on. So far, AngularJS has been great, and I've especially enjoyed building my own custom directives. Here's one I'm using to determine the current display mode for responsive websites.

    angular.module('myApp', [])
      .directive('dnDisplayMode', function($window) {
        return {
          restrict: 'A',
          scope: {
            dnDisplayMode: '='
          },
          template: '<span class="mobile"></span><span class="tablet"></span><span class="tablet-landscape"></span><span class="desktop"></span>',
          link: function(scope, elem, attrs) {
            var markers = elem.find('span');

            function isVisible(element) {
              return element && element.style.display != 'none' && element.offsetWidth && element.offsetHeight;
            }

            function update() {
              angular.forEach(markers, function (element) {
                if (isVisible(element)) {
                  scope.dnDisplayMode = element.className;
                  return false;
                }
              });
            }

            var t;
            angular.element($window).bind('resize', function() {
              clearTimeout(t);
              t = setTimeout(function() {
                update();
                scope.$apply();
              }, 300);
            });

            update();
          }
        };
      });

And here's the CSS you'll need to get this working.

    .display-mode {
      height: 0;
      margin: 0;
      overflow: hidden;
      padding: 0;
    }

    .display-mode span {
      display: inline-block;
      height: 1px;
      width: 1px;
    }

    @media only screen and (max-width: 712px) {
      .display-mode .tablet,
      .display-mode .tablet-landscape,
      .display-mode .desktop {
        display: none;
      }
    }

    @media only screen and (min-width: 713px) and (max-width: 954px) {
      .display-mode .mobile,
      .display-mode .tablet-landscape,
      .display-mode .desktop {
        display: none;
      }
    }

    @media only screen and (min-width: 955px) and (max-width: 1195px) {
      .display-mode .mobile,
      .display-mode .tablet,
      .display-mode .desktop {
        display: none;
      }
    }

    @media (min-width: 1196px) {
      .display-mode .mobile,
      .display-mode .tablet,
      .display-mode .tablet-landscape {
        display: none;
      }
    }

And here's how you would use it.

    <html ng-app="myApp">
    ...
    <body ng-controller="MainCtrl">
      <div class="display-mode" dn-display-mode="displayMode"></div>
    ...

    angular.module('myApp')
      .controller('MainCtrl', function($scope) {
        $scope.displayMode = 'mobile'; // default value

        $scope.$watch('displayMode', function(value) {
          switch(value) {
            case 'mobile':
              // do stuff for mobile mode
            case 'tablet':
              // do stuff for tablet mode
            // and so on
          }
        });
      });

As you can see, I'm not doing any calculations in determining the current display mode. I figured that it was far simpler to just use CSS and media queries to show or hide the elements I've dynamically added to the DOM. The directive just checks to see which element is visible whenever the window is resized, and sets the model value accordingly. This is pretty useful if you have widgets, like the site navigation, that operate differently depending on the display mode. Hopefully someone will find this useful. Wassalam
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2314835786819458, "perplexity": 4813.921235742966}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891813818.15/warc/CC-MAIN-20180221222354-20180222002354-00063.warc.gz"}
https://optimization.cbe.cornell.edu/index.php?title=Duality&oldid=886
# Duality Author: Claire Gauthier, Trent Melsheimer, Alexa Piper, Nicholas Chung, Michael Kulbacki (SysEn 6800 Fall 2020) Steward: TA's name, Fengqi You ## Introduction Every linear programming optimization problem may be viewed either from the primal or from the dual; this is the principle of duality. Duality develops the relationships between one linear programming problem and another related linear programming problem. For example, in economics, if the primal optimization problem deals with production and consumption levels, then the dual of that problem relates to the prices of goods and services. The dual variables in this example can be referred to as shadow prices. The shadow price of a constraint ... ## Theory, methodology, and/or algorithmic discussions ### Definition: Primal maximize $z=\sum_{j=1}^{n} c_{j}x_{j}$ subject to: Dual
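For reference, a standard symmetric-form statement of the primal-dual pair that this definition is building toward is given below; this is the textbook formulation, added for context rather than text recovered from the original page.

```latex
% Standard symmetric-form LP duality (textbook statement, added for reference; needs amsmath).
\begin{align*}
\text{Primal:}\quad & \max_{x}\; z=\sum_{j=1}^{n}c_j x_j
  &&\text{s.t. } \sum_{j=1}^{n}a_{ij}x_j \le b_i \;(i=1,\dots,m),\; x_j \ge 0,\\
\text{Dual:}\quad   & \min_{y}\; w=\sum_{i=1}^{m}b_i y_i
  &&\text{s.t. } \sum_{i=1}^{m}a_{ij}y_i \ge c_j \;(j=1,\dots,n),\; y_i \ge 0.
\end{align*}
```

Weak duality then gives z ≤ w for any pair of feasible solutions, and at optimality the dual variables y_i play the role of the shadow prices mentioned in the introduction.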
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7965445518493652, "perplexity": 3317.1327510997635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00060.warc.gz"}
http://math.stackexchange.com/questions/44821/orthogonal-basis-for-operatornametrab
# Orthogonal basis for $\operatorname{Tr}(AB)$ I recently stumbled across this bilinear form: $\beta(A,B)=\operatorname{Tr}(AB)$ for $A,B \in \mathbb{R}^{n,n}$. I am searching for an orthogonal basis. With many difficulties I could find one for $\mathbb{R}^{2,2}$ namely: $$B_{2,2}=\left\{\left( \begin{array}{cc} \frac{1}{\sqrt{2}} & 0 \\ 0 & \frac{1}{\sqrt{2}} \end{array} \right),\left( \begin{array}{cc} 0 & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & 0 \end{array} \right),\left( \begin{array}{cc} \frac{1}{\sqrt{2}} & 0 \\ 0 & -\frac{1}{\sqrt{2}} \end{array} \right),\left( \begin{array}{cc} 0 & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & 0 \end{array} \right)\right\}$$ This basis was also chosen so that the representation matrix of $\beta$ is very elegant: $$M_\beta=\left( \begin{array}{cccc} \beta \left(b_1,b_1\right) & \beta \left(b_1,b_2\right) & \beta \left(b_1,b_3\right) & \beta \left(b_1,b_4\right) \\ \beta \left(b_2,b_1\right) & \beta \left(b_2,b_2\right) & \beta \left(b_2,b_3\right) & \beta \left(b_2,b_4\right) \\ \beta \left(b_3,b_1\right) & \beta \left(b_3,b_2\right) & \beta \left(b_3,b_3\right) & \beta \left(b_3,b_4\right) \\ \beta \left(b_4,b_1\right) & \beta \left(b_4,b_2\right) & \beta \left(b_4,b_3\right) & \beta \left(b_4,b_4\right) \end{array} \right)=\left( \begin{array}{cccc} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{array} \right)$$ So I can read the positive index of inertia (basically the number of 1s on the diagonal) which is in this case $n_+=3$. I am looking for such bases for higher dimensions of $\mathbb{R}^{n,n}$ but could not succeed. Thank you in advance. You can construct an orthogonal basis with only $1$ and $-1$ on the diagonal by starting from the canonical basis $(b_{kl})_{ij}=\delta_{ik}\delta_{jl}$. The products are all zero except when the two factors are transposes of each other. Thus, each basis element $b_{kl}$ with $k=l$ is already orthogonal to all the others, and the remaining basis elements form pairs of transposes. You can form linear combinations of these pairs with coefficients $\pm 1/\sqrt{2}$ like you did for $n=2$ to form orthogonal combinations of them, one with "norm" $1$ and one with $-1$. Thus, the positive index of inertia is $n+\frac{n(n-1)}{2}=\frac{n(n+1)}{2}$.
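To check the answer's construction numerically, here is a short NumPy sketch (an editor's illustration, not part of the original question or answer) that builds the proposed basis for n = 3 and verifies that the Gram matrix of β(A, B) = Tr(AB) is diagonal with n(n+1)/2 entries equal to +1.

```python
# Numerical check of the answer's construction for beta(A, B) = Tr(AB) with n = 3.
import numpy as np

n = 3

def E(k, l):
    """Canonical matrix unit E_kl: 1 in row k, column l, zeros elsewhere."""
    M = np.zeros((n, n))
    M[k, l] = 1.0
    return M

basis = [E(k, k) for k in range(n)]                      # diagonal units, "norm" +1
for k in range(n):
    for l in range(k + 1, n):
        basis.append((E(k, l) + E(l, k)) / np.sqrt(2))   # symmetric combination, "norm" +1
        basis.append((E(k, l) - E(l, k)) / np.sqrt(2))   # antisymmetric combination, "norm" -1

G = np.array([[np.trace(A @ B) for B in basis] for A in basis])

assert np.allclose(G, np.diag(np.diag(G)))               # basis is orthogonal for beta
print(np.diag(G).round(6))                               # diagonal entries are +1 / -1
print(int((np.diag(G) > 0).sum()))                       # positive index of inertia: n(n+1)/2 = 6
```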
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7654568552970886, "perplexity": 121.2535252047969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701153585.76/warc/CC-MAIN-20160205193913-00320-ip-10-236-182-209.ec2.internal.warc.gz"}
https://link.springer.com/chapter/10.1007/978-3-662-55386-2_27?error=cookies_not_supported&code=9d8c2fdb-4081-435a-b3a6-0c9d01cb6151
# Coherent Diagrammatic Reasoning in Compositional Distributional Semantics Conference paper Part of the Lecture Notes in Computer Science book series (LNCS, volume 10388) ## Abstract The framework of Categorical Compositional Distributional models of meaning [3], inspired by category theory, allows one to compute the meaning of natural language phrases, given basic meaning entities assigned to words. Composing word meanings is the result of a functorial passage from syntax to semantics. To keep one from drowning in technical details, diagrammatic reasoning is used to represent the information flow of sentences that exists independently of the concrete instantiation of the model. Not only does this serve the purpose of clarification, it moreover offers computational benefits as complex diagrams can be transformed into simpler ones, which under coherence can simplify computation on the semantic side. Until now, diagrams for compact closed categories and monoidal closed categories have been used (see [2, 3]). These correspond to the use of pregroup grammar [12] and the Lambek calculus [9] for syntactic structure, respectively. Unfortunately, the diagrammatic language of Baez and Stay [1] has not been proven coherent. In this paper, we develop a graphical language for the (categorical formulation of) the nonassociative Lambek calculus [10]. This has the benefit of modularity where extension of the system are easily incorporated in the graphical language. Moreover, we show the language is coherent with monoidal closed categories without associativity, in the style of Selinger’s survey paper [17]. ## Keywords Diagrammatic reasoning Coherence theorem Proof nets Compositional distributional semantics ## Notes ### Acknowledgements The author is greatly indebted for many fruitful discussions with Michael Moortgat during the writing of the MSc thesis on which this paper is largely based. Also, a thanks goes out to Mehrnoosh Sadrzadeh for discussions culminating in the existence of this paper. A thanks as well to John Baez and Peter Selinger for giving some advice a long time ago on the topic of diagrammatic reasoning. Finally, the author would like to thank the two anonymous referees of this paper. The author was supported by a Queen Mary Principal’s Research Studentship during the writing of this paper. ## References 1. 1. Baez, J., Stay, M.: Physics, topology, logic and computation: a rosetta stone. In: Coecke, B. (ed.) New Structures for Physics, pp. 95–172. Springer, Heidelberg (2010). doi: 2. 2. Coecke, B., Grefenstette, E., Sadrzadeh, M.: Lambek vs. Lambek: functorial vector space semantics and string diagrams for lambek calculus. Ann. Pure Appl. log. 164(11), 1079–1100 (2013) 3. 3. Coecke, B., Sadrzadeh, M., Clark, S.: Mathematical foundations for a compositional distributional model of meaning. arXiv preprint arXiv:1003.4394 (2010) 4. 4. Freyd, P., Yetter, D.N.: Coherence theorems via knot theory. J. Pure Appl. Algebr. 78(1), 49–76 (1992) 5. 5. Girard, J.Y.: Linear logic. Theor. Comput. Sci. 50(1), 1–101 (1987) 6. 6. Joyal, A., Street, R.: The geometry of tensor calculus, I. Adv. Math. 88(1), 55–112 (1991) 7. 7. Joyal, A., Street, R.: Braided tensor categories. Adv. Math. 102(1), 20–78 (1993) 8. 8. Kelly, G.M., Laplaza, M.L.: Coherence for compact closed categories. J. Pure Appl. Algebr. 19, 193–213 (1980) 9. 9. Lambek, J.: The mathematics of sentence structure. Am. Math. Mon. 65(3), 154–170 (1958) 10. 10. Lambek, J.: On the calculus of syntactic types. Struct. Lang. Math. Asp. 
166–178 (1961) 11. 11. Lambek, J.: Deductive systems and categories. Theory Comput. Syst. 2(4), 287–318 (1968) 12. 12. Lambek, J.: Type grammar revisited. In: Lecomte, A., Lamarche, F., Perrier, G. (eds.) LACL 1997. LNCS, vol. 1582, pp. 1–27. Springer, Heidelberg (1999). doi: 13. 13. Lambek, J., Scott, P.J.: Introduction to Higher-Order Categorical Logic, vol. 7. Cambridge University Press, Cambridge (1988) 14. 14. Moortgat, M.: Multimodal linguistic inference. J. Log. Lang. Inf. 5(3–4), 349–385 (1996) 15. 15. Moot, R.: Proof Nets for Linguistic Analysis. Ph.D. thesis, Utrecht University (2002) 16. 16. Penrose, R.: Applications of negative dimensional tensors. In: Combinatorial Mathematics and its Applications, vol. 1, pp. 221–244 (1971) 17. 17. Selinger, P.: A survey of graphical languages for monoidal categories. In: Coecke, B. (ed.) New Structures for Physics, pp. 289–355. Springer, Heidelberg (2010) 18. 18. Wijnholds, G.J.: Categorical foundations for extended compositional distributional models of meaning. MSc. thesis, University of Amsterdam (2014)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8416976928710938, "perplexity": 4419.186125955508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154042.23/warc/CC-MAIN-20210731011529-20210731041529-00085.warc.gz"}
https://brilliant.org/discussions/thread/april-newsletter-2/
× # April Newsletter $$\textbf{Hello Every} \displaystyle \int_{-\infty}^0 {e^x \ dx} \\ \\ \textbf{ From the Moderators to You: Happy April!}$$ March has gone by fast! Everyone is rejoicing the end of their examinations and are active as before, and Brilliant is enjoying it. April marks the start of the Troll King Contest. Here are the highlights of the month. $\large \mathbf{ NEW} \ \ \ \mathbf{ FEATURES } \ \ \ \mathbf{ TO } \ \ \ \mathbf{ BE } \ \ \ \mathbf{ EXCITED } \ \ \ \mathbf{ ABOUT }$ The Brilliant community is always undergoing awe-inspiring growth. These new features are meant to foster an incredible experience on Brilliant. Excited to hear about what's in the spotlight this month yet? Okay, here we go: 1.The Search Toolbar This toolbar allows you access to all of the Brilliant. You can filter even better. Find anything you want. 2.DogTex Cute Dogs can teach Math better. Math and science can be difficult. We'd all like to be Level 5 in everything, but sometimes a crazy problem gets us down. But what if there was a better way to learn math and science? That's where DogTeX comes in! $\large \mathbf{ THE } \ \ \ \mathbf{ POSTS } \ \ \ \mathbf{ COMMUNITY } \ \ \ \mathbf{ LOVED }$ From problems to solutions, from notes to wikis, these are the posts that were highly appreciated by the whole community. Here's your chance to have a look at what captured our attention for the past month: 1. $$\textbf{Popular Notes:}$$ Nothing better for the JEE Aspirants other than Sandeep sir's five minutes revision notes. He has posted the Mathematical Reasoning for JEE-Mains to ensure you get 120/120 in JEE. Don't forget to check it out! 2. $$\textbf{Huge Events:}$$ There is an ongoing Troll King Contest '16, don't forget to fool others. Show off your Problem writing skills at the Problem Writing Party: March 28th - April 10th. 3. $$\textbf{Contests:}$$ Since the exam season ended, Brilliant was flooded with contests. There were Brilliant Junior Integration Contest, Brilliant Sub Junior Calculus Contest and the Brilliant Junior Number Theory Contest. Be sure to learn new techniques and ways from these AWESOME contests. 4. $$\textbf{Popular Problems:}$$ Problem Yard is the other name for Brilliant, and last month, as usual, Brilliant was showered by them. It was very hard to select and here goes nothing(actually 2 problems, not nothing :P) Fermat's Last Theorem, the image might be mesmerizing your brains but remember internet images LIE. Earth In Trouble? , is it the waves, the mass or neither which is putting earth in trouble. Go SAVE it. $\large \mathbf{ NEW } \ \ \ \mathbf{ AND } \ \ \ \mathbf{ ACTIVE } \ \ \ \mathbf{ MEMBERS }$ As time goes on, more and more people join the community, and a few of them are very active in contributing to the community by posting problems, solutions, wikis, notes, and pretty much everything else to the point that we should honor them as hardworking members. Let's give a round of applause to James Wong, Guillermo Templado, Yash Jain, Shithil Islam, Mark C and Jack Sacks. $\large \mathbf{ WHO } \ \ \ \mathbf{ TO } \ \ \ \mathbf{ FOLLOW }$ Here's the long-awaited WhoToFollow list showing off the dazzling members of our community. I'm pleased to introduce you to the wonderful people who are constantly helping to build the community. Here are the names, in no particular order: Be sure to hit the $$\color{green}{\boxed{\text{+Follow} \ } }$$ button to keep yourself updated with the amazing problems and notes posted by them. 
Don't forget, you can always join our Slack chat if you want to talk with our community members. Introduce yourself, share problems, have discussions, join the wiki collaboration parties too. Most of all, have fun. Oh, I just remembered a joke. Let me crack it right here: A man walks into a bar. Bartender: What's your order, sir? Man: 10 pints of vodka. Bartender: Now that's an Order. Regards, Moderator Team

Note by Rajdeep Dhingra 9 months, 3 weeks ago

## Comments

Thanks for mentioning my note in the newsletter. I'm very pleased. See $$\to$$ $$\ddot \smile$$ · 9 months, 3 weeks ago

All Hail #Moderation! · 9 months, 3 weeks ago

Congratulations to those who made their way to the WhoToFollow list! And it's amazing to know that more people are joining us and they are contributing wonderfully. Thanks to all! · 9 months, 3 weeks ago

Why was DogTex introduced? · 9 months, 3 weeks ago

To fool everyone on the day of April Fools! :P · 9 months, 3 weeks ago

No. DogTex is a valuable tool. If people keep thinking it's just a joke, the tool will be lost. Stop giggling. · 8 months, 3 weeks ago

Hmm ok. · 9 months, 3 weeks ago

Wow! Thanks for mentioning my note in the newsletter. I loved the newsletter! · 9 months, 3 weeks ago

Bartender: So, an order of Vodka Magnumtude, then? · 8 months, 3 weeks ago

How do I log into DogTex? · 9 months, 3 weeks ago

Bro, it was an April Fools trick, there is nothing like it now :P · 9 months, 3 weeks ago

:P :P :P :P haha · 9 months, 3 weeks ago

When will the results of JOMPC be declared? · 9 months, 3 weeks ago

Today by night. · 9 months, 3 weeks ago

Not declared. · 9 months, 3 weeks ago

What does the joke mean? · 9 months, 3 weeks ago

You know what the order of a number is? E.g., 21 is written as 2.1 $$\times {10}^1$$, so the order is 1. Since he said 10, he meant an Order. Get it? · 9 months, 3 weeks ago

Hmm, thanks. A joke is like a frog, you can dissect it but that kills the joke, like Randall said. · 9 months, 3 weeks ago

Correct. · 9 months, 3 weeks ago

Exactly, but who is Randall? · 9 months, 3 weeks ago

Nicely done #Moderation · 9 months, 3 weeks ago

Congrats to all those who made it to the who to follow list. Great wiki - Fermat's little theorem. And great note @Rajdeep Dhingra · 9 months, 3 weeks ago

Let's try DogTex on someone's profile! · 9 months, 3 weeks ago

Keep it going #moderation! · 9 months, 3 weeks ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25120505690574646, "perplexity": 6851.822393954255}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283689.98/warc/CC-MAIN-20170116095123-00101-ip-10-171-10-70.ec2.internal.warc.gz"}
https://calculator.academy/concentration-from-absorbance-calculator/
Enter the absorbance, path length, and extinction coefficient into the calculator to determine the concentration. ## Concentration from Absorbance Formula The following formula is used to calculate a concentration from absorbance. C = A / (L * e) • Where A is the absorbance • L is the path length • e is the extinction coefficient To calculate the concentration from absorbance, divide the absorbance by the product of the path length and extinction coefficient. ## Concentration Definition A concentration is defined as the total amount of a substance in a given space. ## Concentration from Absorbance Example How to calculate concentration from absorbance? 1. First, determine the absorbance. Calculate the absorbance. 2. Next, determine the path length. Measure the total path length of absorbance. 3. Next, determine the extinction coefficient. Calculate the extinction coefficient. 4. Finally, calculate the concentration. Calculate the concentration using the equation above. ## FAQ What is a concentration? A concentration is a measure of the total amount of substance contained is a certain area or volume.
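The formula above is the rearranged Beer-Lambert law (A = e·L·C solved for C). For readers who want to script the calculation, here is a minimal Python sketch; the example numbers are made up for illustration.

```python
# Concentration from absorbance via the rearranged Beer-Lambert law: C = A / (L * e).
def concentration_from_absorbance(absorbance: float, path_length_cm: float,
                                  extinction_coeff: float) -> float:
    """Return concentration in mol/L for a unitless absorbance, a path length in cm,
    and a molar extinction coefficient in L/(mol*cm)."""
    return absorbance / (path_length_cm * extinction_coeff)

# Example with made-up numbers: A = 0.75, L = 1.0 cm, e = 15000 L/(mol*cm)
print(concentration_from_absorbance(0.75, 1.0, 15000))   # 5e-05 mol/L
```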
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9843766689300537, "perplexity": 2538.3042576126422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711475.44/warc/CC-MAIN-20221209181231-20221209211231-00432.warc.gz"}
https://worldwidescience.org/topicpages/a/aciers+austenitiques+irradies.html
#### Sample records for aciers austenitiques irradies 1. The compatibility of various austenitic steels with molten sodium (1963); Compatibilite de divers aciers austenitiques avec le sodium fondu (1963) Energy Technology Data Exchange (ETDEWEB) Champeix, L; Sannier, J; Darras, R; Graff, W; Juste, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires 1963-07-01 Various techniques for studying corrosion by molten sodium have been developed and applied to the case of 18/10 austenitic steels. The results obtained are discussed as a function of various parameters: type of steel, temperature, oxygen content of the sodium, surface treatment, welds, mechanical strain. In general, these steels have an excellent resistance to sodium when the oxygen content is limited by a simple purification system of the 'cold trap' type, and when an attempt is made to avoid cavitation phenomena which are particularly dangerous, as is shown by the example given. (authors) [French] Des techniques d'etude de la corrosion par le sodium fondu en circulation ont ete mises au point et appliquees au cas des aciers austenitiques 18/10. Les resultats obtenus sont discutes en fonction de differents parametres: nuance de l'acier, temperature, teneur en oxygene du sodium, traitement de surface, soudure, contrainte mecanique. D'une maniere generale, ces aciers ont une excellente tenue dans le sodium lorsque, sa teneur en oxygene est limitee par un systeme simple de purification du type ''piege froid'' et lorsque l'on fait en sorte d'eviter les phenomenes de cavitation particulierement dangereux, comme il ressort d'un exemple cite. (auteurs) 2. Microstructural characterization and model of hardening for the irradiated austenitic stainless steels of the internals of pressurized water reactors; Caracterisation microstructurale et modelisation du durcissement des aciers austenitiques irradies des structures internes des reacteurs a eau pressurisee Energy Technology Data Exchange (ETDEWEB) Pokor, C 2003-07-01 The core internals of Pressurized Water Reactors (PWR) are composed of SA 304 stainless steel plates and CW 316 stainless steel bolts. These internals undergo a neutron flux at a temperature between 280 deg C and 380 deg C which modifies their mechanical properties. These modifications are due to the changes in the microstructure of these materials under irradiation which depend on flux, dose and irradiation temperature. We have studied, by Transmission Electron Microscopy, the microstructure of stainless steels SA 304, CW 316 and CW 316Ti irradiated in a mixed flux reactor (OSIRIS at 330 deg C between 0,8 dpa et 3,4 dpa) and in a fast breeder reactor at 330 deg C (BOR-60) up to doses of 40 dpa. Moreover, samples have been irradiated at 375 deg C in a fast breeder reactor (EBR-II) up to doses of 10 dpa. The microstructure of the irradiated stainless steels consists in faulted Frank dislocation loops in the [111] planes of austenitic, with a Burgers vector of [111]. It is possible to find some voids in the solution annealed samples irradiated at 375 deg C. The evolution of the dislocations loops and voids has been simulated with a 'cluster dynamic' model. The fit of the model parameters has allowed us to have a quantitative description of our experimental results. 
This description of the microstructure after irradiation was coupled together with a hardening model by Frank loops that has permitted us to make a quantitative description of the hardening of SA 304, CW 316 and CW 316Ti stainless steels after irradiation at a certain dose, flux and temperature. The irradiation doses studied grow up to 90 dpa, dose of the end of life of PWR internals. (author) 3. Reaction of uranium and plutonium carbides with austenitic steels; Reaction des carbures d'uranium et de plutonium avec des aciers austenitiques Energy Technology Data Exchange (ETDEWEB) Mouchnino, M [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires 1967-07-01 The reaction of uranium and plutonium carbides with austenitic steels has been studied between 650 and 1050 deg. C using UC, steel and (UPu)C, steel diffusion couples. The steels are of the type CN 18.10 with or without addition of molybdenum. The carbides used are hyper-stoichiometric. Tests were also carried out with UCTi, UCMo, UPuCTi and UPuCMo. Up to 800 deg. C no marked diffusion of carbon into stainless steel is observed. Between 800 and 900 deg. C the carbon produced by the decomposition of the higher carbides diffuses into the steel. Above 900 deg. C, decomposition of the monocarbide occurs according to a reaction which can be written schematically as: (U,PuC) + (Fe,Ni,Cr) {yields} (U,Pu) Fe{sub 2} + Cr{sub 23}C{sub 6}. Above 950 deg. C the behaviour of UPuCMo and that of the titanium (CN 18.12) and nickel (NC 38. 18) steels is observed to be very satisfactory. (author) [French] La reaction des carbures d'uranium et de plutonium avec des aciers austenitiques a ete etudiee entre 650 deg. C et 1050 deg. C a partir de couples de diffusion UC, acier et (UPu)C, acier. Les aciers sont du type CN 18.10 avec ou sans addition de molybdene. Les carbures utilises sont hyper-stoechiometriques. En outre on a fait des essais avec UCTi, UCMo, UPuCTi, UPuCMo. Jusqu'a 800 deg. C on ne detecte pas de diffusion sensible du carbone dans l'acier inoxydable. Entre 800 et 900 deg. C il y a diffusion dans l'acier du carbone provenant de la decomposition des carbures superieurs. A partir de 900 deg. C il y a decomposition du monocarbure selon une reaction que l'on ecrit schematiquement: (U,PuC) + (Fe, Ni, Cr) {yields} (U,Pu)Fe{sub 2} + Cr{sub 23}C{sub 6}. Nous notons a 950 deg. C le bon comportement de UPuCMo ainsi que celui des aciers au titane (CN 18. 12) et au nickel (NC 38.18). (auteur) 4. The electrochemical aspect of the corrosion of austenitic stainless steels, in nitric acid and in the presence of hexavalent chromium (1961); Aspect electrochimique de la corrosion d'aciers inoxydables austenitiques en milieu nitrique et en presence de chrome hexavalent (1961) Energy Technology Data Exchange (ETDEWEB) Coriou, H; Hure, J; Plante, G [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires 1961-07-01 The corrosion of austenitic stainless steels in boiling nitric acid markedly increases when the medium contains hexavalent chromium ions. Because of several redox phenomena, the potential of the steel generally changes in course of time. Measurements show a relation between the weight loss and the potential of specimens. Additions of Mn(VII) and Ce(IV) are compared with that of Cr(VI), and show that the relation is a general one. 
The attack of the metal in oxidizing media is largely intergranular, leading to exfoliation of the grains, although the steel studied is not sensitive to the classical Huey and Strauss tests. Also, even in the absence of any other oxidizing reaction, the current density observed when the steel is anodically polarized under potentiostatic conditions does not correspond to the actual weight loss of the metal. (authors) [French] La corrosion d'aciers inoxydables austenitiques en milieu nitrique bouillant augmente notablement quand le milieu contient des ions chrome a l'etat hexavalent. Par suite de divers phenomenes d'oxydo-reduction, le potentiel de l'acier evolue generalement au cours du temps. Les mesures effectuees permettent d'etablir une relation entre les pertes de poids et le potentiel des echantillons. L'addition de Mn(VI) et Ce(IV) est compare a celle de Cr(VI) et montre que la relation precedente s'applique de facon generale. L'attaque du metal en milieu oxydant est en grande partie due a une corrosion intergranulaire conduisant a un dechaussement des grains bien que l'acier etudie ne soit pas sensible aux tests classiques de Huey et de Strauss. Aussi, meme en l'absence de toute autre reaction d'oxydation l'intensite qu l'on observerait en soumettant l'acier a un potentiel anodique dans un montage potentiostatique ne correspondrait pas a la perte de poids reelle du metal. (auteurs)
6. Influence of plastic strain localization on the stress corrosion cracking of austenitic stainless steels; Influence de la localisation de la deformation plastique sur la CSC d'aciers austenitiques inoxydables Energy Technology Data Exchange (ETDEWEB) Cisse, S.; Tanguy, B. [CEA Saclay, DEN, SEMI, 91 - Gif-sur-Yvette (France); Andrieu, E.; Laffont, L.; Lafont, M.Ch. [Universite de Toulouse. CIRIMAT, UPS/INPT/CNRS, 31 - Toulouse (France) 2010-03-15 The authors present a research study of the role of strain localization on the irradiation-assisted stress corrosion cracking (IASCC) of vessel steel in a PWR-type (pressurized water reactor) environment. They study the interaction between plasticity and intergranular corrosion and/or oxidation mechanisms in austenitic stainless steels with respect to sublayer microstructure transformations. The study is performed on three austenitic stainless grades which have not been sensitized by any specific thermal treatment: the A286 structurally hardened steel, and the 304L and 316L austenitic stainless steels. 7. Apparatus of irradiation of steel test pieces in the Marcoule pile G 1; Dispositifs d'irradiation d'eprouvettes d'acier dans la pile G 1 de Marcoule Energy Technology Data Exchange (ETDEWEB) Marinot, R.; Wallet, Ph. [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires 1960-07-01 Test pieces of steel were irradiated in the reactor G1 at Marcoule, in converters replacing fuel elements, and in vertical channels in furnace-heated containers. The apparatus designed for this irradiation is described: containers, converter-rods, suspension fixtures and clamps, temperature control and measurement devices, lead castles and unloading set-ups. (author) [French] Des eprouvettes d'acier ont ete irradiees dans le reacteur G1 de Marcoule dans des convertisseurs mis a la place d'elements combustibles, et dans des canaux verticaux, en conteneurs chauffes par four. Nous decrivons l'appareillage etudie pour cette irradiation: conteneurs, barreaux-convertisseurs, dispositifs de suspension et d'amarrage, dispositifs de regulation et de mesure de temperature, chateaux de plomb et montages de defournement.
(auteur) 9. Influence of localized deformation on A-286 austenitic stainless steel stress corrosion cracking in PWR primary water; Influence de la localisation de la deformation sur la corrosion sous contrainte de l'acier inoxydable austenitique A-286 en milieu primaire des REP Energy Technology Data Exchange (ETDEWEB) Savoie, M 2007-01-15 Irradiation-assisted stress corrosion cracking (IASCC) of austenitic stainless steels is known to be a critical issue for structural components of nuclear reactor cores. The deformation of irradiated austenitic stainless steels is extremely heterogeneous and localized in deformation bands that may play a significant role in IASCC. In this study, an original approach is proposed to determine the influence of localized deformation on austenitic stainless steels SCC in simulated PWR primary water. The approach consists in (i) performing low cycle fatigue tests on austenitic stainless steel A-286 strengthened by {gamma}' precipitates Ni{sub 3}(Ti,Al) in order to shear and dissolve the precipitates in intense slip bands, leading to a localization of the deformation within and in (ii) assessing the influence of these {gamma}'-free localized deformation bands on A-286 SCC by means of comparative CERT tests performed on specimens with similar yield strength, containing or not {gamma}'-free localized deformation bands. Results show that strain localization significantly promotes A-286 SCC in simulated PWR primary water at 320 and 360 C. Moreover, A-286 is a precipitation-hardening austenitic stainless steel used for applications in light water reactors. The second objective of this work is to gain insights into the influence of heat treatment and metallurgical structure on A-286 SCC susceptibility in PWR primary water. The results obtained demonstrate a strong correlation between yield strength and SCC susceptibility of A-286 in PWR primary water at 320 and 360 C. (author) 10. Thermal fatigue cracking of austenitic stainless steels; Fissuration en fatigue thermique des aciers inoxydables austenitiques Energy Technology Data Exchange (ETDEWEB) Fissolo, A 2001-07-01 11. Reheat cracking in austenitic stainless steels; Fissuration en relaxation des aciers inoxydables austenitiques Energy Technology Data Exchange (ETDEWEB) Auzoux, Q.; Allais, L. [CEA Saclay, Dept. des Materiaux pour le Nucleaire, DMN, 91 - Gif sur Yvette (France); Pineau, A.; Gourgues, A.F. [Centre des Materiaux Pierre-Marie Fourt UMR CNRS 7633, 91 - Evry (France) 2002-07-01 Intergranular cracking can occur in heat-affected zones (HAZs) of austenitic stainless steel welded joints when reheated in the temperature range from 500 to 700 deg C. At this temperature, residual stresses due to welding relax by creep flow. HAZ may not sustain this small strain if its microstructure has been sufficiently altered during welding. In order to precise which particular microstructure alteration causes such an intergranular embrittlement, type 316L(N) HAZs were examined by transmission electron microscopy. A marked increase in the dislocation density, due to plastic strain during the welding process, was revealed, which caused an increase in Vickers hardness. Type 316L(N) HAZ were then simulated by the following thermal-mechanical process: annealing treatment and work hardening (pre-strain). Creep rupture tests on smooth specimens were also carried out at 600 deg C on both base metal and simulated HAZ. Pre-straining increased creep strength but reduced ductility. 
Slow strain rate tests on CT specimens confirmed this trend as well as did relaxation tests on CT specimens, which led to intergranular crack propagation in the pre-strained material only. Metallography and fractography showed no qualitative difference between base metal and HAZs in the creep cavitation around intergranular carbides. Although quantitative study of damage development is not achieved yet, experiments suggest that uniaxial creep strain smaller than one percent could lead to cavity nucleation when the material is pre-strained. Pre-strain as well as stress triaxiality reduce therefore creep ductility and enhance the reheat cracking risk. (authors) 12. Welding hot cracking in an austenitic stainless steel; Fissuration a chaud en soudage d'un acier inoxydable austenitique Energy Technology Data Exchange (ETDEWEB) Kerrouault, N 2001-07-01 The occurrence of hot cracking is linked to several conditions, in particular, the composition of the material and the local strains due to clambering. The aim of this study is to better analyse the implied mechanisms and to lead to a local thermomechanical criterion for hot cracking. The example studied is an AISI 321-type stainless steel (X10CrNiTi18-12) strongly prone to cracking. Two weldability tests are studied: - the first one consists in carrying out a fusion line by the TIG process on a thin sheet. In the case of the defect occurrence, the crack is longitudinal and follows the back of the molten bath. The influence of the operating conditions welding (speed, welding heat input, width test sample) is studied. - the second one is the Varestraint test. It is widely used to evaluate the sensitivity of a material to hot cracking. It consists in loading the material by bending during a fusion line by the TIG process and in characterising the defects quantity (length, number). Various thermal and mechanical instrumentation methods were used. The possibilities of a local instrumentation instrumentation being limited because of the melting, the experimental results were complemented by a numerical modelling whose aim is to simulate the thermomechanical evolution of the loading thanks to the finite element analysis code ABAQUS. First, the heat input for thermal simulation is set by the use of an inverse method in order to optimise the energy deposit mode during welding in the calculation. Then, the mechanical simulation needs the input of a constitutive law that fits the mechanical behaviour over a wide temperature range from ambient to melting temperature. Thus, a mechanical characterization is performed by selecting strain values and strain rates representative of what the material undergoes during the tests. The results come from tensile and compressive tests and allow to settle an elasto-visco-plastic constitutive law over temperatures up to liquidus. Once validated, the thermomechanical simulation brings new interpretations of the tests observations and instrumentation results. The comparison of experimental and numerical results make it possible to determine a thermomechanical welding hot cracking criterion during solidification. This criterion simultaneously considers mechanical (strain and strain rates threshold) and thermal (temperature range, thermal gradient) parameters which give the position and orientation of the first crack initiation. The criterion precision are in good agreement with the observations on the two considered weldability tests. (author) 13. 
Dynamical recrystallization of high purity austenitic stainless steels; Recristallisation dynamique d'aciers inoxydables austenitiques de haute purete Energy Technology Data Exchange (ETDEWEB) Gavard, L 2001-01-01 The aim of this work is to optimize the performance of structural materials. The elementary mechanisms (strain hardening and dynamic recovery, nucleation and growth of new grains) occurring during the hot working of metals and low stacking fault energy alloys have been studied for austenitic stainless steels. In particular, the influence of the main experimental parameters (temperature, strain rate, initial grain size, impurity content, deformation mode) on the process of discontinuous dynamic recrystallization has been studied. Alloys with compositions equal to that of the industrial 304L stainless steel have been fabricated from ultra-pure iron, chromium and nickel. Hot compression and torsion tests, covering a wide range of strains, strain rates and temperatures for two very different deformation modes, made it possible to determine the rheological characteristics (strain rate sensitivity, apparent activation energy) of the materials as well as to characterize their deformation microstructures by optical metallography and electron back-scattered diffraction. The influence of the initial grain size and of the purity of the material on the dynamic recrystallization kinetics has been determined. An analytical model for the determination of the apparent mobility of grain boundaries, a semi-analytical model for dynamic recrystallization and, finally, an analytical model for the steady state of dynamic recrystallization are proposed, as well as a new criterion for the transition between the grain refinement and grain growth regimes. (O.M.) 14. Sub-micron indent induced plastic deformation in copper and irradiated steel; Deformation plastique induite par l'essai d'indentation submicronique, dans le cuivre et l'acier 316L irradie Energy Technology Data Exchange (ETDEWEB) Robertson, Ch 1999-07-01 15. Study of a design criterion for irradiated 316L steel represented by a strain-hardened material; Etude d'un critere de dimensionnement d'un acier 316L irradie represente par un materiau ecroui Energy Technology Data Exchange (ETDEWEB) Gouin, H 1999-07-01 16. Les aciers inoxydables dans les fixations CERN Document Server CETIM 2010-01-01 This book, which summarizes several studies carried out by Cetim, gives an overview of the stainless steels used for fasteners. Contents: the relevant EN, ISO and ASTM standards; the symbolic designations; the grades and their mechanical properties; the different forms of corrosion and the methods for detecting them; the rules of the trade; implementation. The book includes several material data sheets and tables giving the equivalences between designations. 17. Crack growth in an austenitic stainless steel at high temperature; Propagation de fissure a haute temperature dans un acier inoxydable austenitique Energy Technology Data Exchange (ETDEWEB) Polvora, J.P 1998-12-31 This study deals with crack propagation at 650 deg C on an austenitic stainless steel referenced as Z2 CND 17-12 (316L(N)).
It is based on an experimental work concerning two different cracked specimens: CT specimens tested at 650 deg C in fatigue, creep and creep-fatigue with load controlled conditions (27 tests), tube specimens containing an internal circumferential crack tested in four points bending with displacement controlled conditions (10 tests). Using the fracture mechanics tools (K, J and C* parameters), the purpose here is to construct a methodology of calculation in order to predict the evolution of a crack with time for each loading condition using a fracture mechanics global approach. For both specimen types, crack growth is monitored by using a specific potential drop technique. In continuous fatigue, a material Paris law at 650 deg C is used to correlate crack growth rate with the stress intensity factor range corrected with a factor U(R) in order to take into account the effects of crack closure and loading ratio R. In pure creep on CT specimens, crack growth rate is correlated to the evolution of the C* parameter (evaluated experimentally) which can be estimated numerically with FEM calculations and analytically by using a simplified method based on a reference stress approach. A modeling of creep fatigue growth rate is obtained from a simple summation of the fatigue contribution and the creep contribution to the total crack growth. Good results are obtained when C* parameter is evaluated from the simplified expression C*{sub s}. Concerning the tube specimens tested in 4 point bending conditions, a simulation based on the actual A 16 French guide procedure proposed at CEA. (authors) 104 refs. 18. Stress relief cracking by relaxation in austenitic stainless steels welded junctions; Fissuration differee par relaxation des jonctions soudes en aciers inoxydables austenitiques Energy Technology Data Exchange (ETDEWEB) Allais, L.; Auzoux, Q.; Chabaud-Reytier, M 2003-07-01 During service at high temperature (450 to 650 C), austenitic stainless steels are well known to present a risk of cracking near the welded junctions for times under the service life. This intergranular cracking in affected zones has been identified on titanium stabilized steels and is known as relief cracking by relaxation or reheat cracking. In order to control this cracking of welded junctions on titanium stabilized stainless steel AISI 321, a simulation of the affected zone has been realized. The results have been extended to non stabilized steels. (A.L.B.) 19. Study of structural modifications induced by ion implantation in austenitic stainless steel; Etude des modifications structurales induites par implantation ionique dans les aciers austenitiques Energy Technology Data Exchange (ETDEWEB) Dudognon, J 2006-12-15 Ion implantation in steels, although largely used to improve the properties of use, involves structural modifications of the surface layer, which remain still prone to controversies. Within this context, various elements (N, Ar, Cr, Mo, Ag, Xe and Pb) were implanted (with energies varying from 28 to 280 keV) in a 316LVM austenitic stainless steel. The implanted layer has a thickness limited to 80 nm and a maximum implanted element concentration lower than 10 % at. The analysis of the implanted layer by grazing incidence X ray diffraction highlights deformations of austenite lines, appearance of ferrite and amorphization of the layer. Ferritic phase which appears at the grain boundaries, whatever the implanted element, is formed above a given 'threshold' of energy (produced of fluency by the energy of an ion). 
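To make the crack-growth correlations summarized in record 17 above more concrete (a Paris law written with an effective stress intensity range U(R)*dK for fatigue, a power-law correlation with the C* parameter for creep, and a simple summation of the two contributions for creep-fatigue), here is a minimal, hedged Python sketch. The coefficients and the form of U(R) are illustrative assumptions only, not the values identified in the cited study.

def fatigue_growth_rate(delta_k_mpa_sqrt_m, load_ratio,
                        c_paris=1.0e-11, m_paris=3.0):
    """Fatigue crack growth per cycle (m/cycle) from a Paris law applied to an
    effective stress intensity range: da/dN = C * (U(R) * dK)**m.
    U(R) is a simple closure/load-ratio correction whose form is assumed."""
    u_r = min(1.0, 0.5 + 0.4 * load_ratio)   # illustrative correction only
    return c_paris * (u_r * delta_k_mpa_sqrt_m) ** m_paris

def creep_growth_rate(c_star_mj_per_m2_h, a_creep=2.0e-3, q_creep=0.8):
    """Creep crack growth per hour (m/h) correlated with C*: da/dt = A * C***q.
    A and q are placeholder constants."""
    return a_creep * c_star_mj_per_m2_h ** q_creep

def creep_fatigue_growth_per_cycle(delta_k, load_ratio, c_star, hold_time_h):
    """Simple summation of the fatigue and creep contributions over one cycle,
    as described qualitatively in the abstract."""
    return (fatigue_growth_rate(delta_k, load_ratio)
            + creep_growth_rate(c_star) * hold_time_h)

print(creep_fatigue_growth_per_cycle(delta_k=20.0, load_ratio=0.1,
                                     c_star=1.0e-3, hold_time_h=10.0))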
The formation of ferrite as well as the amorphization of the implanted layer depends only on energy. In order to understand the deformations of austenite diffraction lines, a simulation model of these lines was elaborated. The model correctly describes the observed deformations (broadening, shift, splitting) with the assumption that the expansion of the austenitic lattice is due to the presence of implanted element and is proportional to the element concentration through a coefficient k'. This coefficient only depends on the element and varies linearly with its radius. (author) 20. Study of stress relief cracking in titanium stabilized austenitic stainless steel; Etude de la fissuration differee par relaxation d'un acier inoxydable austenitique stabilise au titane Energy Technology Data Exchange (ETDEWEB) Chabaud-Reytier, M 1999-07-01 The heat affected zone (HAZ) of titanium stabilised austenitic stainless steel welds (AISI 321) may exhibit a serious form of intercrystalline cracking during service at high temperature. This type of cracking, called 'stress relief cracking', is known to be due to work hardening but also to ageing: a fine and abundant intragranular Ti(C,N) precipitation appears near the fusion line and modifies the mechanical behaviour of the HAZ. This study aims to better know the accused mechanism and to succeed in estimating the risk of such cracking in welded junctions of 321 stainless steel. To analyse this embrittlement mechanism, and to assess the lifetime of real components, different HAZ are simulated by heat treatments applied to the base material which is submitted to various cold rolling and ageing conditions in order to reproduce the HAZ microstructure. Then, we study the effects of work hardening and ageing on the titanium carbide precipitation, on the mechanical (tensile and creep) behaviour of the resulting material and on its stress relief cracking sensitivity. It is shown that work hardening is the main parameter of the mechanism and that ageing do not favour crack initiation although it leads to titanium carbide precipitation. The role of this precipitation is also discussed. Moreover, a creep damage model is identified by a local approach to fracture. Materials sensitive to stress relief cracking are selected. Then, creep tests are carried out on notched bars in order to quantify the intergranular damage of these different materials; afterwards, these measurements are combined with calculated mechanical fields. Finally, it is shown that the model gives good results to assess crack initiation for a compact tension (CT) specimen during relaxation tests, as well as for a notched tubular specimen tested at 600 deg. C under a steady torque. (author) 1. Crack initiation at high temperature on an austenitic stainless steel; Amorcage de fissure a haute temperature dans un acier inoxydable austenitique Energy Technology Data Exchange (ETDEWEB) Laiarinandrasana, L 1994-11-25 The study deals with crack initiation at 600 and 650 degrees Celsius, on an austenitic stainless steel referenced by Z2 CND 17 12. The behaviour laws of the studied plate were update in comparison with existing data. Forty tests were carried out on CT specimens, with continuous fatigue with load or displacement controlled, pure creep, pure relaxation, creep-fatigue and creep-relaxation loadings. The practical initiation definition corresponds to a small crack growth of about the grain size. 
The time necessary for the crack to initiate is predicted with fracture mechanics global and local approaches, with the helps of microstructural observations and finite elements results. An identification of a Parislaw for continuous cyclic loading and of a unique correlation between the initiation time and C{sup *}{sub k} for creep tests was established. For the local approach, crack initiation by creep can be interpreted as the reaching of a critical damage level, by using a damage incremental rule. For creep-fatigue tests, crack growth rates at initiation are greater than those of Parislaw for continuous fatigue. A calculation of a transition time between elastic-plastic and creep domains shows that crack initiation can be interpreted whether by providing Parislaw with an acceleration term when the dwell period is less than the transition time, or by calculating a creep contribution which relies on C{sup *}{sub k} parameter when the dwell period and/or the initiation times are greater than the transition time. Creep relaxation tests present crack growth rates at initiation which are less than those for equivalent creep-fatigue tests. These crack growth rates when increasing hold time, but also when temperature decreases. Though, for hold times which are important enough and at lower temperature, there is no effect of the dwell period insofar as crack growth rate is equal to continuous fatigue Paris law predicted ones. (Abstract Truncated) 2. Soudage des aciers pour application mécanique CERN Document Server Deveaux, Dominique 2016-01-01 Ce guide détermine les bonnes pratiques pour comprendre les risques d’une forme d’assemblage multimatériaux : celui par soudage de nuances à forte teneur en carbone avec des éléments en acier de construction. Dans un premier temps, le rapport passe en revue l’examen des avaries sur des assemblages soudés pour l’application mécanique mettant en cause les aciers. Fissuration par fatigue, rupture fragile, rupture ductile, fissuration à chaud ou à froid sont autant de causes qui seront analysées. Dans un deuxième temps, il se concentre sur la conception des joints soudés. Du choix des nuances à la tenue vis-à-vis de la rupture fragile en passant par l’analyse en fatigue des assemblages soudés, c’est l’ensemble de la problématique qui est pris en compte. 3. Iodine-131 production by a dry method using reactor-irradiated elementary tellurium. Part 1 - Conditions for obtaining iodine emanation and its capture. Part 2 - comparative study of preparation conditions using Pyrex, stainless steel and alumina equipment. Part 3 - production on a semi-industrial scale; Production de l'iode 131 par voie seche a partir de tellure elementaire irradie a la pile. 1ere partie - Etudes des conditions pour obtenir l'emanation de l'iode et le capter. 2eme partie - Etude comparee des conditions pour effectuer cette preparation avec des appareils en Pyrex, en acier inoxydable et en alumine. 3eme partie - production a l'echelle semi-industrielle Energy Technology Data Exchange (ETDEWEB) Bardy, A; Beydon, J; Murthy, T S; Doyen, J B; Lefrancois, J [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires 1967-04-15 A previous report has described how iodine 131 can be prepared from elementary tellurium by a dry method which consists in treating irradiated tellurium at 400 degrees in argon. The possibility of carrying out this treatment in a stainless steel or alumina apparatus has been considered. 
The behavior of gaseous iodine 131 towards these materials has thus been studied. If the adsorption of iodine on stainless steel is superficial, desorption is rapid at 250 degrees in oxygen or 400 degrees in argon. If the adsorption is chemical in nature, it becomes necessary to heat to higher temperatures. Adsorption of iodine on alumina is very weak and the iodine can be desorbed rapidly. With these materials, tests have been carried out on 300 g of tellurium containing 41 curies of iodine 131; the yields were very satisfactory (98 per cent). (author) [French] La methode de preparation de l'iode 131 par voie seche a partir de tellure elementaire decrite dans un precedent rapport consiste a traiter le tellure irradie a 400 degres sous argon. Nous avons examine la possibilite d'effectuer ce traitement dans un appareil en acier inoxydable ou en alumine. Le comportement de l'iode 131 gazeux vis-a-vis de ces materiaux a donc ete etudie. Si l'adsorption de l'iode sur l'acier inoxydable est superficielle, la desorption est rapide a 250 degres sous oxygene ou 400 degres sous argon. Si la fixation est de nature chimique, il est necessaire de chauffer a des temperatures plus elevees. L'adsorption de l'iode sur l'alumine est tres faible et l'iode peut etre desorbe rapidement. En employant ces materiaux, des essais ont ete effectues sur 300 g de tellure contenant 41 curies d'iode 131 avec un bon rendement (98 pour cent). (auteur) 4. Formation mechanism of solute clusters under neutron irradiation in ferritic model alloys and in a reactor pressure vessel steel: clusters of defects; Mecanismes de fragilisation sous irradiation aux neutrons d'alliages modeles ferritiques et d'un acier de cuve: amas de defauts Energy Technology Data Exchange (ETDEWEB) Meslin-Chiffon, E 2007-11-15 The embrittlement of reactor pressure vessel (RPV) steels under irradiation is partly due to the formation of point defects (PD) and solute clusters. The aim of this work was to gain more insight into the formation mechanisms of solute clusters in low copper ([Cu] = 0.1 wt%) FeCu and FeCuMnNi model alloys, in a copper free FeMnNi model alloy and in a low copper French RPV steel (16MND5). These materials were neutron-irradiated around 300 C in a test reactor. Solute clusters were characterized by tomographic atom probe whereas PD clusters were simulated with a rate theory numerical code calibrated under cascade damage conditions using transmission electron microscopy analysis. The confrontation between experiments and simulation reveals that a heterogeneous irradiation-induced solute precipitation/segregation probably occurs on PD clusters. (author) 5. Atom probe study of the microstructural evolution induced by irradiation in Fe-Cu ferritic alloys and pressure vessel steels; Etude a la sonde atomique de l'evolution microstructurale sous irradiation d'alliages ferritiques Fe-Cu et d'aciers de cuve REP Energy Technology Data Exchange (ETDEWEB) Pareige, P 1996-04-01 Pressure vessel steels used in pressurized water reactors are low alloyed ferritic steels. They may be prone to hardening and embrittlement under neutron irradiation. The changes in mechanical properties are generally supposed to result from the formation of point defects, dislocation loops, voids and/or copper rich clusters. However, the real nature of the irradiation-induced damage in these steels has not been clearly identified yet.
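As a side illustration of the kind of rate-theory description mentioned in record 4 above (point-defect populations evolved under a constant damage rate, with recombination and absorption at sinks), the sketch below integrates a deliberately simplified pair of vacancy/interstitial balance equations. The generation rate, recombination coefficient and sink strengths are invented round numbers, not parameters of the cited code.

def evolve_point_defects(dose_rate=1e-7, recomb=1e2,
                         sink_i=1e-2, sink_v=1e-3,
                         dt=1.0, steps=200000):
    """Explicit Euler integration of a minimal rate-theory model:
        dCi/dt = G - R*Ci*Cv - ki*Ci
        dCv/dt = G - R*Ci*Cv - kv*Cv
    Concentrations are in arbitrary (atomic fraction) units."""
    ci = cv = 0.0
    for _ in range(steps):
        recombination = recomb * ci * cv
        ci += dt * (dose_rate - recombination - sink_i * ci)
        cv += dt * (dose_rate - recombination - sink_v * cv)
    return ci, cv

ci, cv = evolve_point_defects()
print(f"quasi-steady-state Ci = {ci:.3e}, Cv = {cv:.3e}")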
In order to improve our vision of this damage, we have characterized the microstructure of several steels and model alloys irradiated with electrons and neutrons. The study was performed with conventional and tomographic atom probes. The well known importance of the effects of copper upon pressure vessel steel embrittlement has led us to study Fe-Cu binary alloys. We have considered chemical aging as well as aging under electron and neutron irradiations. The resulting effects depend on whether electron or neutron irradiations ar used for thus. We carried out both kinds of irradiation concurrently so as to compare their effects. We have more particularly considered alloys with a low copper supersaturation representative of that met with the French vessel alloys (0.1% Cu). Then, we have examined steels used on French nuclear reactor pressure vessels. To characterize the microstructure of CHOOZ A steel and its evolution when exposed to neutrons, we have studied samples from the reactor surveillance program. The results achieved, especially the characterization of neutron-induced defects have been compared with those for another steel from the surveillance program of Dampierre 2. All the experiment results obtained on model and industrial steels have allowed us to consider an explanation of the way how the defects appear and grow, and to propose reasons for their influence upon steel embrittlement. (author). 3 appends. 6. RPV-1: a first virtual reactor to simulate irradiation effects in light water reactor pressure vessel steels; RPV-1: un premier reacteur virtuel pour simuler les effets d'irradiation dans les aciers de cuve des reacteurs a eau legere Energy Technology Data Exchange (ETDEWEB) Jumel, St 2005-01-15 The presented work was aimed at building a first VTR (virtual test reactor) to simulate irradiation effects in pressure vessel steels of nuclear reactor. It mainly consisted in: - modeling the formation of the irradiation induced damage in such steels, as well as their plasticity behavior - selecting codes and models to carry out the simulations of the involved mechanisms. Since the main focus was to build a first tool (rather than a perfect tool), it was decided to use, as much as possible, existing codes and models in spite of their imperfections. - developing and parameterizing two missing codes: INCAS and DUPAIR. - proposing an architecture to link the selected codes and models. - constructing and validating the tool. RPV-1 is made of five codes and two databases which are linked up so as to receive, treat and/or transmit data. A user friendly Python interface facilitates the running of the simulations and the visualization of the results. RPV-1 relies on many simplifications and approximations and has to be considered as a prototype aimed at clearing the way. According to the functionalities targeted for RPV-1, the main weakness is a bad Ni and Mn sensitivity. However, the tool can already be used for many applications (understanding of experimental results, assessment of effects of material and irradiation conditions,....). (O.M.) 7. Some problems on the aqueous corrosion of structural materials in nuclear engineering; Problemes de corrosion aqueuse de materiaux de structure dans les constructions nucleaires Energy Technology Data Exchange (ETDEWEB) Coriou, H; Grall, L [Commissariat a l' Energie Atomique, Saclay (France). 
Centre d' Etudes Nucleaires 1964-07-01 The purpose of this report is to give a comprehensive view of some aqueous corrosion studies which have been carried out with various materials for utilization either in nuclear reactors or in irradiated fuel treatment plants. The various subjects are listed below. Austenitic Fe-Ni-Cr alloys: the behaviour of austenitic Fe-Ni-Cr alloys in nitric medium and in the presence of hexavalent chromium; the stress corrosion of austenitic alloys in alkaline media at high temperatures; the stress corrosion of austenitic Fe-Ni-Cr alloys in 650 C steam. Ferritic steels: corrosion of low alloy steels in water at 25 and 360 C; zirconium alloys; the behaviour of ultrapure zirconium in water and steam at high temperature. (authors) [French] On presente un ensemble d'etudes de corrosion en milieu aqueux effectuees sur des materiaux utilises, soit dans la construction des reacteurs soit pour la realisation des usines de traitement des combustibles irradies. Les differents sujets etudies sont les suivants. Les alliages austenitiques Fer-Nickel-Chrome: comportement d'alliages austenitiques fer-nickel-chrome en milieu nitrique en presence de chrome hexavalent; Corrosion sous contrainte d'alliages austenitiques dans les milieux alcalins a haute temperature; Corrosion sous contrainte dans la vapeur a 650 C d'alliages austenitiques fer-nickel-chrome. Les aciers ferritiques; Corrosion d'aciers faiblement allies dans l'eau a 25 et 360 C; le zirconium et ses alliages; Comportement du zirconium tres pur dans l'eau et la vapeur a haute temperature. (auteurs) 8. Local approach: fracture at high temperature in an austenitic stainless steel submitted to thermomechanical loadings. Calculations and experimental validations; Approche locale: fissuration a haute temperature dans un acier inoxydable austenitique sous chargements thermomecaniques. Simulations numeriques et validations experimentales Energy Technology Data Exchange (ETDEWEB) Poquillon, D 1997-10-01 Usually, for the integrity assessment of defective components, well established rules are used: global approach to fracture. A more fundamental way to deal with these problems is based on the local approach to fracture. In this study, we choose this way and we perform numerical simulations of intergranular crack initiation and intergranular crack propagation. This type of damage can be find in components of fast breeder reactors in 316 L austenitic stainless steel which operate at high temperatures. This study deals with methods coupling partly the behaviour and the damage for crack growth in specimens submitted to various thermomechanical loadings. A new numerical method based on finite element computations and a damage model relying on quantitative observations of grain boundary damage is proposed. Numerical results of crack initiation and growth are compared with a number of experimental data obtained in previous studies. Creep and creep-fatigue crack growth are studied. Various specimen geometries are considered: compact Tension Specimens and axisymmetric notched bars tested under isothermal (600 deg C) conditions and tubular structures containing a circumferential notch tested under thermal shock. Adaptative re-meshing technique and/or node release technique are used and compared. In order to broaden our knowledge on stress triaxiality effects on creep intergranular damage, new experiments are defined and conducted on sharply notched tubular specimens in torsion. 
These isothermal (600 deg C) Mode II creep tests reveal severe intergranular damage and creep crack initiation. Calculated damage fields at the crack tip are compared with the experimental observations. The good agreement between calculations and experimental data shows the damage criterion used can improve the accuracy of life prediction of components submitted to intergranular creep damage. (author) 200 refs. 9. Damage study of an austenitic stainless steel in high cycle multiaxial fatigue regime;Etude de l'endommagement d'un acier inoxydable austenitique par fatigue multiaxiale a grand nombre de cycles Energy Technology Data Exchange (ETDEWEB) Poncelet, M. [CEA Saclay, DEN, SRMA, 91 - Gif-sur-Yvette (France); Barbier, G.; Raka, B.; Vincent, L.; Desmorat, R. [LMT Cachan, ENS Cachan/CNRS/UPMC/PRES Univ. Sud Paris, 94 - Cachan (France); Barbier, G. [EDF R and D / LaMSID, 92 - Clamart (France) 2010-02-15 10. Reheat cracking of austenitic stainless steels - pre-strain effect on intergranular damage; Fissuration en relaxation des aciers inoxydables austenitiques - influence de l'ecrouissage sur l'endommagement intergranulaire Energy Technology Data Exchange (ETDEWEB) Auzoux, Q 2004-01-01 Welding process induces strain in 316 stainless steel affected zones. Their microstructure was reproduce by rolling of three different steels (316L, 316L(N) et 316H). Traction, creep and relaxation tests were performed at 550 deg C and 600 deg C on smooth, notched and pre-cracked specimens. Pre-strain by rolling increases the hardness and the creep resistance because of the high dislocation density but decreases ductility because of the fast development of intergranular damage. This embrittlement leads to crack propagation during relaxation tests on pre-strained steels without distinction in respect to their carbon or nitrogen content. A new intergranular damage model was built using local micro-cracks measurements and finite elements analysis. Pre-strain effect and stress triaxiality ratio effect are reproduced by the modelling so that the reheat cracking risk near welds can now be estimated. (author) 11. Low cycle fatigue: high cycle fatigue damage accumulation in a 304L austenitic stainless steel; Endommagement et cumul de dommage en fatigue dans le domaine de l'endurance limitee d'un acier inoxydable austenitique 304L Energy Technology Data Exchange (ETDEWEB) Lehericy, Y 2007-05-15 The aim of this study was to evaluate the consequences of a Low Cycle Fatigue pre-damage on the subsequent fatigue limit of a 304L stainless steel. The effects of hardening and severe roughness (grinding) have also been investigated. In a first set of tests, the evolution of the surface damage induced by the different LCF pre-cycling was characterized. This has permitted to identify mechanisms and kinetics of damage in the plastic domain for different surface conditions. Then, pre-damaged samples were tested in the High Cycle Fatigue domain in order to establish the fatigue limits associated with each level of pre-damage. Results evidence that, in the case of polished samples, an important number of cycles is required to initiate surface cracks ant then to affect the fatigue limit of the material but, in the case of ground samples, a few number of cycles is sufficient to initiate cracks and to critically decrease the fatigue limit. The fatigue limit of pre-damaged samples can be estimated using the stress intensity factor threshold. 
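The last sentence of record 11 above estimates the residual fatigue limit of pre-damaged samples from the stress intensity factor threshold. A hedged sketch of that type of estimate is given below, using the generic long-crack relation dK = Y * d_sigma * sqrt(pi * a); the threshold value and geometry factor are illustrative assumptions, not those identified for the 304L steel of the study.

import math

def fatigue_limit_from_threshold(crack_depth_m, delta_k_th_mpa_sqrt_m=6.0,
                                 geometry_factor=1.12):
    """Stress range (MPa) below which a pre-existing surface crack of the
    given depth is not expected to propagate:
        delta_sigma = dK_th / (Y * sqrt(pi * a))."""
    return delta_k_th_mpa_sqrt_m / (geometry_factor
                                    * math.sqrt(math.pi * crack_depth_m))

for depth_um in (50, 100, 200, 500):
    limit = fatigue_limit_from_threshold(depth_um * 1e-6)
    print(f"crack depth {depth_um:4d} um -> estimated fatigue limit "
          f"~ {limit:5.0f} MPa (stress range)")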
Moreover, this detrimental effect of severe surface conditions is enhanced when fatigue tests are performed under a positive mean stress. (author) 12. Modelling of microstructural creep damage in welded joints of 316L stainless steel; Modelisation de l'endommagement a haute temperature dans le metal d'apport des joints soudes d'acier inoxydable austenitique Energy Technology Data Exchange (ETDEWEB) Bouche, G 2000-07-01 Welded joints of 316L stainless steel under service conditions at elevated temperature are known to be preferential sites of creep damage, as compared to the base material. This damage results in the formation of cavities and the development of creep cracks which can lead to a premature failure of welded components. The complex two-phase microstructure of 316L welds was simulated by manually filling a mould with longitudinally deposited weld beads. The moulded material was then aged for 2000 hours at 600 deg. C. High resolution Scanning Electron Microscopy was used extensively to examine the microstructure of the simulated material before and after ageing. Smooth and notched creep specimens were cut from the mould and tested at 600 deg. C under various stress levels. A comparison of the lifetime versus nominal stress curves for the base and welded materials shows a greater sensitivity of the welded material to creep. Observation and EBSD analysis show that damage is preferentially located along the austenite grain boundaries. The stress and strain fields in the notched specimens were calculated by the finite element method. A correlation of these fields with the observed damage was made in order to propose a predictive law relating the creep damage to the mechanical conditions applied locally. Further mechanical tests and simulations on CT specimens and mode II tubular specimens allowed the model to be validated under various multiaxial loading conditions. (author) 13. STUDY OF THE MECHANICAL BEHAVIOUR OF HYPEREUTECTOID STEELS IN THE DYNAMIC INTERCRITICAL TEMPERATURE RANGE; ETUDE DU COMPORTEMENT MECANIQUE DES ACIERS HYPEREUTECTOIDES DANS LE DOMAINE DE TEMPERATURE INTERCRITIQUE DYNAMIQUE Directory of Open Access Journals (Sweden) R GHERIANI 2001-06-01 Full Text Available The study presented here contributes to a better understanding of the influence of strain rate and temperature on the mechanical behaviour of hypereutectoid steels in the dynamic intercritical temperature range. The experimental torsion curves are of particular interest in that they characterize the mechanical behaviour of 100C6 steel; in addition, they provide valuable information on the maximum deformation capacity of the alloy. The torsion tests, carried out to specimen fracture, allow the materials to be ranked according to their ductility. The results obtained on 100C6 steel clarify the warm-working behaviour of this steel. Under dynamic conditions, hypoeutectoid steels exhibit a high deformation capacity in the temperature range between Ac1 and Ac3, resulting from the evolution of the alpha and gamma phases during deformation and from their softening mechanisms. This raised the question: what is the behaviour of a hypereutectoid steel, which therefore has no two-phase (alpha + gamma) domain at equilibrium, when it is deformed at a temperature above Ac1? 14.
Influence of the steel grade of railway wheels on rolling contact fatigue; Influence de la nuance d'acier des roues ferroviaires en Fatigue de Contact de Roulement Directory of Open Access Journals (Sweden) Langueh Amavi 2013-11-01 Full Text Available This article proposes a methodology for predicting the service life of railway wheels which takes local loading into account through the actual geometry of the wheel/rail contact and the inelastic behaviour of the material (steel), and which integrates a fatigue criterion. The industrial context is to study the influence of the steel grade on wheel durability. The main steps of the approach are the identification of the material behaviour, the determination of the stabilized stress-strain fields and the application of a fatigue criterion. A steady-state algorithm is used to determine the stresses and strains under the operating conditions. Three steels were studied by analysing their mechanical responses, their shakedown limits and their average service lives. 15. CCT DIAGRAM AND QUENCHED AND TEMPERED STRUCTURES OF A LOW-ALLOY MANGANESE-CHROMIUM STEEL; DIAGRAMME TRC ET STRUCTURES DE TREMPE ET DE REVENU D'UN ACIER FAIBLEMENT ALLIE AU MANGANESE-CHROME Directory of Open Access Journals (Sweden) Z LAROUK 2008-06-01 Full Text Available This study concerns a low-alloy manganese-chromium steel. The main use of this steel is the manufacture of seamless tubes for oil drilling and oil transport. The heat-treated tubes must withstand large tensile and compressive stresses without risk of fracture. Water-quenched tubes suffer from a structural heterogeneity which results in a decrease in hardness at the inner surface. The aim of this study is to determine the structures of the steel after different types of treatment, during continuous cooling under the industrial conditions of quenching (930°C) and tempering (670°C). The results show that the critical quenching rate is 50°C/s and that, to avoid ferrite formation, a rate greater than 12°C/s is necessary. This steel has good hardenability (11 mm). The decrease in hardness of the tempered martensite becomes marked when the temperature reaches 600°C. 16. LOCALIZED CORROSION OF API 5L-X52 STEELS OF THE ASR/MP LINE IN ALGERIAN SOIL; CORROSION LOCALISEE DES ACIERS API 5L-X52 DE LA LIGNE ASR/MP SOLLICITE EN SOL ALGERIEN OpenAIRE BENDJEBBOUR, Amina 2011-01-01 The study deals with localized corrosion failures in API grade X52 steels of the ASR/MP pipeline carrying petroleum products in corrosive soil, after failure of the anticorrosion (AC) protection systems based on hydrocarbon binders or on a multilayer system supplemented by cathodic protection. 17. ISOTHERMAL TRANSFORMATION OF A HIGH-STRENGTH STEEL 40 CDV 13; TRANSFORMATION ISOTHERME D'UN ACIER A HAUTE RESISTANCE 40 CDV 13 Directory of Open Access Journals (Sweden) A BOUTEFNOUCHET 2001-06-01 Full Text Available The dilatometric study of the isothermal behaviour of the austenite of a ternary high-strength steel of grade 40 CDV 13 allowed its TTT diagram to be established. Austenitization was carried out for 10 minutes at 950°C (the temperature used in industry). The holding temperatures lie between Ac1 = 810°C and Ms = 310°C. Two domains of isothermal austenite transformation can be distinguished in this TTT diagram: domain I (625°C ≤ θiso < Ac1 = 810°C), in which the austenite transforms into ferrite and pearlite, and domain II (325°C ≤ θiso ≤ 475°C), where the austenite transforms into bainite or pro-bainitic ferrite.
At all isothermal holding temperatures, these transformations are preceded by carbide precipitation. In addition, the two austenite transformation domains are separated by a wide zone of austenite stability between 500°C and 600°C. A detailed analysis of the dilatometric curves recorded during the isothermal hold and during the final cooling to room temperature allowed the phases involved in these isothermal transformations of the austenite to be determined qualitatively and quantitatively. 18. Prediction of the hardness profile of laser heat-treated AISI 4340 steel; Prediction du profil de durete de l'acier AISI 4340 traite thermiquement au laser Science.gov (United States) Maamri, Ilyes Surface heat treatments are processes intended to give the core and the surface of mechanical parts different properties. They improve wear and fatigue resistance by hardening the critical surface zones through short, localized heat inputs. Among the processes that stand out for their surface power density, laser surface heat treatment offers fast, localized and precise thermal cycles while limiting the risk of unwanted distortion. The mechanical properties of the hardened zone obtained by this process depend on the physico-chemical properties of the material to be treated and on several process parameters. To exploit the possibilities offered by this process properly, strategies must be developed to control and adjust the parameters so as to produce the desired characteristics of the hardened surface accurately, without resorting to the classical, long and costly trial-and-error approach. The objective of the project is therefore to develop models to predict the hardness profile in the laser heat treatment of AISI 4340 steel parts. To understand the behaviour of the process and evaluate the effects of the various parameters on treatment quality, a sensitivity study was carried out, based on a structured design of experiments combined with proven statistical analysis techniques. The results of this study identified the most relevant variables to use for modelling. Following this analysis, and in order to build a first model, two modelling techniques were considered: multiple regression and neural networks. Both techniques led to models of acceptable quality with an accuracy of about 90%. To improve the performance of the neural-network-based models, two 19. Influence of cathodic protection on the electrochemical behaviour of corrosion product layers on carbon steel; Influence de la protection cathodique sur le comportement électrochimique des couches de corrosion d'acier au carbone OpenAIRE Tran Tron Long, Mai; Sutter, Eliane; Tribollet, Bernard 2013-01-01 11 pages, www.mattech-journal.org; International audience; The electrochemical properties of the corrosion deposit layer formed on the surface of E24 steel coupons immersed for several years in seawater under different cathodic protection conditions were studied by electrochemical impedance spectroscopy and by global and local current measurements. The results show that the layer of deposits, whose behaviour is determined by its sub-... 20. Evolution of the yield surface under biaxial loading in a duplex stainless steel; Évolution de la surface de plasticité sous chargement biaxial dans un acier inoxydable duplex Science.gov (United States) Aubin, V.; Quaegebeur, P.; Degallaix, S.
2002-12-01 We propose a methodology for the automatic measurement of the yield surface during biaxial cyclic loading. The yield surface is measured point by point with a small plastic strain offset (2 x 10^{-5}) and optimized measurement parameters. The method is applied to a duplex stainless steel subjected to a non-proportional loading path. The results show a distortion and a translation of the yield surface without any change in size. The method presented also makes it possible to verify the normality of the plastic flow rate with respect to the yield surface. 1. Influence of the chemical composition and microstructure on the hydrogen outgassing of austenitic stainless steels intended for ultra-high vacuum; Influence de la composition chimique et de la microstructure sur le dégazage de l'hydrogène des aciers inoxydables austénitiques destinés à l'ultravide CERN Document Server Reinert, Marie-Pierre In metallic ultra-high-vacuum installations, hydrogen is the main constituent of the residual atmosphere. After a vacuum bake-out, the outgassing flux of an austenitic stainless steel sheet, a material frequently used in vacuum technology, is typically a few 10^{-12} Torr.l/cm2.s and consists mainly of hydrogen. In this study, an ultra-high-vacuum thermal desorption apparatus was designed and developed to study the adsorption, diffusion and trapping of residual hydrogen in austenitic stainless steels. Several steels were studied: 316L steel (produced by three different routes), 316LN steel and other steels stabilized with titanium or niobium. The microstructure and the oxide layer of these steels were characterized in the as-received state and during the thermal desorption cycles. During a thermal desorption cycle, the main desorbed species... 2. A three dimensional discrete dislocation dynamics modelling of the early cycles of fatigue in an austenitic stainless steel 316L: dislocation microstructure and damage analysis; Modelisation physique des stades precurseurs de l'endommagement en fatigue dans l'acier inoxydable austenitique 316L Energy Technology Data Exchange (ETDEWEB) Depres, Ch 2005-07-01 A numerical code modelling the collective behaviour of dislocations at a mesoscopic scale (Discrete Dislocation Dynamics code) is used to analyse the cyclic plasticity that occurs in the surface grains of an AISI 316L stainless steel, in order to understand the plastic mechanisms involved in fatigue crack initiation. Firstly, the analyses of both the formation and the evolution of the dislocation microstructures show the crucial role played by cross-slip in the localization of strain in the form of slip bands. As the cycling proceeds, the slip bands exhibit well-organized dislocation arrangements that replace the dislocation tangles, involving specific interaction mechanisms between the primary and cross-slip systems. Secondly, both the surface displacements generated by plastic slip and the distortion energy induced by the dislocation microstructure have been analysed. We find that an irreversible surface relief in the form of extrusions/intrusions can be induced by the cyclic slip of dislocations. The number of cycles for crack initiation follows a Manson-Coffin type law.
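Record 2 above notes that the number of cycles to crack initiation follows a Manson-Coffin type law. For readers unfamiliar with it, the sketch below inverts a generic Manson-Coffin relation, delta_eps_p/2 = eps_f' * (2*Nf)**c; the fatigue ductility coefficient and exponent used are typical textbook-order values assumed for illustration only, not constants identified in the cited work.

def cycles_to_initiation(plastic_strain_amplitude,
                         ductility_coeff=0.6, exponent=-0.6):
    """Invert the Manson-Coffin relation
        delta_eps_p / 2 = eps_f' * (2 * Nf)**c
    to obtain 2*Nf (reversals), then convert to cycles Nf."""
    reversals = (plastic_strain_amplitude / ductility_coeff) ** (1.0 / exponent)
    return reversals / 2.0

for amp in (2e-3, 5e-3, 1e-2):
    print(f"plastic strain amplitude {amp:.0e} -> "
          f"~{cycles_to_initiation(amp):,.0f} cycles to initiation")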
The analyses of the concentration of the distortion energy and its repartition in the slip bands show that beneficial energetic zones may be present at the very beginning of the cycling, and that mode-II crack propagation in the surface grains results from a succession of micro-crack initiations along primary slip plane, which is facilitated by various effects (stress concentration due to surface relief, environment effects...). Finally, a dislocation-based model for cyclic plasticity is proposed from Discrete Dislocation Dynamics results. (author) 3. Stress corrosion of austenitic steels mono and polycrystals in Mg Cl{sub 2} medium: micro fractography and study of behaviour improvements; Corrosion sous contrainte de mono et polycristaux daciers inoxydables austenitiques en milieu MgCI{sub 2}: analyse microfractographique et recherche dameliorations du comportement Energy Technology Data Exchange (ETDEWEB) Chambreuil-Paret, A 1997-09-19 The austenitic steels in a hot chlorinated medium present a rupture which is macroscopically fragile, discontinuous and formed with crystallographic facets. The interpretation of these facies crystallographic character is a key for the understanding of the stress corrosion damages. The first aim of this work is then to study into details the micro fractography of 316 L steels mono and polycrystals. Two types of rupture are observed: a very fragile rupture which stresses on the possibility of the interatomic bonds weakening by the corrosive medium Mg Cl{sub 2} and a discontinuous rupture (at the micron scale) on the sliding planes which is in good agreement with the corrosion enhanced plasticity model. The second aim of this work is to search for controlling the stress corrosion by the mean of a pre-strain hardening. Two types of pre-strain hardening have been tested. A pre-strain hardening with a monotonic strain is negative. Indeed, the first cracks starts very early and the cracks propagation velocity is increased. This is explained by the corrosion enhanced plasticity model through the intensifying of the local corrosion-deformation interactions. On the other hand, a cyclic pre-strain hardening is particularly favourable. The first micro strains starts later and the strain on breaking point levels are increased. The delay of the starting of the first strains is explained by a surface distortion structure which is very homogeneous. At last, the dislocations structure created in fatigue at saturation is a planar structure of low energy which reduces the corrosion-deformation interactions, source of micro strains. (O.M.) 139 refs. 4. Thermal fatigue of a 304L austenitic stainless steel: simulation of the initiation and of the propagation of the short cracks in isothermal and aniso-thermal fatigue; Fatigue thermique d'un acier inoxydable austenitique 304L: simulation de l'amorcage et de la croissance des fissures courtes en fatigue isotherme et anisotherme Energy Technology Data Exchange (ETDEWEB) 2003-04-01 The elbow pipes of thermal plants cooling systems are submitted to thermal variations of short range and of variable frequency. These variations bound to temperature changes of the fluids present a risk of cracks and leakages. In order to solve this problem, EDF has started the 'CRECO RNE 808' plan: 'thermal fatigue of 304L austenitic stainless steels' to study experimentally on a volume part, the initiation and the beginning of the propagation of cracks in thermal fatigue on austenitic stainless steels. 
The aim of this study is more particularly to compare the behaviour and the damage of the material in mechanic-thermal fatigue (cycling in temperature and cycling in deformation) and in isothermal fatigue (the utmost conditions have been determined by EDF for the metal: Tmax = 165 degrees C and Tmin = 90 degrees C; the frequency of the thermal variations can reach a Hertz). A lot of experimental results are given. A model of lifetime is introduced and validated. (O.M.) 5. Initiation and growth of thermal fatigue crack networks in an AISI 304 L type austenitic stainless steel (X2 CrNi18-09); Amorcage et propagation de reseaux de fissures de fatigue thermique dans un acier inoxydable austenitique de type X2 CrNi18-09 (AISI 304 L) Energy Technology Data Exchange (ETDEWEB) Maillot, V 2004-07-01 We studied the behaviour of a 304 L type austenitic stainless steel submitted to thermal fatigue. Using the SPLASH equipment of CEA/SRMA we tested parallelepipedal specimens on two sides: the specimens are continuously heated by Joule effect, while two opposites faces are cyclically. cooled by a mixed spray of distilled water and compressed air. This device allows the reproduction and the study of crack networks similar to those observed in nuclear power plants, on the inner side of circuits fatigued by mixed pressurized water flows at different temperatures. The crack initiation and the network constitution at the surface were observed under different thermal conditions (Tmax = 320 deg C, {delta}T between 125 and 200 deg C). The experiment produced a stress gradient in the specimen, and due to this gradient, the in-depth growth of the cracks finally stopped. The obtained crack networks were studied quantitatively by image analysis, and different parameters were studied: at the surface during the cycling, and post mortem by step-by-step layer removal by grinding. The maximal depth obtained experimentally, 2.5 mm, is relatively coherent with the finite element modelling of the SPLASH test, in which compressive stresses appear at a depth of 2 mm. Some of the crack networks obtained by thermal fatigue were also tested in isothermal fatigue crack growth under 4-point bending, at imposed load. The mechanisms of the crack selection, and the appearance of the dominating crack are described. Compared to the propagation of a single crack, the crack networks delay the propagation, depending on the severity of the crack competition for domination. The dominating crack can be at the network periphery, in that case it is not as shielded by its neighbours as a crack located in the center of the network. It can also be a straight crack surrounded by more sinuous neighbours. Indeed, on sinuous cracks, the loading is not the same all along the crack path, leading to some morphological effect instead of shielding effect. A 2-D finite element modelling of multiple crack propagation has been performed: when the morphological effects are not dominant, there is a good agreement between modelling and experimental results. (author) 6. Fragilisation par le zinc liquide des aciers haute résistance pour l'automobile Liquid zinc embrittlement of high strength automotive steels Directory of Open Access Journals (Sweden) Frappier Renaud 2013-11-01 Full Text Available Cette étude présente les investigations menées sur la fragilisation par le zinc liquide d'un acier électro-zingué. La caractérisation mécanique par essais de traction à haute température montre un important puits de ductilité entre environ 700 ∘C et environ 950 ∘C. 
L'observation au MEB des éprouvettes de traction indique que, dans la gamme de température observée pour laquelle il y a fragilisation, on a mouillage intergranulaire des joints de grains de l'acier à l'interface acier/revêtement par des films de Zn. La corrélation entre mouillage intergranulaire thermiquement activé d'une part, et propagation de fissure lors du chargement d'autre part, est discutée. This study deals with liquid zinc embrittlement for electro-galvanized steel. Mechanical characterization by high temperature tensile tests shows a drastic loss of ductility between 700 ∘C and 950 ∘C. SEM investigations show that steel grain boundaries under the steel/coating interface are penetrated by a liquid Zn channel, only in the temperature range of embrittlement. A correlation can be drawn between i thermal activated-grain boundary wetting and ii crack propagation in presence of external stress. 7. Quelques considérations sur l’évolution des normes de calcul des poteaux avec la section mixte acier-béton Directory of Open Access Journals (Sweden) Nicolae Chira 2010-03-01 Full Text Available Depuis près d’un siècle, le système de construction basé sur des portiques en acier ou mixtes acier béton est devenu l’un des types les plus utilisés dans le domaine du génie civil. Plusieurs générations d’ingénieurs se sont préoccupées du développement des méthodes de calcul et des technologies de fabrication relatives à ces structures. En vue d’un dimensionnement optimal des structures, les ingénieurs sont tenus de trouver un compromis entre les exigences structurales de résistance, rigidité et ductilité d’une part, et les objectifs d’utilisation et de fonction relevant d’exigences architecturales d’autre part. Cette article fait une comparaison entre différents méthodes de dimensionnement des poteaux mixtes acier béton, en tenant compte des plusieurs paramètres. 8. Oxidation of ordinary steels or alloys heated in carbon dioxide under pressure; Oxydation d'aciers ordinaires ou allies chauffes dans le gaz carbonique sous pression Energy Technology Data Exchange (ETDEWEB) Leclercq, D; Chevilliard, C; Darras, R [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires 1960-07-01 Selection tests were carried out on commercial steels from the viewpoint of their resistance to oxidation in carbon dioxide, under 25 atmospheres pressure, between 350 and 600 deg. C. Comparative curve of oxidation kinetics were obtained, from which the influence of various additive elements can be found; small amounts of aluminium particularly seem to be favourable in the case of only slightly alloyed steels. (author) [French] Des essais de selection d'aciers commerciaux ont ete effectues quant a leur resistance a l'oxydation dans le gaz carbonique, sous pression de 25 atmospheres, ente 350 et 600 deg. C. Des courbes comparatives de cinetique d'oxydation ont ete obtenues, ce qui permet de degager l'influence de divers elements d'addition; de faibles teneurs en aluminium apparaissent notamment favorables dans le cas des aciers peu allies. L'acier inoxydable 18-8 a egalement ete etudie, notamment sous forme de tubes minces. Son comportement est bon jusqu'au moins 600 deg. C dans ces conditions. (auteur) 9. 
New Methods and Facilities for the Measurement of Physical Properties of Reactor Components and Irradiated Materials; Nouveaux Procedes et Instruments de Mesure des Proprietes Physiques des Elements de Reacteur et des Matieres Irradiees; Novye metody i sredstva izmereniya fizicheskikh s vojstv komponentov reaktora i obluchennykh materialov; Nuevos Metodos y Equipos para Medir Propiedades Fisicas de Componentes de Reactor y de Materiales Irradiados Energy Technology Data Exchange (ETDEWEB) Foerster, F.; Mueller, P. [Institut Dr. Foerster, Reutlingen, Federal Republic of Germany (Germany) 1965-09-15 direct reading of the permeability and stainless- steel components. The correlation between permeability and {Delta} ferrite content is explained. Measurements of the {Delta} ferrite percentage across welds in stainless-steel tubes and measurements of the {Delta} ferrite precipitations as a function of the plastic strain are discussed (hammer-forging of reactor fuel-elements). (author) [French] Les auteurs decrivent un instrument permettant de mesurer et d'enregistrer automatiquement le module de Young, le module de cisaillement et la capacite d'amortissement en fonction de la temperature et du temps. On mesure le module de Young en excitant des specimens de diverses dimensions a leur frequence propre. On mesure la capacite d'amortissement d'apres la libre decroissance de la vibration ou la largeur a mi-hauteur de la courbe de resonance. Le memoire donne des exemples de mesures de la guerison apres irradiation et apres deformation inelastique, ainsi que des exemples du degre de graphitisation. Les auteurs demontrent que l'on peut detecter des defauts et des variations de densite dans les banes de graphite. Ils expliquent, en outre, une methode d'etude de la fixation de pastilles d' UO{sub 2} sur des tubes en acier austenitique a parois minces. Us decrivent un four special pour l'etude du comportement elastique ou inelastique de specimens 'chauds ' a des temperatures variant entre 20 et 1000 Degree-Sign C. Les auteurs discutent le controle de la qualite de metaux non ferreux par mesure de ia conductivite electrique au moyen de courants de Foucault et decrivent un instrument permettant de mesurer sans aucun contact la conductivite electrique de metaux non ferreux. Ils expliquent la correlation entre la conductivite electrique et l'allongement sous l'effet des contraintes dans le cas de metaux et d'alliages non ferreux. Ils s'attachent particulierement a la mesure d'echantillons de petites dimensions. Ils decrivent un dispositif pour la mesure directe a distance dans la 10. Oxidation of steel heated in CO{sub 2} medium under pressure; Oxydation d'un acier ordinaire chauffe dans le gaz carbonique sous pression Energy Technology Data Exchange (ETDEWEB) Darras, R.; Leclercq, D.; Bunard, C. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires 1958-07-01 Behaviour of low-alloyed steels heated in CO{sub 2} medium under pressure is reported. Tests are carried out in the range of erature reached in the CO{sub 2} cooled reactors (vessel, thermal shield, pipes). The observed weight increases are small, even after more than a thousand hours of heating at 350 deg. C, but oxidation curve looks like progressing linearly. Furthermore, the oxide formed under a pressure of 15 kg/cm{sup 2} is undoubtedly more compact and adherent than the one formed under a pressure of 1 kg/cm{sup 2}. Finally, for practical use, CO{sub 2} steel pipes surface has to be sand blast and pickled. 
A following phosphatizing protects it from atmospheric corrosion during assembling, but these treatments have no influence on the behaviour of these steels heated in CO{sub 2}. (author) [French] On etudie le comportement d'aciers au carbone faiblement allies, chauffes dans le gaz carbonique sous pression, aux temperatures atteintes dans les reacteurs refroidis par ce gaz (caisson, bouclier thermique, canalisations). Les augmentations de poids observees sont faibles, meme apres plus de 1000 heures de chauffage a 350 deg. C, mais l'oxydation semble se poursuivre lineairement. De plus, l'oxyde forme dans le gaz carbonique sous pression de 15 kg/cm{sup 2} est nettement plus compact et adherent que celui forme sous pression atmospherique de gaz carbonique. Enfin, dans la pratique, les surfaces d'acier du circuit de gaz carbonique sont necessairement sablees ou decapees; une phosphatation ulterieure le protege de la corrosion atmospherique pendant le montage. Ces traitements sont sans influence sur le comportement de ces aciers dans le gaz carbonique a chaud. (auteur) 12. EFFET DES TRAITEMENTS THERMIQUES SUR LA REACTION ENTRE DES COUCHES MINCES DE TITANE ET DES SUBSTRATS EN ACIER Directory of Open Access Journals (Sweden) D Slimani 2015-06-01 Des couches minces du titane pur ont été déposées avec la méthode de pulvérisation cathodique sur des substrats en acier, type FF80 K-1 contenants ~1% mass. en carbone.
La réaction entre les deux parties du système substrat-couche mince est activée avec des traitements thermiques sous vide dans l’intervalle de températures de 400 à900°Cpendant 30 minutes. Les Spectres de diffraction de rayons x confirment l’inter- diffusion des éléments  chimiques du système résultants la formation et la croissance des nouvelles phases en particulier le carbure binaire TiC ayant des caractéristiques thermomécaniques importantes. L’analyse morphologique des échantillons traités  avec le microscope électronique à balayage (MEB montre l’augmentation du flux de diffusion atomique avec la température de recuit, notamment la diffusion du manganèse et du fer vers la surface libre des échantillons aux températures élevées provoquant la dégradation des propriétés mécaniques des revêtements contrairement au premiers stades d’interaction où on a obtenu des bonnes valeurs de la microdureté. Energy Technology Data Exchange (ETDEWEB) Copeland, M.; Kato, H. [Albany Metallurgy Research Center, Bureau Of Mines, United States Department of the Interior, Albany, OR (United States) 1964-06-15 14. The Cementation of Boron to Steels by the Method of Electrolytic Deposition; Cementation Electrolytique d'Aciers par le Bore; Tsementirovanie bora v stali putem ehlektroliticheskogo osazhdeniya; Cementacion de Aceros con Boro por Deposito Electrolitico Energy Technology Data Exchange (ETDEWEB) Kawakami, M. [Tokyo Institute of Technology (Japan) 1964-06-15 This report describes a fabricating method for cementation of a control rod with boron. The cementing is carried out by electrolysis of fused borax at about 900 Degree-Sign C, the steel rod to be cemented acting as a cathode, and the graphite electrode as an anode. As the electrolysis progresses, boron is deposited on/the steel rod and diffused into it. When suitable conditions of electrolysis exist, a case of the steel rod with as much as 20% boron content can easily be obtained. The kinds of steel investigated were carbon steel, stainless steels, high chrome steel etc., and each of them showed good results. (author) [French] L'auteur expose une methode de fabrication d'une barre de commande en acier cemente par le bore. La cementation est effectuee par l'electrolyse de borax fondu a environ 900 Degree-Sign C, la barre d'acier a cementer formant cathode et une electrode en graphite constituant l'anode. Au cours du processus d 'electrolyse, le bore se depose sur la barre d'acier et est diffuse dans le metal. Lorsque l'electrolyse s'effectue dans les conditions requises, on peut obtenir facilement une barre d'acier dont la teneur en bore atteint jusqu'a 20%. Les recherches ont porte sur des aciers tels que les aciers au carbone ordinaires, les aciers inoxydables, les aciers refractaires au chrome, etc. et on a obtenu de bons resultats pour chacun de ces aciers. (author) [Spanish] La memoria describe un metodo de obtencion de barras de control cementadas con boro. La cementacion se efectua por electrolisis de borax fundido a unos 900 Degree-Sign C. Como catodo se emplea la barra de acero que se desea cementar y como anodo, un electrodo de grafito. Conforme progresa la electrolisis, el boro se deposita sobre la barra de acero y penetra en la misma por difusion. Si las condiciones en que se efectua la electrolisis son adecuadas, se logra facilmente la cementacion de la barra de acero, que alcanza un contenido de hasta 20% de boro. Entre los aceros investigados, figuran 15. 
Évolution des contraintes résiduelles dans la couche de diffusion d'un acier modèle Fe-Cr-C nitruré DEFF Research Database (Denmark) Jegou, Sébastien; Barrallier, Laurent; Somers, Marcel A. J. 2011-01-01 Limiter la fatigue et la corrosion des pièces est possible grâce à une nitruration. Des contraintes résiduelles en découlent. Le rôle de la diffusion du carbone sur le développement de ces contraintes a été étudié sur un acier modèle Fe-3%m.Cr-0.35%m.C... 16. ECHAUFFEMENT ET EVOLUTION STRUCTURALE D'UN ACIER XC 42 LORS D'UN ESSAI DE TORSION A 700 °C Directory of Open Access Journals (Sweden) R BENSAHA 2001-12-01 Ce travail a pour but de montrer qu'il est possible d'apprécier la température et de développer un modèle simple de calcul de la recrudescence de la température en cours de déformation pour un acier XC 42 à une température d'essai de 700°C et pour deux vitesses de déformation généralisées différentes de 5 s-1 et 30 s-1. Cette étude prend en considération, d'une part l'enthalpie du changement de phase α→γ qui se libère au cours de la déformation dans l'intervalle de température A1-A3, et d'autre part, des mécanismes thermiquement activés (restauration et recristallisation dynamique) mis en jeu lors de la déformation du matériau. Comme nos essais étaient pratiqués à la température de 700°C, proche de celle du point de transformation A1, les structures obtenues après trempe rapide montrent bien que pendant la déformation le matériau a subi la transformation de phase α→γ, provoquée par l'auto-échauffement de l'acier XC42. Le degré d'austénitisation est donc fonction de l'auto-échauffement du matériau qui, à grande vitesse de déformation (227°C), est plus important qu'à faible vitesse (142°C). 17. Etude des mécanismes d'interaction, au cours du procédé d'emboutissage à chaud, entre les sources atmosphériques d'hydrogène et les aciers à haute résistance revêtus d'Al-Si OpenAIRE Mandy, Mélodie; Jacques, Pascal; Georges, Cédric; Journée Jeunes Chercheurs 2015 - Commission "Corrosion sous contrainte / Fatigue-Corrosion" 2015-01-01 Dans l'industrie automobile, le défi permanent d'allègement en vue de diminuer la consommation de carburant est profitable tant au niveau économique qu'écologique. Pour permettre cette diminution de masse sans compromettre pour autant la sécurité des passagers, il est donc essentiel de développer des nuances d'aciers toujours plus résistantes. Néanmoins, ces aciers doivent également garder une certaine ductilité nécessaire à leur mise en forme, mais aussi fondamentale pour la sécurité des pas... 18. MODÉLISATION DES FLUX DE CHALEUR GÉNÉRÉS PAR FROTTEMENT GLISSANT DANS UN CONTACT CUIVRE-ACIER TRAVERSÉ PAR UN COURANT ÉLECTRIQUE Directory of Open Access Journals (Sweden) A BOUCHOUCHA 2001-06-01 Le problème de la conduction de la chaleur dans un contact électrique glissant cuivre–acier est étudié. Le couple fonctionne dans des conditions atmosphériques et est donc refroidi par convection naturelle à travers les faces latérales. En utilisant l'équation de la chaleur, un modèle de calcul de la température interfaciale a été élaboré.
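The abstract above builds an interfacial-temperature model on the heat equation and solves it by the finite-volume method. The short sketch below is a purely illustrative aside, not the model of the cited study: it integrates one-dimensional transient conduction with an explicit finite-volume update, and every quantity in it (slab thickness, steel-like material constants, the heat flux q_in entering the sliding interface) is a hypothetical placeholder.

import numpy as np

# Schematic 1-D explicit finite-volume conduction solver (illustrative only).
# A frictional/Joule heat flux q_in enters at x = 0 (the sliding interface);
# the far face is held at ambient temperature. All values are placeholders.
L = 0.01                              # slab thickness (m)
N = 50                                # number of control volumes
dx = L / N
k, rho, cp = 45.0, 7800.0, 470.0      # steel-like conductivity, density, heat capacity
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha              # below the explicit stability limit (Fourier number < 0.5)
q_in = 2.0e5                          # interfacial heat flux (W/m^2), placeholder
T_amb = 20.0
T = np.full(N, T_amb)

for _ in range(2000):
    Tn = T.copy()
    # interior volumes: net conductive flux across the two faces
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2.0 * Tn[1:-1] + Tn[:-2])
    # volume at the sliding interface: imposed flux in, conduction out
    T[0] = Tn[0] + dt / (rho * cp * dx) * (q_in - k * (Tn[0] - Tn[1]) / dx)
    # far face kept at ambient temperature
    T[-1] = T_amb

print(f"interface temperature after {2000 * dt:.2f} s: {T[0]:.1f} deg C")

A refined version would add the convective cooling of the lateral faces and the load-, speed- and current-dependence of the heat flux that the abstract describes; the point here is only the finite-volume update itself.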
A l'aide de la méthode des volumes finis, les résultats de la température en fonction de la charge normale, la vitesse de glissement et le courant électrique sont donnés. Une comparaison avec la méthode d'Archard est faite. Les résultats montrent une bonne concordance. Une discussion globale du modèle élaboré et son application dans les contacts électriques glissants a été dégagée. 19. Effects of hydrogen on the tensile strength characteristics of stainless steels; Effets de l'hydrogene sur les caracteristiques de rupture par traction d'aciers inoxydables Energy Technology Data Exchange (ETDEWEB) Blanchard, R; Pelissier, J; Pluchery, M [Commissariat a l' Energie Atomique, Grenoble (France).Centre d' Etudes Nucleaires; Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires 1961-07-01 This paper deals with the effects of hydrogen on stainless steel, that might possibly be used as a canning material in hydrogen-cooled reactors. Apparent ultimate-tensile strength is only 80 per cent of initial value for hydrogen content about 50 cc NTP/ 100 g, and reduction in area decreases from 80 to 55 per cent. A special two-stage replica technique has been developed which allows fracture surface of small tensile specimens (about 0.1 mm diam.) to be examined in an electron microscope. All the specimens showed evidence of ductile character throughout the range of hydrogen contents investigated, but the aspect of the fracture surfaces gradually changes with increasing amounts. (author) [French] On etudie les effets de l'hydrogene sur des aciers inoxydables, qui sont des materiaux de gainage possibles pour des reacteurs utilisant l'hydrogene comme gaz de refroidissement. On montre que la charge apparente de rupture a la traction n'est plus que 80 pour cent de sa valeur initiale lorsque la teneur en hydrogene atteint 50 cc TPN/ 100 g, et que la striction passe dans ces conditions de 80 a 55 pour cent. L'examen microfractographique qui a ete effectue avec succes par une technique de double replique malgre la petitesse des echantillons (0,3 mm de diametre environ), revele que tout en gardant un caractere ductile, l'aspect des surfaces de rupture evolue notablement avec la teneur en hydrogene. (auteur) 20. Soudage hybride Laser-MAG d'un acier Hardox® Hybrid Laser Arc Welding of a Hardox® steel Directory of Open Access Journals (Sweden) Chaussé Fabrice 2013-11-01 Full Text Available Le soudage hybride laser-MAG est un procédé fortement compétitif par rapport aux procédés conventionnels notamment pour le soudage de fortes épaisseurs et les grandes longueurs de soudure. Il connait de ce fait un développement important dans l'industrie. La présente étude s'est portée sur la soudabilité de l'acier Hardox® par ce procédé. Un large panel de techniques de caractérisation a été employé (mesures thermiques, radiographie X, duretés Vickers, macrographie…. L'objectif étant de déterminer l'influence des paramètres du procédé sur la qualité de la soudure et d'étendre notre compréhension des phénomènes se déroulant lors de ce type de soudage. Hybrid Laser Arc Welding (HLAW technology is a highly competitive metal joining process especially when high productivity is needed and for the welding of thick plates. It is a really new technology but its implementation in industry accelerates thanks to recent improvements of high power laser equipment and development of integrated hybrid welding heads. This study focuses on weldability of Hardox® 450 steel by HLAW. 
Welding tests were conducted by making critical process parameters vary. Then a large panel of characterization techniques (X-Ray radiography, macroscopic examination and hardness mapping was used to determine process parameters influence on weldability of Hardox 450® Steel. 1. Irradiation behaviour of mixed uranium-plutonium carbides, nitrides and carbonitrides; Comportement a l'irradiation de carbures, nitrures et carbonitrures mixtes d'uranium et de plutonium Energy Technology Data Exchange (ETDEWEB) Mikailoff, H; Mustelier, J P; Bloch, J; Leclere, J; Hayet, L [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires 1967-07-01 In the framework of the research program of fast reactor fuels two irradiation experiments have been carried out on mixed uranium-plutonium carbides, nitrides and carbo-nitrides. In the first experiment carried out with thermal neutrons, the fuel consisted of sintered pellets sheathed in a stainless steel can with a small gap filled with helium. There were three mixed mono-carbide samples and the maximum linear power was 715 W/cm. After a burn-up slightly lower than 20000 MW day/tonne, a swelling of the fuel which had ruptured the cans was observed. In the second experiment carried out in the BR2 reactor with epithermal neutrons, the samples consisted of sintered pellets sodium bonded in a stainless steel tube. There were three samples containing different fuels and the linear power varies between 1130 and 1820 W/cm. Post-irradiation examination after a maximal burn-up of 1550 MW day/tonne showed that the behaviour of the three fuel elements was satisfactory. (authors) [French] Dans le cadre du programme d'etude des conibustiles pour reacteurs rapides, on a realise deux experiences d'irradiation de carbures, nitrures et carbonitrures mixtes d'uranium et de plutonium. Dans la premiere experience, faite en neutrons thermiques, le combustible etait constitue de,pastilles frittees gainees dans un tube d'acier inoxydable avec un faible jeu rempli d'helium. Il y avait trois echantillons de monocarbures mixtes, et la puissance lineaire maximale etait de 715 W/cm. Apres un taux de combustion legerement inferieur a 20 000 MWj/t, on a observe un gonflement des combustible qui a provoque, la rupture des gaines. Pans la seconde experience, realisee dans le reacteur BR2 en neutrons epithermiques, les echantillons etaient constitues de pastilles frittees gainees dans un tube d'acier avec un joint sodium. Il y avait trois echantillons contenant des combustibles differents, et la puissance lineaire variait de 1130 a 1820 W/cm. Les examens apres irradiation a un taux maximal de International Nuclear Information System (INIS) Soothill, R. 1987-01-01 The issue of food irradiation has become important in Australia and overseas. This article discusses the results of the Australian Consumers' Association's (ACA) Inquiry into food irradiation, commissioned by the Federal Government. Issues discussed include: what is food irradiation; why irradiate food; how much food is consumer rights; and national regulations International Nuclear Information System (INIS) Lindqvist, H. 1996-01-01 This paper is a review of food irradiation and lists plants for food irradiation in the world. Possible applications for irradiation are discussed, and changes induced in food from radiation, nutritional as well as organoleptic, are reviewed. Possible toxicological risks with irradiated food and risks from alternative methods for treatment are also brought up. 
Ways to analyze whether food has been irradiated or not are presented. 8 refs Energy Technology Data Exchange (ETDEWEB) Gruenewald, T 1985-01-01 5. Oxidation of iron and steels by carbon dioxide under pressure (1962); Oxydation du fer et des aciers par l'anhydride carbonique sous pression (1962) Energy Technology Data Exchange (ETDEWEB) Colombie, M [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires 1962-07-01 After having developed one of the first thermo-balances to operate under pressure, we have studied the influence of the pressure on the corrosion of iron and steels by carbon dioxide. The corrosion was followed by three different methods simultaneously: by the oxidation kinetics, by micrographs, and by radiocrystallography. We have been able to show that the influence of the pressure is not negligible and we have provided much experimental evidence: oxidation kinetics, micrographic aspects, surface precipitation of carbon, metal carburization, the texture of the magnetite layer. All these phenomena are certainly modified by changes in the carbon dioxide pressure. In order to interpret most of our results we have been led to believe that the phenomenon of corrosion by CO{sub 2} depends on secondary reactions localised at the oxide-gas interface. This would constitute a major difference between the oxidation by CO{sub 2} and that by oxygen. (author) [French] Apres avoir etudie et mis au point une des premieres thermobalances fonctionnant sous pression, nous avons etudie l'influence de la pression sur la corrosion du fer et des aciers par l'anhydride carbonique. Notre etude a ete conduite simultanement sur trois plans differents: etude des cinetiques d'oxydation, etude micrographique et etude radiocristallographique. Nous avons pu montrer que l'influence de la pression n'etait pas negligeable et nous en avons fourni un faisceau de preuves experimentales important: cinetiques d'oxydation, aspect micrographique, precipitation superficielle de carbone, carburation du metal, texture de la couche de magnetite. Tous ces phenomenes sont sans aucun doute modifies par une variation de pression du gaz carbonique. Pour interpreter la plupart de nos resultats, nous avons ete conduits a penser que le phenomene de corrosion par CO{sub 2} etait tributaire de reactions secondaires localisees a l'interface oxyde-gaz. Ce serait la une des differences fondamentales entre l'oxydation par 6. Comportement des poteaux composites en profils creux en acier remplis de béton Behavior of composite columns in hollow steel section filled with concrete Directory of Open Access Journals (Sweden) Othmani N. 2012-09-01 Le but de cet article est la determination des rigidites flexionnelles EIx et EIy d'une section mixte acier beton et plus precisement d'un poteau en tube d'acier de section rectangulaire, remplie de beton, sollicitee a la flexion bi-axiale (N, Mx et My). L'estimation des rigidites sera faite a partir d'une approche theorique par une analyse du poteau en elements finis (element barre a 4 degres de liberte), basee sur les conditions d'equilibres a mi-portee en utilisant la relation moment-courbure (M–Φ) de l'element deforme par application de l'equation suivante: EI=M/Φ. Le comportement des materiaux est celui adopte par les reglements Eurocode 2 et 3, respectivement pour le beton et l'acier.
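The composite-column abstract above recovers the flexural stiffnesses EIx and EIy from the moment–curvature relation EI = M/Φ. As a purely illustrative sketch (the (M, Φ) pairs below are hypothetical placeholders, not results of the cited study), the secant stiffness M/Φ and the tangent stiffness dM/dΦ can be read directly off a set of computed moment–curvature points:

import numpy as np

# Illustrative only: recover a flexural stiffness from moment-curvature pairs
# via EI = M / Phi (secant) and dM/dPhi (tangent). All values are placeholders.
phi = np.array([0.5e-3, 1.0e-3, 2.0e-3, 4.0e-3])   # curvatures (1/m)
M   = np.array([45.0, 88.0, 160.0, 250.0])          # bending moments (kN.m)

EI_secant  = M / phi               # kN.m^2, the EI = M/Phi of the abstract
EI_tangent = np.gradient(M, phi)   # local slope dM/dPhi

for p, m, s, t in zip(phi, M, EI_secant, EI_tangent):
    print(f"Phi = {p:.1e} 1/m   M = {m:6.1f} kN.m   EI_sec = {s:9.1f}   EI_tan = {t:9.1f} kN.m^2")

A stiffness obtained this way generally decreases as the section cracks and yields, which is the kind of difference the abstract's comparison against the Eurocode 4 values is probing.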
Afin de valider l'approche theorique utilisee dans cette etude, deux comparaisons ont ete faites : une premiere permettant de comparer les resultats des rigidites determinees par les relations moments courbures et celles calculees par l'Eurocode 4 et une deuxieme comparaison entre les charges de ruines de deux poteaux de grandeurs natures avec ceux testes au laboratoire [2]. Au vu des resultats obtenus, nous pouvons conclure que l'approche théorique utilisée dans cette étude ainsi que les modèles de comportement des matériaux sont adéquats pour ce genre de problèmes. The purpose of this paper is the determination of flexural stiffness EIx and EIy of a concrete filled rectangular cross section of a composite steel column, under biaxial bending (N, Mx and My). The rigidities will be estimated from a theoretical approach using a finite element analysis (element bar with 4 degrees of freedom), based on the equilibrium conditions at mid-span using the moment-curvature relationships (M–Φ) of the deformed element by applying the following equation: EI=M/Φ. The material behavior is the one adopted by Eurocode 2 and 3, respectively, for concrete and steel. To validate the theoretical approach used, two comparisons International Nuclear Information System (INIS) Sato, Tomotaro; Aoki, Shohei 1976-01-01 Definition and significance of food irradiation were described. The details of its development and present state were also described. The effect of the irradiation on Irish potatoes, onions, wiener sausages, kamaboko (boiled fish-paste), and mandarin oranges was evaluated; and healthiness of food irradiation was discussed. Studies of the irradiation equipment for Irish potatoes in a large-sized container, and the silo-typed irradiation equipment for rice and wheat were mentioned. Shihoro RI center in Hokkaido which was put to practical use for the irradiation of Irish potatoes was introduced. The state of permission of food irradiation in foreign countries in 1975 was introduced. As a view of the food irradiation in the future, its utilization for the prevention of epidemics due to imported foods was mentioned. (Serizawa, K.) 8. Propagation of thermal neutrons in mock-up screw-shaped steel elements with water protection; Propagation des neutrons thermiques dans des fausses cartouches d'acier en helice dans une protection d'eau. Programme tournesol 3 Energy Technology Data Exchange (ETDEWEB) Devillers, C L; Lanore, J M [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires 1967-07-01 This report treats the streaming of thermal neutrons in a cylindrical duct in light water. The duct contains a spiral iron shield. Transmission and reflection matrices are used to describe the probabilities for the thermal neutrons to be absorbed or to be scattered on the surfaces. The neutron paths across the void are represented by geometrical matrices. The numerical resolution is performed by the Monte-Carlo method. (authors) [French] Dans ce rapport on traite un probleme de fuites de neutrons thermiques dans un canal cylindrique plonge dans l'eau et obture par un ecran helicoidal en acier. On utilise des matrices de transmission-reflexion pour decrire les probabilites d'absorption et de diffusion des neutrons sur les parois et l'helicoide et des matrices de correspondance geometrique pour representer la propagation dans le vide. La resolution numerique se fait par une methode de Monte-Carlo. (auteur) International Nuclear Information System (INIS) Simonet, G.
1986-09-01 Fiability of devices set around reactors depends on material resistance under irradiation noticeably joints, insulators, which belongs to composition of technical, safety or physical incasurement devices. The irradiated fuel elements, during their desactivation in a pool, are an interesting gamma irradiation device to simulate damages created in a nuclear environment. The existing facility at Osiris allows to generate an homogeneous rate dose in an important volume. The control of the element distances to irradiation box allows to control this dose rate [fr International Nuclear Information System (INIS) Anon. 1985-01-01 International Nuclear Information System (INIS) Beyers, M. 1977-01-01 The objectives of food irradiation are outlined. The interaction of irradiation with matter is then discussed with special reference to the major constituents of foods. The application of chemical analysis in the evaluation of the wholesomeness of irradiated foods is summarized [af International Nuclear Information System (INIS) Macklin, M. 1987-01-01 The Queensland Government has given its support the establishment of a food irradiation plant in Queensland. The decision to press ahead with a food irradiation plant is astonishing given that there are two independent inquiries being carried out into food irradiation - a Parliamentary Committee inquiry and an inquiry by the Australian Consumers Association, both of which have still to table their Reports. It is fair to assume from the Queensland Government's response to date, therefore, that the Government will proceed with its food irradiation proposals regardless of the outcomes of the various federal inquiries. The reasons for the Australian Democrats' opposition to food irradiation which are also those of concerned citizens are outlined International Nuclear Information System (INIS) Duchacek, V. 1989-01-01 The ranges of doses used for food irradiation and their effect on the processed foods are outlined. The wholesomeness of irradiated foods is discussed. The present food irradiation technology development in the world is described. A review of the irradiated foods permitted for public consumption, the purposes of food irradiaton, the doses used and a review of the commercial-scale food irradiators are tabulated. The history and the present state of food processing in Czechoslovakia are described. (author). 1 fig., 3 tabs., 13 refs International Nuclear Information System (INIS) Darrington, Hugh 1988-06-01 This special edition of 'Food Manufacture' presents papers on the following aspects of the use of irradiation in the food industry:- 1) an outline view of current technology and its potential. 2) Safety and wholesomeness of irradiated and non-irradiated foods. 3) A review of the known effects of irradiation on packaging. 4) The problems of regulating the use of irradiation and consumer protection against abuse. 5) The detection problem - current procedures. 6) Description of the Gammaster BV plant in Holland. 7) World outline review. 8) Current and future commercial activities in Europe. (U.K.) 15. 
Etude théorique et expérimentale de l’effet d’inhibition de la corrosion d’un acier au carbone par les dérivées de base de Schiff en milieu acide chlorhydrique OpenAIRE 2014-01-01 Dans cette étude,l'effet de l'addition de certais composés organique de dérivées de basses de schiff sur la corrosion d'un acier au carbone en milieu acide chlorhydrique a été étudié à l'aide des méthodes électrochimiques(courbes de polarisation et spectroscopie d'impédance électrochimique) et gravimétriques......... International Nuclear Information System (INIS) 1982-01-01 International Nuclear Information System (INIS) Schen, B.C.; Mella, O.; Dahl, O. 1992-01-01 In a large number of cancer patients, extensive skeletal metastases or myelomatosis induce vast suffering, such as intolerable pain and local complications of neoplastic bone destruction. Analgetic drugs frequently do not yield sufficient palliation. Irradiation of local fields often has to be repeated, because of tumour growth outside previously irradiated volumes. Wide field irradiation of the lower or upper half of the body causes significant relief of pain in most patients. Adequate pretreatment handling of patients, method of irradiation, and follow-up are of importance to reduce side effects, and are described as they are carried out at the Department of Oncology, Haukeland Hospital, Norway. 16 refs., 2 figs International Nuclear Information System (INIS) 1985-01-01 The paper discusses the need for effective and efficient technologies in improving the food handling system. It defines the basic premises for the development of food handling. The application of food irradiation technology is briefly discussed. The paper points out key considerations for the adoption of food irradiation technology in the ASEAN region (author) International Nuclear Information System (INIS) Matsuyama, Akira 1990-01-01 International Nuclear Information System (INIS) Minami, Akira 1977-01-01 Energy Technology Data Exchange (ETDEWEB) Minami, A [Osaka Kita Tsishin Hospital (Japan) 1977-06-01 International Nuclear Information System (INIS) Kobayashi, Yasuhiko; Kikuchi, Masahiro 2009-01-01 Food irradiation can have a number of beneficial effects, including prevention of sprouting; control of insects, parasites, pathogenic and spoilage bacteria, moulds and yeasts; and sterilization, which enables commodities to be stored for long periods. It is most unlikely that all these potential applications will prove commercially acceptable; the extend to which such acceptance is eventually achieved will be determined by practical and economic considerations. A review of the available scientific literature indicates that food irradiation is a thoroughly tested food technology. Safety studies have so far shown no deleterious effects. Irradiation will help to ensure a safer and more plentiful food supply by extending shelf-life and by inactivating pests and pathogens. As long as requirement for good manufacturing practice are implemented, food irradiation is safe and effective. Possible risks of food irradiation are not basically different from those resulting from misuse of other processing methods, such as canning, freezing and pasteurization. (author) Energy Technology Data Exchange (ETDEWEB) Howe, L.M 2000-07-01 There is considerable interest in irradiation effects in intermetallic compounds from both the applied and fundamental aspects. 
Initially, this interest was associated mainly with nuclear reactor programs but it now extends to the fields of ion-beam modification of metals, behaviour of amorphous materials, ion-beam processing of electronic materials, and ion-beam simulations of various kinds. The field of irradiation damage in intermetallic compounds is rapidly expanding, and no attempt will be made in this chapter to cover all of the various aspects. Instead, attention will be focused on some specific areas and, hopefully, through these, some insight will be given into the physical processes involved, the present state of our knowledge, and the challenge of obtaining more comprehensive understanding in the future. The specific areas that will be covered are: point defects in intermetallic compounds; irradiation-enhanced ordering and irradiation-induced disordering of ordered alloys; irradiation-induced amorphization. International Nuclear Information System (INIS) Hetherington, M. 1989-01-01 This popular-level article emphasizes that the ultimate health effects of irradiated food products are unknown. They may include vitamin loss, contamination of food by botulism bacteria, mutations in bacteria, increased production of aflatoxins, changes in food, carcinogenesis from unknown causes, presence of miscellaneous harmful chemicals, and the lack of a way of for a consumer to detect irradiated food. It is claimed that the nuclear industry is applying pressure on the Canadian government to relax labeling requirements on packages of irradiated food in order to find a market for its otherwise unnecessary products International Nuclear Information System (INIS) Luecher, O. 1979-01-01 Limitations of existing preserving methods and possibilities of improved food preservation by application of nuclear energy are explained. The latest state-of-the-art in irradiation technology in individual countries is described and corresponding recommendations of FAO, WHO and IAEA specialists are presented. The Sulzer irradiation equipment for potato sprout blocking is described, the same equipment being suitable also for the treatment of onions, garlic, rice, maize and other cereals. Systems with a higher power degree are needed for fodder preserving irradiation. (author) International Nuclear Information System (INIS) Paganini, M.C. 1991-06-01 Food treatment by means of ionizing energy, or irradiation, is an innovative method for its preservation.
In order to treat important volumes of food, it is necessary to have industrial irradiation installations. The effect of radiations on food is analyzed in the present special work and a calculus scheme for an Irradiation Plant is proposed, discussing different aspects related to its project and design: ionizing radiation sources, adequate civil work, security and auxiliary systems to the installations, dosimetric methods and financing evaluation methods of the project. Finally, the conceptual design and calculus of an irradiation industrial plant of tubercles is made, based on the actual needs of a specific agricultural zone of our country. (Author) [es International Nuclear Information System (INIS) Anon. 1984-01-01 International Nuclear Information System (INIS) Anon. 1977-01-01 Food spoilage is a common problem when marketing agricultural products. Promising results have already been obtained on a number of food irradiating applications. A process is described in this paper where irradiation of sub-tropical fruits, especially mangoes and papayas, combined with conventional heat treatment results in effective insect and fungal control, delays ripening and greatly improves the quality of fruit at both export and internal markets International Nuclear Information System (INIS) Hungate, F.P.; Riemath, W.F.; Bunnell, L.R. 1975-01-01 International Nuclear Information System (INIS) Chandy, Mammen 1998-01-01 Viable lymphocytes are present in blood and cellular blood components used for transfusion. If the patient who receives a blood transfusion is immunocompetent these lymphocytes are destroyed immediately. However if the patient is immunodefficient or immunosuppressed the transfused lymphocytes survive, recognize the recipient as foreign and react producing a devastating and most often fatal syndrome of transfusion graft versus host disease [T-GVHD]. Even immunocompetent individuals can develop T-GVHD if the donor is a first degree relative since like the Trojan horse the transfused lymphocytes escape detection by the recipient's immune system, multiply and attack recipient tissues. T-GVHD can be prevented by irradiating the blood and different centers use doses ranging from 1.5 to 4.5 Gy. All transfusions where the donor is a first degree relative and transfusions to neonates, immunosuppressed patients and bone marrow transplant recipients need to be irradiated. Commercial irradiators specifically designed for irradiation of blood and cellular blood components are available: however they are expensive. India needs to have blood irradiation facilities available in all large tertiary institutions where immunosuppressed patients are treated. The Atomic Energy Commission of India needs to develop a blood irradiator which meets international standards for use in tertiary medical institutions in the country. (author) International Nuclear Information System (INIS) Migdal, W. 1995-01-01 A worldwide standard on food irradiation was adopted in 1983 by codex Alimentarius Commission of the Joint Food Standard Programme of the Food and Agriculture Organization (FAO) of the United Nations and The World Health Organization (WHO). As a result, 41 countries have approved the use of irradiation for treating one or more food items and the number is increasing. Generally, irradiation is used to: food loses, food spoilage, disinfestation, safety and hygiene. The number of countries which use irradiation for processing food for commercial purposes has been increasing steadily from 19 in 1987 to 33 today. 
In the frames of the national programme on the application of irradiation for food preservation and hygienization an experimental plant for electron beam processing has been established in Inst. of Nuclear Chemistry and Technology. The plant is equipped with a small research accelerator Pilot (19 MeV, 1 kW) and industrial unit Electronika (10 MeV, 10 kW). On the basis of the research there were performed at different scientific institutions in Poland, health authorities have issued permissions for irradiation for; spices, garlic, onions, mushrooms, potatoes, dry mushrooms and vegetables. (author) International Nuclear Information System (INIS) 1991-01-01 Processing of food with low levels of radiation has the potential to contribute to reducing both spoilage of food during storage - a particular problem in developing countries - and the high incidence of food-borne disease currently seen in all countries. Approval has been granted for the treatment of more than 30 products with radiation in over 30 countries but, in general, governments have been slow to authorize the use of this new technique. One reason for this slowness is a lack of understanding of what food irradiation entails. This book aims to increase understanding by providing information on the process of food irradiation in simple, non-technical language. It describes the effects that irradiation has on food, and the plant and equipment that are necessary to carry it out safely. The legislation and control mechanisms required to ensure the safety of food irradiation facilities are also discussed. Education is seen as the key to gaining the confidence of the consumers in the safety of irradiated food, and to promoting understanding of the benefits that irradiation can provide. (orig.) With 4 figs., 1 tab [de International Nuclear Information System (INIS) Suzuki, Toshimitsu. 1989-01-01 In an irradiation device for irradiating radiation rays such as electron beams to pharmaceuticals, etc., since the distribution of scanned electron rays was not monitored, the electron beam intensity could be determined only indirectly and irradiation reliability was not satisfactory. In view of the above, a plurality of monitor wires emitting secondary electrons are disposed in the scanning direction near a beam take-out window of a scanning duct, signals from the monitor wires are inputted into a display device such as a cathode ray tube, as well as signals from the monitor wires at the central portion are inputted into counting rate meters to measure the radiation dose as well. Since secondary electrons are emitted when electron beams pass through the monitor wires and the intensity thereof is in proportion with the intensity of incident electron beams, the distribution of the radiation dose can be monitored by measuring the intensity of the emitted secondary electrons. Further, uneven irradiation, etc. can also be monitored to make the radiation of irradiation rays reliable. (N.H.) 15. Contribution to the study of influences in emission spectrography on solutions. Application to a general analysis method for stainless steels (1961); Contribution a l'etude des influences en spectographie d'emission sur solution. Application a une methode generale d'analyse des aciers inoxydables (1961) Energy Technology Data Exchange (ETDEWEB) Baudin, G [Commissariat a l' Energie Atomique, Grenoble (France). 
Centre d' Etudes Nucleaires 1961-11-15 In order to establish a general method of analysis of stainless steels, by means of spark spectroscopy on solutions, a systematic study has been made of the factors involved. The variations in acidity of the solutions, or in the ratio of concentrations of two acids at constant pH, lead to a displacement of the calibration curve. Simple relations have been established between the concentration of the extraneous elements, and the effects produced, for the constituents Fe, Ti, Ni, Cr, Mn; a general method using abacus is proposed for steels containing only these elements. The interactions in the case of the elements Mo, Nb, Ta, W, were more complex, so that the simultaneous separation was studied with the help of ion-exchange resins. A general method of analysis is proposed for stainless steels. (author) [French] En vue d'etablir une methode generale d'analyse des aciers inoxydables par spectrographie d'etincelles sur solution, on a effectue une etude systematique des influences. Les variations de l'acidite des solutions ou du rapport des concentrations de deux acides a pH constant, entrainent un deplacement des courbes d'etalonnage. On a etabli des relations simples entre la teneur des tiers elements et les effets produits pour les constituants Fe, Ti, Ni, Cr, Mn; une methode generale avec abaques est proposee pour les aciers contenant ces seuls elements. Les influences dans le cas des elements Mo, Nb, Ta, W etant plus complexes, on eut a etudier la separation simultanee a l'aide de resines echangeuses d'ions. On propose une methode generale d'analyse des aciers inoxydables. (auteur) International Nuclear Information System (INIS) Beishon, J. 1991-01-01 Food irradiation has been the subject of concern and controversy for many years. The advantages of food irradiation include the reduction or elimination of dangerous bacterial organisms, the control of pests and insects which destroy certain foods, the extension of the shelf-life of many products, for example fruit, and its ability to treat products such as seafood which may be eaten raw. It can also replace existing methods of treatment which are believed to have hazardous side-effects. However, after examining the evidence produced by the proponents of food irradiation, the author questions whether it has any major contribution to make to the problems of foodborne diseases or world food shortages. More acceptable solutions, he suggests, may be found in educating food handlers to ensure that hygienic conditions prevail in the production, storage and serving of food. (author) International Nuclear Information System (INIS) Eymery, R. 1976-10-01 International Nuclear Information System (INIS) Ward, G.J.; Heckbert, P.S.; Technische Hogeschool Delft 1992-04-01 A new method for improving the accuracy of a diffuse interreflection calculation is introduced in a ray tracing context. The information from a hemispherical sampling of the luminous environment is interpreted in a new way to predict the change in irradiance as a function of position and surface orientation. The additional computation involved is modest and the benefit is substantial. An improved interpolation of irradiance resulting from the gradient calculation produces smoother, more accurate renderings. This result is achieved through better utilization of ray samples rather than additional samples or alternate sampling strategies. 
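The abstract above (Ward and Heckbert) describes estimating irradiance gradients, how irradiance changes with position and with surface orientation, from a hemispherical sample set, and using them to interpolate cached irradiance more smoothly. The sketch below only illustrates the second half of that idea: a first-order extrapolation of one cached irradiance value from its translational and rotational gradients. The gradient values, the cached sample and the query point are all hypothetical, and the gradient-estimation step (the paper's actual contribution) is not reproduced here.

import numpy as np

# Schematic first-order extrapolation of a cached irradiance value
# (illustrative; not the authors' gradient-estimation formulas).
def extrapolate_irradiance(E0, p0, n0, grad_t, grad_r, p, n):
    """E0: cached irradiance; p0, n0: cache position and normal;
    grad_t: translational gradient (change of E per unit displacement);
    grad_r: rotational gradient (change of E per unit rotation of the normal);
    p, n: query position and normal."""
    dE_translation = np.dot(grad_t, p - p0)
    dE_rotation = np.dot(grad_r, np.cross(n0, n))  # small-rotation approximation
    return E0 + dE_translation + dE_rotation

# hypothetical cached sample and query point
E0 = 120.0
p0, n0 = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
grad_t = np.array([15.0, -4.0, 0.0])
grad_r = np.array([2.0, 8.0, 0.0])
p = np.array([0.10, 0.05, 0.0])
n = np.array([0.05, 0.0, 1.0])
n = n / np.linalg.norm(n)

print(f"extrapolated irradiance at the query point: "
      f"{extrapolate_irradiance(E0, p0, n0, grad_t, grad_r, p, n):.2f}")

Better gradients let each expensive hemispherical sample be reused over a wider area, which is the "better utilization of ray samples" the abstract refers to.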
Thus, the technique is applicable to a variety of global illumination algorithms that use hemicubes or Monte Carlo sampling techniques Swift heavy ions interact predominantly through inelastic scattering while traversing any polymer medium and produce excited/ionized atoms. Here samples of the polycarbonate Makrofol of approximate thickness 20 m, spin coated on GaAs substrate were irradiated with 50 MeV Li ion (+3 charge state). Build-in ... 20. Crystalline plasticity constitutive equations for BCC steel at low temperature; Loi de comportement en plasticite cristalline pour acier a basse temperature Energy Technology Data Exchange (ETDEWEB) Monnet, G. [EDF RD, MMC, Avenue des Renardieres, Ecuelles, 77818 Moret-sur-Loing Cedex (France); Vincent, L. [CEA Saclay, DEN, SRMA, 91191 Gif-sur-Yvette Cedex (France) 2011-07-01 The prediction of the irradiation-induced evolution of the ductile-fragile transition curve of pressure vessel steels is a major research topic in the nuclear industry. Multi-scale approaches starting from ab initio scale up to macroscopic continuum mechanics are currently investigated through the European project PERFORM60. At the intermediate level of crystal plasticity, several effects need to be described accurately before considering the introduction of irradiation hardening mechanisms, such as the thermal activity of dislocations slip, the different mobilities between screw and edge dislocations at low temperature. These effects should be introduced in a crystal plasticity law used in finite-element simulations of polycrystalline aggregates. Accordingly, a new crystal plasticity law is proposed in this paper based on a critical analysis of previous numerical results obtained with a discrete dislocations dynamics code. (authors) International Nuclear Information System (INIS) Kovacs, J.; Tengumnuay, C.; Juangbhanich, C. 1970-01-01 Energy Technology Data Exchange (ETDEWEB) NONE 2018-04-01 International Nuclear Information System (INIS) Morrison, Rosanna M. 1984-01-01 Recent regulatory and commercial activity regarding food irradiation is highlighted. The effects of irradiation, used to kill insects and microorganisms which cause food spoilage, are discussed. Special attention is given to the current regulatory status of food irradiation in the USA; proposed FDA regulation regarding the use of irradiation; pending irradiation legislation in the US Congress; and industrial applications of irradiation International Nuclear Information System (INIS) Stirling, Andrew 1995-01-01 Energy Technology Data Exchange (ETDEWEB) Beerens, H [Lille-1 Univ., 59 - Villeneuve-d' Ascq (France); Saint-Lebe, L 1979-01-01 Various aspects of food treatment by cobalt 60 or caesium 137 gamma radiation are reviewed. One of the main applications of irradiation on foodstuffs lies in its ability to kill micro-organisms, lethal doses being all the lower as the organism concerned is more complex. The effect on parasites is also spectacular. Doses of 200 to 300 krad are recommended to destroy all parasites with no survival period and no resistance phenomenon has ever been observed. The action of gamma radiation on macromolecules was also investigated, the bactericide treatment giving rise to side effects by transformation of food components. Three examples were studied: starch, nucleic acids and a whole food, the egg. The organoleptic aspect of irradiation was examined for different treated foods, then the physical transformations of unpasteurized, heat-pasteurized and radio-pasteurized eggs were compared. 
The report ends with a brief analysis of the toxicity and conditions of application of the treatment. International Nuclear Information System (INIS) Ransohoff, J.A. 1984-01-01 International Nuclear Information System (INIS) Roberts, P.B. 1997-01-01 Food can be provided with extra beneficial properties by physical processing. These benefits include a reduced possibility of food poisoning, or an increased life of the food. We are familiar with pasteurisation of milk, drying of vegetables, and canning of fruit. These physical processes work because the food absorbs energy during treatment which brings about the changes needed. The energy absorbed in these examples is heat energy. Food irradiation is a less familiar process. It produces similar benefits to other processes and it can sometimes be applied with additional advantages over conventional processing. For example, because irradiation causes little heating, foods may look and taste more natural. Also, treatment can take place with the food in its final plastic wrappers, reducing the risk of re-contamination. (author). 1 ref., 4 figs., 1 tab International Nuclear Information System (INIS) Galvao, M.M.; Ianhez, L.E.; Sabbaga, E. 1982-01-01 The authors analysed the clinical evolution and the result of renal transplantation some years after irradiation in 24 patients (group I) who received endolymphatic 131 I as a pre-transplantation immunesuppresive measure. The control group (group II) consisted of 24 non-irradiated patients comparable to group I in age, sex, primary disease, type of donor and immunesuppressive therapy. Significant differences were observed between the two groups regarding such factors a incidence and reversibility of rejection crises in the first 60 post-transplantation days, loss of kidney due to rejection, and dosage of azathioprine. The authors conclude that this method, besides being harmless, has prolonged immunesuppressive action, its administration being advised for receptores of cadaver kidneys, mainly those who show positive cross-match against HLA antigens for painel. (Author) [pt 10. The compatibility of chromium-aluminium steels with high pressure carbon dioxid at intermediate- temperatures; Compatibilite des aciers au chrome-aluminium avec le gaz carbonique sous pression aux temperatures moyennes Energy Technology Data Exchange (ETDEWEB) Leclercq, D; Loriers, H; David, R; Darras, E [Commissariat a l' Energie Atomique, Saclay (France).
Centre d' Etudes Nucleaires 1964-07-01 With a view to their use in the exchangers of nuclear reactors of the graphite-gas or heavy water-gas types, the behaviour of chromium-aluminium steels containing up to 7 per cent chromium and 1.5 per cent aluminium has been studied in the presence of high-pressure carbon dioxide at temperatures of between 400 and 700 deg. C. The two most interesting grades of steel (2 per cent Cr - 0.35 per cent Al - 0.35 per cent Mo and 7 per cent Cr - 1.5 per cent Al - 0.6 per cent Si) are still compatible with carbon dioxide up to 550 and 600 deg. C respectively. A hot dip aluminised coating considerably increases resistance to oxidation of the first grade and should make possible its use up to temperatures of at least 600 deg. C. (authors) [French] Dans l'optique de leur emploi dans les echangeurs de reacteurs nucleaires des filieres graphite-gaz ou eau lourde-gaz, le comportement en presence de gaz carbonique sous pression d'aciers au chrome-aluminium, contenant jusqu'a 7 pour cent de chrome et 1,5 pour cent d'aluminium a ete etudie entre 400 et 700 deg. C. Les deux nuances les plus interessantes (2 pour cent Cr - 0,35 pour cent Al - 0,35 pour cent Mo et 7 pour cent Cr - 1,5 pour cent Al - 0,6 pour cent Si) restent compatibles avec le gaz carbonique jusqu'a 550 et 600 deg. C respectivement. Un revetement d'aluminium, effectue par immersion dans un bain fondu, ameliore notablement la resistance a l'oxydation de la premiere et doit permettre son empioi jusqu'a 600 deg. C au moins. (auteurs) 11. Decontamination tests using an electrolytic method on NS 22 S stainless steel discs (1963); Essais de decontamination par voie electrolytique de plaquettes en acier inoxydable NS 22 S (1963) Energy Technology Data Exchange (ETDEWEB) Boutot, P; Schipfer, P [Commissariat a l' Energie Atomique, Centre de Production de Plutonium, Marcoule (France). Centre d' Etudes Nucleaires 1963-07-01 These tests were carried out with a view to observing the results obtained by using electrolytic polishing for the decontamination operations. The experimental equipment consists essentially of a current rectifier and adjuster, and of an electrolytic cell fitted with an automatic system for thermal regulation and agitation. The samples made of type NS 22 S stainless steel are contaminated with an active solution of fission products. The activity of the discs is measured before and after each treatment using a bell counter. For each electrolyte, the parameters studied are: - the time, - the reactant concentration, - the temperature, - the current density, - the corrosion. Taking into account the necessity of transforming as little as possible the surface of the part to be treated, the best results are generally obtained using intermediate concentrations of the reactant, low temperatures, and high current densities. (authors) [French] Ces essais ont ete effectues dans le but d'observer les resultats obtenus en utilisant le procede de polissage electrolytique pour les operations de decontamination. Le dispositif experimental consiste essentiellement en un redresseur-variateur de courant et en une cellule electrolytique munie d'un systeme automatique de regulation thermique et d'agitation. Les echantillons en acier inoxydable du type NS 22 S sont contamines par une solution active de produits de fission. L'activite des plaquettes est mesuree avant et apres chaque traitement au moyen d'un compteur cloche. 
Pour chaque electrolyte, les parametres etudies sont: - le temps, - la concentration du reactif, - la temperature, - la densite de courant, - la corrosion. En tenant compte de la necessite de transformer au minimum l'etat de surface de la piece a traiter, les resultats les meilleurs sont generalement obtenus pour des concentrations moyennes de reactif, des temperatures basses et des densites de courant elevees. (auteurs) 12. Etude métallurgique du soudage par friction malaxage sur un acier à haute limite élastique destiné à la construction navale : le 80 HLES Metallurgical study of friction stir welding on a steel high yield for shipbuilding: The 80 HLES Directory of Open Access Journals (Sweden) Allart Marion 2013-11-01 Full Text Available Le soudage par friction malaxage est un procédé de soudage relativement récent (début des années 90. Il est aujourd'hui utilisé couramment sur des alliages légers mais ne l'est que depuis peu sur les aciers. L'objectif de nos travaux est de chercher à caractériser la microstructure métallurgique et l'état de déformation et de contrainte après soudage par friction malaxage sur des échantillons d'aciers à haute limite élastique utilisés dans l'industrie navale. Nous chercherons à comprendre les phénomènes métallurgiques qui interviennent en cours de soudage. The friction stir welding is a welding process relatively recent (early 90s. It is now commonly used on light alloys but is only recently on steels. The objective of our work is to try to characterize the metallurgical microstructure and state of stress and strain after friction stir welding on samples of high strength steels used in the shipbuilding industry. We seek to understand the metallurgical phenomena that occur during welding. 13. Microstructure and embrittlement of VVER 440 reactor pressure vessel steels; Microstructure et fragilisation des aciers de cuve des reacteurs nucleaires VVER 440 Energy Technology Data Exchange (ETDEWEB) Hennion, A 1999-03-15 27 VVER 440 pressurised water reactors operate in former Soviet Union and in Eastern Europe. The pressure vessel, is made of Cr-Mo-V steel. It contains a circumferential arc weld in front of the nuclear core. This weld undergoes a high neutron flux and contains large amounts of copper and phosphorus, elements well known for their embrittlement potency under irradiation. The embrittlement kinetic of the steel is accelerated, reducing the lifetime of the reactor. In order to get informations on the microstructure and mechanical properties of these steels, base metals, HAZ, and weld metals have been characterized. The high amount of phosphorus in weld metals promotes the reverse temper embrittlement that occurs during post-weld heat treatment. The radiation damage structure has been identified by small angle neutron scattering, atomic probe, and transmission electron microscopy. Nanometer-sized clusters of solute atoms, rich in copper with almost the same characteristics as in western pressure vessels steels, and an evolution of the size distribution of vanadium carbides, which are present on dislocation structure, are observed. These defects disappear during post-irradiation tempering. As in western steels, the embrittlement is due to both hardening and reduction of interphase cohesion. The radiation damage specificity of VVER steels arises from their high amount of phosphorus and from their significant density of fine vanadium carbides. (author) International Nuclear Information System (INIS) Murray, D.R. 
1990-01-01 The author presents his arguments for food scientists and biologists that the hazards of food irradiation outweigh the benefits. The subject is discussed in the following sections: introduction (units, mutagenesis, seed viability), history of food irradiation, effects of irradiation on organoleptic qualities of staple foods, radiolytic products and selective destruction of nutrients, production of microbial toxins in stored irradiated foods and loss of quality in wheat, deleterious consequences of eating irradiated foods, misrepresentation of the facts about food irradiation. (author) 15. Wear studies in the shearing process by means of irradiated tools; Etudes d'usure dans les operations de cisaillement, au moyen d'outils irradies; Issledovaniya problemy iznosa v protsesse skalyvaniya posredstvom obluchennykh instrumentov; Estudios de desgaste en las operaciones de cizallamiento, realizados con ayuda de herramientas irradiadas Energy Technology Data Exchange (ETDEWEB) Sata, Toshio; Abe, Kunio; Nakajima, Kiyoshi [Institute of Physical and Chemical Research, Komagome, Bunkyo-Ku, Tokyo (Japan) 1962-01-15 CERN Document Server Gkotse, Blerina; Carbonez, Pierre; Danzeca, Salvatore; Fabich, Adrian; Garcia, Alia, Ruben; Glaser, Maurice; Gorine, Georgi; Jaekel, Martin, Richard; Mateu,Suau, Isidre; Pezzullo, Giuseppe; Pozzi, Fabio; Ravotti, Federico; Silari, Marco; Tali, Maris 2017-01-01 CERN provides unique irradiation facilities for applications in many scientific fields. This paper summarizes the facilities currently operating for proton, gamma, mixed-field and electron irradiations, including their main usage, characteristics and information about their operation. The new CERN irradiation facilities database is also presented. This includes not only CERN facilities but also irradiation facilities available worldwide. International Nuclear Information System (INIS) Newsome, R.L. 1987-01-01 A brief review summarizes current scientific information on the safety and efficacy of irradiation processing of foods. Attention is focused on: specifics of the irradiation process and its effectiveness in food preservation; the historical development of food irradiation technology in the US; the response of the Institute of Food Technologists to proposed FDA guidelines for food irradiation; the potential uses of irradiation in the US food industry; and the findings of the absence of toxins and of unaltered nutrient density (except possibly for fats) in irradiated foods. The misconceptions of consumers concerning perceived hazards associated with food irradiation, as related to consumer acceptance, also are addressed Energy Technology Data Exchange (ETDEWEB) Shinohara, K 1969-12-20 The efficiency of an electron beam irradiating device is heightened by improving the irradiation atmosphere and the method of cooling the irradiation window. An irradiation chamber one side of which incorporates the irradiation windows provided at the lower end of the scanner is surrounded by a suitable cooling system such as a coolant piping network so as to cool the interior of the chamber which is provided with circulating means at each corner to circulate and thus cool an inert gas charged therewithin. 
The inert gas, chosen from a group of such gases which will not deleteriously react with the irradiating equipment, forms a flowing stream across the irradiation window to effect its cooling and does not contaminate the vacuum exhaust system or oxidize the filament when penetrating the equipment through any holes which the foil at the irradiation window may incur during the irradiating procedure. International Nuclear Information System (INIS) Huebner, G. 1992-01-01 The necessary dose and the dosage limits to be observed depend on the kind of product and the purpose of irradiation. Product density and density distribution, product dimensions, but also packaging, transport and storage conditions are specific parameters influencing the conditions of irradiation. The kind of irradiation plant - electron accelerator or gamma plant - its capacity, transport system and geometric arrangement of the radiation field are factors influencing the irradiation conditions as well. This is exemplified by the irradiation of three different products: onions, deep-frozen chicken and high-protein feed. Feasibilities and limits of the irradiation technology are demonstrated. (orig.) [de] International Nuclear Information System (INIS) 1991-01-01 This fact sheet gives the cost of a typical food irradiation facility (US $1 million to US $3 million) and of the food irradiation process (US $10-15 per tonne for low-dose applications; US $100-250 per tonne for high-dose applications). These treatments also bring consumer benefits in terms of availability, storage life and improved hygiene. 2 refs International Nuclear Information System (INIS) Zhu Jiang 1994-01-01 In this paper, the author discusses the recent situation of food irradiation in China - its history, facilities, clearance and commercialization - with emphasis on market testing and public acceptance of irradiated food. (author) CERN Document Server Lecomte, Pierre 2005-01-01 Before shipment to CMS, all PbWO4 crystals produced in China are irradiated there with 60Co in order to ensure that the induced absorption coefficient is within specifications. Acceptance tests at CERN and at ENEA also include irradiation with gamma rays from 60Co sources. There were initially discrepancies in quoted doses and dose rates as well as in induced absorption coefficients. The present work resolves the discrepancies in irradiation measurements and defines common dosimetry methods for consistency checks between irradiation facilities. International Nuclear Information System (INIS) Reineccius, G.A. 1992-01-01 Flavor will not be a significant factor in determining the success of irradiated foods entering the U.S. market. The initial applications will use low levels of irradiation that may well result in products with flavor superior to that of products from alternative processing techniques (thermal treatment or chemical fumigation). The success of shelf-stable foods produced via irradiation may be much more dependent upon our ability to deal with the flavor aspects of high levels of irradiation. International Nuclear Information System (INIS) Kooij, J. van 1984-01-01 In the past fifteen years, food irradiation processing policies and programmes have been developed both by a number of individual countries, and through projects supported by FAO, IAEA and WHO.
These aim at achieving general acceptance and practical implementation of food irradiation through rigorous investigations of its wholesomeness, technological and economic feasibility, and efforts to achieve the unimpeded movement of irradiated foods in international trade. Food irradiation processing has many uses International Nuclear Information System (INIS) Bolumen, S.; Espinosa, R. 1997-01-01 The preservation of food by irradiation is promising technology which increases industrial application. Packaging of irradiated foods is an integral part of the process. Judicious selection of the package material for successful trade is essential. In this paper is presented a brief review of important aspects of packaging in food irradiation [es International Nuclear Information System (INIS) Sjoeberg, A.M. 1993-01-01 Foodstuffs are irradiated to make them keep better. The ionizing radiation is not so strong as to cause radioactivity in the foodstuffs. At least so far, irradiation has not gained acceptance among consumers, although it has been shown to be a completely safe method of preservation. Irradiation causes only slight chemical changes in food. What irradiation does, however, is to damage living organisms, such as bacteria, DNA and proteins, thereby making the food keep longer. Irradiation can be detected from the food afterwards; thus it can be controlled effectively. (orig.) 7. Prévision de l'épaisseur du film passif d'un acier inoxydable 316L soumis au fretting corrosion grâce au Point Defect Model, PDM Predicting the steady state thickness of passive films with the Point Defect Model in fretting corrosion experiments Directory of Open Access Journals (Sweden) Geringer Jean 2013-11-01 Full Text Available Les implants orthopédiques de hanche ont une durée de vie d'environ 15 ans. Par exemple, la tige fémorale d'un tel implant peut être réalisée en acier inoxydable 316L ou 316LN. Le fretting corrosion, frottement sous petits déplacements, peut se produire pendant la marche humaine en raison des chargements répétés entre le métal de la prothèse et l'os. Plusieurs investigations expérimentales du fretting corrosion ont été entreprises. Cette couche passive de quelques nanomètres, à température ambiante, est le point clef sur lequel repose le développement de notre civilisation, selon certains auteurs. Ce travail vise à prédire les épaisseurs de cette couche passive de l'acier inoxydable soumis au fretting corrosion, avec une attention spécifique sur le rôle des protéines. Le modèle utilisé est basé sur le Point Defect Model, PDM (à une échelle microscopique et une amélioration de ce modèle en prenant en compte le processus de frottement sous petits débattements. L'algorithme génétique a été utilisé pour optimiser la convergence du problème. Les résultats les plus importants sont, comme démontré avec les essais expérimentaux, que l'albumine, la protéine étudiée, empêche les dégradations de l'acier inoxydable aux plus faibles concentrations d'ions chlorure ; ensuite, aux plus fortes concentrations de chlorures, un temps d'incubation est nécessaire pour détruire le film passif. Some implants have approximately a lifetime of 15 years. The femoral stem, for example, should be made of 316L/316LN stainless steel. Fretting corrosion, friction under small displacements, should occur during human gait, due to repeated loadings and un-loadings, between stainless steel and bone for instance. Some experimental investigations of fretting corrosion have been practiced. 
As is well known, metallic alloys and especially stainless steels are covered with a passive film that protects them from corrosion and degradation. 8. Measurement of the in-pile core temperature of an EL-4 pencil element, first charge (can of type-347 stainless steel, 0.4 mm thick, UO2 fuel, 11 mm diameter). Determination of the apparent thermal conductivity integral of in-pile UO2; Mesure de la temperature a coeur en pile d'un crayon EL-4 1er jeu (gaine acier inoxydable, nuance 347 - epaisseur 0,4 mm - combustible UO2 - diametre 11 mm). Determination de l'integrale de conductibilite thermique apparente de l'UO2 en pile Energy Technology Data Exchange (ETDEWEB) Lavaud, B; Ringot, C; Vignesoult, N [Commissariat a l'Energie Atomique, Centre d'Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)] 1966-11-01 The central temperature of an EL-4 fuel element of the first-charge type, with a stainless steel can, is measured in-pile at the centre of the pin using a high-temperature tungsten-rhenium thermocouple with a tantalum sheath. The pin is placed under operating conditions similar to those of EL-4 as regards specific power, can temperature and external pressure on the can. The specific power is obtained in the EL-3 reactor with a UO2 enrichment slightly higher than that normally planned for EL-4. The target can temperature and pressure are achieved by means of a zircaloy-2 irradiation container filled with NaK and adapted to the conditions of the EL-3 reactor. With the central UO2 temperature and the can surface temperature measured, the power calculated from the heat exchanges in the container (calibrated in the laboratory), and the temperature drop at the UO2-can contact deduced from laboratory measurements made under comparable heat-flux conditions and under a gas atmosphere corresponding to the beginning of the fuel element's life, the thermal conductivity integral curve can be plotted. Micrographic examination of the oxide structure makes it possible to check the temperature distribution in the oxide deduced from the thermal conductivity integral. (authors) 9. Sensory properties of irradiated foods International Nuclear Information System (INIS) Plestenjak, A. 1997-01-01 Food irradiation is a simple and effective preservation technique. The changes caused by irradiation depend on the composition of the food, on the absorbed dose, the water content and the temperature during and after irradiation. In this paper the changes of food components caused by irradiation, the doses for various food irradiation treatments, the foods and countries where irradiation is allowed, and the sensory properties of irradiated food are reviewed. 10. Irradiation - who needs it? International Nuclear Information System (INIS) Scoular, C. 1994-01-01 In this paper the public's attitudes to the irradiation of food to ensure it is bacteria free and to prolong shelf-life are considered. The need to label irradiated food and to educate the public about its implications are emphasised. The opinions of the large food retailers, who maintain that high standards in food processing, hygiene and refrigeration eliminate the need for food irradiation, are discussed. (UK) International Nuclear Information System (INIS) Spiegelberg, A.; Heide, L.; Boegl, K.W. 1990-01-01 Frozen chicken and chicken parts were irradiated at a dose of 5 kGy with Co-60.
The irradiated chicken and chicken parts were identified by determination of three radiation-induced hydrocarbons from the lipid fraction. Isolation was carried out by high-vacuum distillation with a cold-finger apparatus. The detection of the hydrocarbons was possible in all irradiated samples by gas chromatography/mass spectrometry. (orig.) [de] International Nuclear Information System (INIS) Basson, R.A. 1989-01-01 Food irradiation technology in South Africa is about to take its rightful place next to existing food preservation methods in protecting food supplies. This is a result of several factors, the most important of which is the decision by the Department of Health and Population Development to introduce compulsory labelling of food irradiation. The factors influencing food irradiation technology in South Africa are discussed. International Nuclear Information System (INIS) Anon. 1981-01-01 This project is designed to improve the techniques of blood irradiation through the development of improved and portable blood irradiators. A portable blood irradiator, consisting of a vitreous carbon body and a thulium-170 radiation source, was attached to dogs via a carotid-jugular shunt, and its effects on the immune system were measured. The device has demonstrated both significant suppression of circulating lymphocytes and prolonged retention of skin allografts. 14. Étude expérimentale du comportement cyclique d'un acier du type 316 L sous chargement multiaxial complexe en traction-torsion-pressions interne et externe Experimental study of the cyclic behaviour of a 316 L type steel under complex multiaxial tension-torsion-internal and external pressure loading Science.gov (United States) Bocher, L.; Delobelle, P. 1997-09-01 The results are very rich in information and lead to a classification of the different types of loading, with two or three cyclic components, with respect to the observed supplementary hardening. This classification was established as follows: i) The in-phase tests with two or three components (δ = φ = 0°); no supplementary hardening is observed. ii) The tension-pressure tests such as r_1 = 1, φ = 90° and r_1 = -1, φ = 60°; the hardening is slightly lower than that of the tension-torsion tests. iii) The tension-torsion tests such as r_2 = 1 and δ = 90°, where a substantial additional hardening takes place. iv) The tension-torsion-pressure tests where the three components are strongly shifted, namely r_1 = r_2 = 1, δ = 90° and φ = 60°, and r_2 = 1, r_1 = -1, δ = 41.4° and φ = 82.8°. The hardening is slightly higher than that recorded in tension-torsion. A more thorough study is in preparation which considers all the possible combinations in tension-torsion-pressure, and will be performed on the same material. The early results tend to validate the observations presented in this article. Cette étude réside dans la détermination expérimentale du comportement à la température ambiante de l'acier inoxydable 316 L sous chargement cyclique non proportionnel en traction-torsion-pressions interne et externe. Les deux ou trois déformations sinusoïdales appliquées sont soit en phase, soit hors-phase et l'on étudie l'amplitude du durcissement supplémentaire en fonction du degré de multiaxialité. On présente quelques boucles stabilisées typiques. Par rapport au durcissement supplémentaire maximal, les différents essais peuvent être classés comme suit: essais en phase (pas de durcissement supplémentaire), essais de traction-pressions hors-phase, essais de traction-torsion hors phase et essais de traction-torsion-pressions avec déphasages conséquents.
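The Bocher and Delobelle entry above classifies its tests by the amplitude ratios r_1, r_2 and the phase shifts δ, φ between sinusoidal strain components. As a rough illustration of what such proportional and non-proportional command paths look like, the following Python sketch generates them; the reading of r_1, r_2, δ and φ as amplitude ratios of the pressure-induced and torsional components and their phase lags relative to the axial strain is an assumption here and may not match the paper's exact definitions.

```python
import numpy as np

# Illustrative sketch (not taken from the paper): sinusoidal command strain
# histories for multiaxial tension-torsion-pressure cycling. r1 and r2 are
# assumed amplitude ratios of the hoop (pressure) and shear (torsion)
# components to the axial strain; delta and phi are their phase lags.

def strain_path(eps_a, r1, r2, delta_deg, phi_deg, n_cycles=2, n_pts=500):
    """Return time, axial, shear and hoop strain histories."""
    t = np.linspace(0.0, n_cycles, n_pts)                      # time in cycles
    wt = 2.0 * np.pi * t
    eps_ax = eps_a * np.sin(wt)                                 # tension component
    gamma = r2 * eps_a * np.sin(wt - np.radians(delta_deg))     # torsion component
    eps_hoop = r1 * eps_a * np.sin(wt - np.radians(phi_deg))    # pressure component
    return t, eps_ax, gamma, eps_hoop

# The four loading classes listed in the abstract, from in-phase (no extra
# hardening) to the strongly shifted three-component paths:
cases = {
    "in-phase (delta = phi = 0 deg)":           dict(r1=1.0, r2=1.0, delta_deg=0.0, phi_deg=0.0),
    "tension-pressure (r1 = 1, phi = 90 deg)":  dict(r1=1.0, r2=0.0, delta_deg=0.0, phi_deg=90.0),
    "tension-torsion (r2 = 1, delta = 90 deg)": dict(r1=0.0, r2=1.0, delta_deg=90.0, phi_deg=0.0),
    "three components strongly shifted":        dict(r1=-1.0, r2=1.0, delta_deg=41.4, phi_deg=82.8),
}

for name, p in cases.items():
    t, e, g, h = strain_path(eps_a=0.005, **p)
    # In-phase paths keep the strain components collinear; out-of-phase paths
    # rotate the principal directions, which is what drives the extra cyclic
    # hardening reported for 316L.
    print(f"{name}: max axial {e.max():.4f}, max shear {g.max():.4f}, max hoop {h.max():.4f}")
```

The sketch only builds the command histories; the stabilized stress-strain loops and the supplementary hardening discussed in the abstract would come from a constitutive model or from the experiments themselves.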
International Nuclear Information System (INIS) Lindell, B.; Danielsson-Tham, M.L.; Hoel, C. 1983-01-01 A committee has, on instructions from the Swedish government, made an inquiry into the possible effects on health and the working environment of food irradiation. In this report, a review is presented of the known positive and negative effects of food irradiation. Costs, availability, shelf life and quality of irradiated food are also discussed. According to the report, the production of radiolysis products during irradiation is not easily evaluated. The health risks from irradiation of spices are estimated to be lower than the risks associated with the ethylene oxide treatment presently used. (L.E.) International Nuclear Information System (INIS) Foeldiak, Gabor; Stenger, Vilmos. 1983-01-01 The main parameters and the preparation procedures of the gamma radiation sources frequently applied for irradiation purposes are discussed. In addition to 60Co and 137Cs sources, nuclear power plants also offer further opportunities: spent fuel elements and the products of certain (n,γ) reactions can serve as irradiation sources. Laboratory-scale equipment, pilot-plant facilities for batch or continuous operation, continuous industrial irradiators and special multipurpose, mobile and panorama-type facilities are reviewed, including those in Canada, the USA, India, the Soviet Union, Hungary, the UK, Japan and Australia. For irradiator design, the dependence of the spatial distribution of dose rates on the source geometry can be calculated. (V.N.) International Nuclear Information System (INIS) Beaumariage, M.L.; Hiesche, K.; Revesz, L.; Haot, J. 1975-01-01 In sublethally irradiated CBA mice, the relative and absolute numbers of spontaneous rosette-forming cells against sheep erythrocytes are markedly decreased in bone marrow. The decrease in the absolute number of spontaneous RFC is also substantial in the spleen, in spite of an increase of the relative RFC number above normal values between the 8th and 12th day after irradiation. The graft of normal bone marrow cells immediately after irradiation, or the shielding of a medullary area during irradiation, promotes the recovery of the immunocytoadherence capacity of the bone marrow cells but not of the spleen cells [fr] International Nuclear Information System (INIS) Gulis, I.G.; Evdokimenko, V.M.; Lapkovskij, M.P.; Petrov, P.T.; Gulis, I.M.; Markevich, S.V. 1977-01-01 A visible fluorescence has been observed in γ-irradiated aqueous solutions of carbohydrates. Two bands have been distinguished in the fluorescence spectra of the irradiated dextran solution: a short-wave band with λ_max = 140 nm (where λ is the wavelength) at λ_β = 380 nm and a long-wave band with λ_max = 540 nm at λ_β = 430 nm. A similar form of the spectrum has been obtained for irradiated solutions of starch, amylopectin and low-molecular glucose. It has been concluded that the macromolecules of polysaccharides include fluorescent centres. A relation between the fluorescence and the α-oxyketone groups formed under irradiation has been pointed out. Energy Technology Data Exchange (ETDEWEB) Caha, A; Krystof, V [Vyzkumny Ustav Klinicke a Experimentalni Onkologie, Brno (Czechoslovakia)] 1979-07-01 The principles of irradiation planning are discussed, i.e., the use of the various methods of locating a pathological focus and the possibility of semiautomatic transmission of the obtained data onto a two-dimensional or spatial model.
Efficient equipment is proposed for large irradiation centres, which should cooperate with smaller irradiation departments for which a range of apparatus is also proposed. Irradiation planning currently applied at the Research Institute of Clinical and Experimental Oncology in Brno is described. In conclusion, some of the construction principles of semi-automatic operation of radiotherapy departments are discussed. 20. Food irradiation: fiction and reality International Nuclear Information System (INIS) 1991-01-01 International Nuclear Information System (INIS) 1991-01-01 This fact sheet addresses the safety of irradiated food. The irradiation process produces very little chemical change in food, and laboratory experiments have shown no harmful effects in animals fed with irradiated milk powder. 3 refs CSIR Research Space (South Africa) Kok, S 2011-01-01 Full Text Available A new method is proposed to predict the irradiation-induced property changes in nuclear graphite, including the effect of a change in irradiation temperature. The currently used method to account for changes in irradiation temperature, the scaled... International Nuclear Information System (INIS) Bugyaki, L. 1977-01-01 International Nuclear Information System (INIS) Chmielewski, A.G. 2007-01-01 The application of radiation in pharmaceutical sciences and cosmetology, polymer materials, the food industry, the environment, health care products and packaging production is described. Nanotechnology is described in more detail because it is less well known as a technology that uses irradiation. The economic influence of irradiation on the value added to materials is shown. International Nuclear Information System (INIS) Colomez, Gerard; Veyrat, J.F. 1981-01-01 Irradiation trials conducted in materials-testing reactors should provide a better understanding of the phenomena which characterize the working and evolution in time of electricity-generating nuclear reactors. The authors begin by outlining the objectives behind experimental irradiation (applied to the various nuclear reactor types) and then describe the special techniques deployed to achieve these objectives [fr] International Nuclear Information System (INIS) Webb, Tony; Lang, Tim 1987-01-01 The London Food Commission summarizes its concerns about the use of food irradiation in the U.K. resulting from its working group surveys of general public opinion, trading standards officers and the food industry in the U.K., and from experience in countries already permitting irradiation of a variety of foods. (U.K.) Energy Technology Data Exchange (ETDEWEB) 1982-01-01 The volume contains reports from 19 countries on the state of the project in the field of food irradiation (fruit, vegetables, meat, spices) by means of gamma rays. The tests ran up to 1982. Microbiological radiosensitivity and mutagenicity tests provide a yardstick for irradiation efficiency. International Nuclear Information System (INIS) Hamilton, M. 1990-01-01 The author explains in simple question and answer form what is entailed in the irradiation of food and attempts to dispel some of the anxieties surrounding the process. Benefits and limitations, controls, labelling, safety, and tests for the detection of the use of irradiation in food preparation are some of the topics dealt with in outline. (author) International Nuclear Information System (INIS) Anon.
1987-01-01 Recent US Food and Drug Administration approval of irradiation treatment for fruit, vegetables and pork has stimulated considerable discussion in the popular press on the safety and efficacy of irradiation processing of food. This perspective is designed to summarize the current scientific information available on this issue Energy Technology Data Exchange (ETDEWEB) Kawabata, T. 1981-09-15 International Nuclear Information System (INIS) Vestey, J.P.; Hunter, J.A.A.; Mallet, R.B.; Rodger, A. 1989-01-01 The authors have recently seen 3 patients affected by a widespread eruption of minute keratoses confined to areas of irradiated skin with clinical and histologial features of which they have been unable to find previous literary descriptions. A fourth patient with similar clinical and histopathological features occurring after exposure only to actinic irradiation is described. (author) Energy Technology Data Exchange (ETDEWEB) Vestey, J.P.; Hunter, J.A.A. (Royal Infirmary, Edinburgh (UK)); Mallet, R.B. (Westminster Hospital, London (UK)); Rodger, A. (Western General Hospital, Edinburgh (UK)) 1989-03-01 The authors have recently seen 3 patients affected by a widespread eruption of minute keratoses confined to areas of irradiated skin with clinical and histologial features of which they have been unable to find previous literary descriptions. A fourth patient with similar clinical and histopathological features occurring after exposure only to actinic irradiation is described. (author). International Nuclear Information System (INIS) Quere, Y. 1989-01-01 Most superconductors are quite sensitive to irradiation defects. Critical temperatures may be depressed, critical currents may be increased, by irradiation, but other behaviours may be encountered. In compounds, the sublattice in which defects are created is of significant importance. 24 refs International Nuclear Information System (INIS) Labots, H.; Huis in 't Veld, G.J.P.; Verrips, C.T. 1985-01-01 After a review of several methods for the preservation of food and the routes of food infections, the following chapters are devoted to the preservation by irradiation. Applications and legal aspects of food irradiation are described. Special reference is made to the international situation. (Auth.) International Nuclear Information System (INIS) Ley, F.J. 1988-01-01 A brief review is given of the control and monitoring of food irradiation with particular emphasis on the UK situation. After describing legal aspects, various applications of food irradiation in different countries are listed. Other topics discussed include code of practice for general control for both gamma radiation and electron beam facilities, dose specification, depth dose distribution and dosimetry. (U.K.) International Nuclear Information System (INIS) Fowler, S.L. 1979-01-01 Irradiated film having substantial uniformity in the radiation dosage profile is produced by irradiating the film within a trough having lateral deflection blocks disposed adjacent the film edges for deflecting electrons toward the surface of the trough bottom for further deflecting the electrons toward the film edge Energy Technology Data Exchange (ETDEWEB) Ubic, Rick; Butt, Darryl; Windes, William 2014-03-13 An understanding of the underlying mechanisms of irradiation creep in graphite material is required to correctly interpret experimental data, explain micromechanical modeling results, and predict whole-core behavior. 
This project will focus on experimental microscopic data to demonstrate the mechanism of irradiation creep. High-resolution transmission electron microscopy should be able to image both the dislocations in graphite and the irradiation-induced interstitial clusters that pin those dislocations. The team will first prepare and characterize nanoscale samples of virgin nuclear graphite in a transmission electron microscope. Additional samples will be irradiated to varying degrees at the Advanced Test Reactor (ATR) facility and similarly characterized. Researchers will record microstructures and crystal defects and suggest a mechanism for irradiation creep based on the results. In addition, the purchase of a tensile holder for a transmission electron microscope will allow, for the first time, in situ observation of creep behavior on the microstructure and crystallographic defects. International Nuclear Information System (INIS) Lunt, R.E. 1987-01-01 Mechanical handling apparatus is adapted to handle goods, such as boxed fruit, in palletized form during a process of irradiation. Palletized goods are loaded onto wheeled vehicles in a loading zone. Four vehicles are wheeled on a track into an irradiation zone via a door in a concrete shield. The vehicles are arranged in orthogonal relationship around a source of square section. Turntables are positioned at corners of the square-shaped rail truck around the source selectively to turn the vehicles to align them with track sections. Mechanical manipulating devices are positioned in the track sections opposite the sides of the source. During irradiation, the vehicles and their palletized goods are cyclically moved toward the source to offer first sides of the goods for irradiation, and are retracted from the source and pivoted through 90° to present succeeding sides of the goods for irradiation. International Nuclear Information System (INIS) Kilcast, D. 1990-01-01 Food irradiation is used to improve the safety of food by killing insects and microorganisms, to inhibit sprouting in crops such as onions and potatoes and to control ripening in agricultural produce. In order to prevent re-infestation and re-contamination it is essential that the food is suitably packed. Consequently, the packaging material is irradiated whilst in contact with the food, and it is important that the material is resistant to radiation-induced changes. In this paper the nature of the irradiation process is reviewed briefly, together with the known effects of irradiation on packaging materials and their implications for the effective application of food irradiation. Recent research carried out at the Leatherhead Food RA on the possibility of taint transfer into food is described. (author) International Nuclear Information System (INIS) Mills, S. 1987-04-01 This discussion paper has two goals: first, to raise public awareness of food irradiation, an emerging technology in which Canada has the potential to build a new industry, mainly oriented to promising overseas markets; and second, to help build consensus among government and private sector decision makers about what has to be done to realize the domestic and export potential.
The following pages discuss the potential of food irradiation; indicate how food is irradiated; outline the uses of food irradiation; examine questions of the safety of the equipment and both the safety and nutritional value of irradiated food; look at international commercial developments; assess the current and emerging domestic scene; and finally, draw some conclusions and offer suggestions for action International Nuclear Information System (INIS) Vijayaprabhu, N.; Saravanan, K.S.; Gunaseelan; Vivekanandam, S.; Reddy, K.S.; Parthasarathy; Mourougan, S.; Elangovan, K. 2008-01-01 Extracorporeal irradiation (ECI) involves irradiation of body tissues, particularly malignant bones of the extremities, outside the body. This involves en bloc resection of the tumour, extracorporeal irradiation of the bone segment with a single dose of 50 Gy or more, and reimplantation of the irradiated bone with fixation devices. Bone tumours like Ewing's Sarcoma, Chondrosarcoma and Oesteosarcoma; in the involved sites like femur, tibia, humerus, ilium and sacrum can be treated with ECI. The reimplanted bone simply acts as a framework for appositional bone growth from surrounding healthy bones. The conventional indications for postoperative irradiation are still applied. The major advantages of ECI are the precise anatomic fit of the reimplanted bone segment, preservation of joint mobility and its potential in avoiding the growth discrepancy commonly seen in prosthetic replacement. The use of ECI was first described in 1968 and practiced in Australia since 1996. In our center, we have completed six ECIs Energy Technology Data Exchange (ETDEWEB) Birtcher, R.C.; Ewing, R.C.; Matzke, Hj.; Meldrum, A.; Newcomer, P.P.; Wang, L.M.; Wang, S.X.; Weber, W.J. 1999-08-09 International Nuclear Information System (INIS) Narvaiz, Patricia 2009-01-01 International Nuclear Information System (INIS) Zarling, J.P.; Swanson, R.B.; Logan, R.R. 1988-01-01 The ninety-ninth US Congress commissioned a six-state food irradiation research and development program to evaluate the commercial potential of this technology. Hawaii, Washington, Iowa, Oklahoma and Florida as well as Alaska have participated in the national program; various food products including fishery products, red meats, tropical and citrus fruits and vegetables have been studied. The purpose of the Alaskan study was to review and evaluate those factors related to the technical and economic feasibility of an irradiator in Alaska. This options analysis study will serve as a basis for determining the state's further involvement in the development of food irradiation technology. 40 refs., 50 figs., 53 tabs International Nuclear Information System (INIS) Stevanovic, M. 1965-10-01 Based on the review of the available literature concerned with UO 2 irradiation, this paper describes and explains the phenomena initiated by irradiation of the UO 2 fuel in a reactor dependent on the burnup level and temperature. A comprehensive review of UO 2 radiation damage studies is given as a broad research program. This part includes the abilities of our reactor as well as needed elements for such study. The third part includes the definitions of the specific power, burnup level and temperature in the center of the fuel element needed for planning and performing the irradiation. Methods for calculating these parameters are included [sr International Nuclear Information System (INIS) Meier, W. 1991-01-01 Foods, e.g. 
chicken, shrimps, frog legs, spices, different dried vegetables, potatoes and fruits are legally irradiated in many countries and are probably also exported into countries, which do not permit irradiation of any food. Therefore all countries need analytical methods to determine whether food has been irradiated or not. Up to now, two physical (ESR-spectroscopy and thermoluminescence) and two chemical methods (o-tyrosine and volatile compounds) are available for routine analysis. Several results of the application of these four mentioned methods on different foods are presented and a short outlook on other methods (chemiluminescence, DNA-changes, biological assays, viscometric method and photostimulated luminescence) will be given. (author) Energy Technology Data Exchange (ETDEWEB) Chouraqui, A; Creuzillet, C; Barrat, J [Hopital Saint-Antoine, 75 - Paris (France) 1985-04-21 Energy Technology Data Exchange (ETDEWEB) Hering, H; Perio, P; Seguin, M [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires 1959-07-01 While fast neutrons only are effective in damaging graphite, results of irradiations are more or less universally expressed in terms of thermal neutron fluxes. This paper attempts to correlate irradiations made in different reactors, i.e., in fluxes of different spectral compositions. Those attempts are based on comparison of 1) bulk length change and volume expansion, and 2) crystalline properties (e.g., lattice parameter C, magnetic susceptibility, stored energy, etc.). The methods used by various authors for determining the lattice constants of irradiated graphite are discussed. (author) International Nuclear Information System (INIS) 1991-01-01 This fact sheet discusses market testing of irradiate food, consumer response to irradiated products has always been positive, and in some countries commercial quantities of some irradiated food items have been sold on a regular basis. Consumers have shown no reluctance to buy irradiated food products. 4 refs Energy Technology Data Exchange (ETDEWEB) Chmielewski, A G [Institute of Nuclear Chemistry and Technology, Warsaw (Poland) 2006-07-01 Joint FAO/IAEA/WHO Expert Committee approved the use of radiation treatment of foods. Nowadays food packaging are mostly made of plastics, natural or synthetic, therefore effect of irradiation on these materials is crucial for packing engineering for food irradiation technology. By selecting the right polymer materials for food packaging it can be ensured that the critical elements of material and product performance are not compromised. When packaging materials are in contact with food at the time of irradiation that regulatory approvals sometimes apply. The review of the R-and-D and technical papers regarding material selection, testing and approval is presented in the report. The most information come from the USA where this subject is well elaborated, the International Atomic Energy Agency (IAEA) reports are reviewed as well. The report can be useful for scientists and food irradiation plants operators. (author) International Nuclear Information System (INIS) Chmielewski, A.G. 2006-01-01 Joint FAO/IAEA/WHO Expert Committee approved the use of radiation treatment of foods. Nowadays food packaging are mostly made of plastics, natural or synthetic, therefore effect of irradiation on these materials is crucial for packing engineering for food irradiation technology. 
By selecting the right polymer materials for food packaging it can be ensured that the critical elements of material and product performance are not compromised. When packaging materials are in contact with food at the time of irradiation that regulatory approvals sometimes apply. The review of the R-and-D and technical papers regarding material selection, testing and approval is presented in the report. The most information come from the USA where this subject is well elaborated, the International Atomic Energy Agency (IAEA) reports are reviewed as well. The report can be useful for scientists and food irradiation plants operators. (author) International Nuclear Information System (INIS) Oztasiran, I. 1984-01-01 Irradiation is a physical process for treating food and as such it is comparable to other processing techniques such as heating or freezing foods for preservation. The energy level used in food irradiation is always below that producing radioactivity in the treated food, hence this aspect can be totally excluded in wholesomeness evaluations. Water is readily ionized and may be the primary source of ionization in foods with secondary effects on other molecules, possibly more a result of water ionization than of direct hits. In the presence of oxygen, highly reactive compounds may be produced, such as H, H 3 0+ and H 2 O 2 . Radiation at the energy flux levels used for food (<2 MeV) does not induce radioactivity. Food irradiation applications are already technically and economically feasible and that food so treated is suitable for consumption. Food irradiation techniques can play an important role for an improved preservation, storage and distribution of food products. (author) International Nuclear Information System (INIS) Martin, G.; Bellon, P.; Soisson, F. 1997-01-01 During the last two decades, some effort has been devoted to establishing a phenomenology for alloys under irradiation. Theoretically, the effects of the defect supersaturation, sustained defect fluxes and ballistic mixing on solid solubility under irradiation can now be formulated in a unified manner, at least for the most simple cases: coherent phase transformations and nearest-neighbor ballistic jumps. Even under such restrictive conditions, several intriguing features documented experimentally can be rationalized, sometimes in a quantitative manner and simple qualitative rules for alloy stability as a function of irradiation conditions can be formulated. A quasi-thermodynamic formalism can be proposed for alloys under irradiation. However, this point of view has limits illustrated by recent computer simulations. (orig.) International Nuclear Information System (INIS) Henon, Y.M. 1995-01-01 Food irradiation already has a long history of hopes and disappointments. Nowhere in the world it plays the role that it should have, including in the much needed prevention of foodborne diseases. Irradiated food sold well wherever consumers were given a chance to buy them. Differences between national regulations do not allow the international trade of irradiated foods. While in many countries food irradiation is still illegal, in most others it is regulated as a food additive and based on the knowledge of the sixties. Until 1980, wholesomeness was the big issue. Then the ''prerequisite'' became detection methods. Large amounts of money have been spent to design and validate tests which, in fact, aim at enforcing unjustified restrictions on the use of the process. 
In spite of all the difficulties, it is believed that the efforts of various UN organizations and a growing legitimate demand for food safety should in the end lead to recognition and acceptance. (Author) International Nuclear Information System (INIS) Deitch, J. 1982-01-01 This article examines the cost competitiveness of the food irradiation process. An analysis of the principal factors--the product, physical plant, irradiation source, and financing--that impact on cost is made. Equations are developed and used to calculate the size of the source for planned product throughput, efficiency factors, power requirements, and operating costs of sources, radionuclides, and accelerators. Methods of financing and capital investment are discussed. A series of tables show cost breakdowns of sources, buildings, equipment, and essential support facilities for both a cobalt-60 and a 10-MeV electron accelerator facility. Additional tables present irradiation costs as functions of a number of parameters--power input, source size, dose, and hours of annual operation. The use of the numbers in the tables are explained by examples of calculations of the irradiation costs for disinfestation of grains and radicidation of feed International Nuclear Information System (INIS) 1982-01-01 From the start the Netherlands has made an important contribution to the irradiation of food through microbiological and toxicological research as well as through the setting-up of a pilot plant by the government and through the practical application of 'Gammaster' on a commercial basis. The proceedings of this tenth anniversary symposium of 'Gammaster' present all aspects of food irradiation and will undoubtedly help to remove the many misunderstandings. They offer information and indicate to the potential user a method that can make an important contribution to the prevention of decay and spoilage of foodstuffs and to the exclusion of food-borne infections and food poisoning in man. The book includes 8 contributions and 4 panel discussions in the field of microbiology; technology; legal aspects; and consumer aspects of food irradiation. As an appendix, the report 'Wholesomeness of irradiated food' of a joint FAO/IAEA/WHO Expert Committee has been added. (orig./G.J.P.) International Nuclear Information System (INIS) Reyes Frias, L. 1992-01-01 Since 1980 the National Institute of Nuclear Research counts with an Industrial Gamma Irradiator, for the sterilization of raw materials and finished products. Through several means has been promoted the use of this technology as alternative to conventional methods of sterilization as well as steam treatment and ethylene oxide. As a result of the made promotion this irradiator has come to its saturation limit being the sterilization irradiation one of the main services that National Institute of Nuclear Research offers to producer enterprises of disposable materials of medical use also of raw materials for the elaboration of cosmetic products and pharmaceuticals as well as dehydrated foods. It is presented the trend to the sterilization service by irradiation showed by the compilation data in a survey made by potential customers. (Author) International Nuclear Information System (INIS) Kilcast, David 1988-01-01 This outline review was written for 'Food Manufacture'. It deals with the known effects of irradiation on current packaging materials (glass, cellulosics, organic polymers and metals), and their implications for the effective application of the process. (U.K.) 
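The Deitch (1982) entry above describes equations that relate source size to planned product throughput, dose, efficiency factors and operating costs. As a hedged illustration of that kind of sizing calculation (not the article's own equations), the following Python sketch converts a target dose and throughput into absorbed power and then into the Co-60 activity required for an assumed absorption efficiency; all input figures are invented for the example.

```python
# Back-of-the-envelope Co-60 source sizing, in the spirit of the
# throughput/source-size relations discussed in the Deitch (1982) entry.
# The throughput, dose and efficiency values below are illustrative only.

CO60_W_PER_KCI = 14.8  # emitted gamma power of Co-60, roughly 14.8 W per kCi

def required_activity_kci(throughput_t_per_h, dose_kgy, absorption_efficiency):
    """Co-60 activity needed so the product absorbs dose_kgy at the given throughput.

    1 kGy = 1 kJ absorbed per kg, so absorbed power [kW] = mass rate [kg/s] * dose [kGy].
    Only a fraction of the emitted gamma power is actually absorbed by the product.
    """
    mass_rate_kg_s = throughput_t_per_h * 1000.0 / 3600.0
    absorbed_power_kw = mass_rate_kg_s * dose_kgy
    emitted_power_kw = absorbed_power_kw / absorption_efficiency
    return emitted_power_kw * 1000.0 / CO60_W_PER_KCI  # activity in kCi

if __name__ == "__main__":
    # Example: a low-dose disinfestation treatment at high throughput (assumed figures).
    act = required_activity_kci(throughput_t_per_h=10.0, dose_kgy=0.5, absorption_efficiency=0.3)
    print(f"Required source activity: about {act:,.0f} kCi of Co-60")
```

For these assumed inputs the sketch gives a source of a few hundred kCi; in practice the efficiency factor depends strongly on product density, package geometry and the source-to-product arrangement, which is exactly what the cost analyses summarized above parameterize.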
International Nuclear Information System (INIS) Uda, I.; Kozima, K.; Suzuki, S.; Tada, S.; Torisu, S.; Veno, K. 1984-01-01 Rubber insulated wires are still useful for internal wiring in motor vehicles and electrical equipment because of flexibility and toughness. Irradiated cross-linked rubber materials have been successfully introduced for use with fusible link wire and helically coiled cord Energy Technology Data Exchange (ETDEWEB) Petersen, C. E-mail: [email protected]; Shamardin, V.; Fedoseev, A.; Shimansky, G.; Efimov, V.; Rensman, J 2002-12-01 The irradiation project 'ARBOR', for 'Associated Reactor Irradiation in BOR 60', includes 150 mini-tensile/low cycle fatigue specimens and 150 mini-Charpy (KLST) specimens of nine different RAFM steels. Specimens began irradiation on 22 November 2000 in an specially designed irradiation rig in BOR 60, in a fast neutron flux (>0.1 MeV) of 1.8x10{sup 15} n/cm{sup 2} s and with direct sodium cooling at a temperature less than 340 deg. C. Tensile, low cycle fatigue and Charpy specimens of the following materials are included: EUROFER 97, F82H mod., OPTIFER IVc, EUROFER 97 with different boron contents, ODS-EUROFER 97, as well as EUROFER 97 electron-beam welded and reference bulk material, from NRG, Petten. International Nuclear Information System (INIS) Petersen, C.; Shamardin, V.; Fedoseev, A.; Shimansky, G.; Efimov, V.; Rensman, J. 2002-01-01 The irradiation project 'ARBOR', for 'Associated Reactor Irradiation in BOR 60', includes 150 mini-tensile/low cycle fatigue specimens and 150 mini-Charpy (KLST) specimens of nine different RAFM steels. Specimens began irradiation on 22 November 2000 in an specially designed irradiation rig in BOR 60, in a fast neutron flux (>0.1 MeV) of 1.8x10 15 n/cm 2 s and with direct sodium cooling at a temperature less than 340 deg. C. Tensile, low cycle fatigue and Charpy specimens of the following materials are included: EUROFER 97, F82H mod., OPTIFER IVc, EUROFER 97 with different boron contents, ODS-EUROFER 97, as well as EUROFER 97 electron-beam welded and reference bulk material, from NRG, Petten International Nuclear Information System (INIS) Thomas, A.C.; Beyers, M. 1976-01-01 Irradiation can be used to eliminate harmful bacteria in frozen products without thawing them. It can also replace chemicals or extended cold storage as a means of killing insect pests in export commodities International Nuclear Information System (INIS) Goto, Michiko; Yamazaki, Masao; Sekiguchi, Masayuki; Todoriki, Setsuko; Miyahara, Makoto 2007-01-01 International Nuclear Information System (INIS) Wilson, B.K. 1985-01-01 The subject is discussed under the headings: food irradiation regulatory situation in Canada; non-regulatory developments (poultry irradiation; fish irradiation; Government willingness to fund industry initiated projects; Government willingness to establish food irradiation research and pilot plant facilities; food industry interest is increasing significantly; Canadian Consumers Association positive response; the emergence of new consulting and entrepreneurial firms). (U.K.) International Nuclear Information System (INIS) Kilcast, David 1990-01-01 Recent legislation will permit the introduction of food irradiation in the UK. This development has been met with protests from consumer groups, and some wariness among retailers. David Kilcast, of the Leatherhead Food Research Association, explains the basic principles and applications of food irradiation, and argues that a test marketing campaign should be initiated. 
The consumer, he says, will have the final say in the matter. (author) International Nuclear Information System (INIS) Roberts, P.B. 1985-04-01 Chilled, vacuum-packed New Zealand lamb loins have been irradiated at doses between 1-8 kGy. The report outlines the methods used and provides dosimetry details. An appendix summarises the results of a taste trial conducted on the irradiated meat by the Meat Industry Research Institute of New Zealand. This showed that, even at 1 kGy, detectable flavours were induced by the radiation treatment International Nuclear Information System (INIS) Mohd Ghazali Hj Abd Rahman. 1985-01-01 Energy Technology Data Exchange (ETDEWEB) Rohrbaugh, David Thomas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Windes, William [Idaho National Lab. (INL), Idaho Falls, ID (United States); Swank, W. David [Idaho National Lab. (INL), Idaho Falls, ID (United States) 2016-06-01 The Next Generation Nuclear Plant (NGNP) will be a helium-cooled, very high temperature reactor (VHTR) with a large graphite core. In past applications, graphite has been used effectively as a structural and moderator material in both research and commercial high temperature gas cooled reactor (HTGR) designs.[ , ] Nuclear graphite H 451, used previously in the United States for nuclear reactor graphite components, is no longer available. New nuclear graphites have been developed and are considered suitable candidates for the new NGNP reactor design. To support the design and licensing of NGNP core components within a commercial reactor, a complete properties database must be developed for these current grades of graphite. Quantitative data on in service material performance are required for the physical, mechanical, and thermal properties of each graphite grade with a specific emphasis on data related to the life limiting effects of irradiation creep on key physical properties of the NGNP candidate graphites. Based on experience with previous graphite core components, the phenomenon of irradiation induced creep within the graphite has been shown to be critical to the total useful lifetime of graphite components. Irradiation induced creep occurs under the simultaneous application of high temperatures, neutron irradiation, and applied stresses within the graphite components. Significant internal stresses within the graphite components can result from a second phenomenon—irradiation induced dimensional change. In this case, the graphite physically changes i.e., first shrinking and then expanding with increasing neutron dose. This disparity in material volume change can induce significant internal stresses within graphite components. Irradiation induced creep relaxes these large internal stresses, thus reducing the risk of crack formation and component failure. Obviously, higher irradiation creep levels tend to relieve more internal stress, thus allowing the International Nuclear Information System (INIS) Hungate, F.P.; Riemath, W.F.; Bunnell, L.R. 1980-01-01 A fully portable blood irradiator was developed using the beta emitter thulium-170 as the radiation source and vitreous carbon as the body of the irradiator, matrix for isotope encapsulation, and blood interface material. These units were placed in exteriorized arteriovenous shunts in goats, sheep, and dogs and the effects on circulating lymphocytes and on skin allograft retention times measured. 
The present work extends these studies by establishing baseline data for skin graft rejection times in untreated animals International Nuclear Information System (INIS) Vinning, G. 1988-01-01 As a commercial activity, food irradiation is twenty years old, but is backed by nearly eighty years of research on gamma irradiation and sixty years knowledge of application of the technology to food. An overview is given of the global boom and then the hiatus in its legislative and commercial applications. It is emphasised that in Australia, the overseas experience provides a number of models for proceeding further for food manufacturers, consumers and Government. 13 refs International Nuclear Information System (INIS) Machi, Sueo 1995-01-01 International Nuclear Information System (INIS) Ito, Hitoshi 1995-01-01 The basic research on food irradiation in Japan was begun around 1955 by universities and national laboratories. In 1967, food irradiation was designated to the specific general research on atomic energy, and the national project on large scale was continued until 1983. As the result, the treatment of germination prevention for potatoes was approved by the Ministry of Health and Welfare in 1972. The Co-60 gamma ray irradiation facility of Shihoro Agricultural Cooperative is famous as the facility that succeeded in the practical use of food irradiation for the first time in the world. But the practical use of food irradiation stagnates and the research activities were reduced in Japan due to the circumstances thereafter. The effect of radiation to foods and living things is explained. The features of the radiation treatment of foods are small temperature rise, large transmissivity, no residue, the small loss of nutrition and large quantity, continuous treatment. The safety of irradiated foods is explained. The subjects for hereafter are discussed. (K.I.) International Nuclear Information System (INIS) Sutherland, D.E.; Ferguson, R.M.; Simmons, R.L.; Kim, T.H.; Slavin, S.; Najarian, J.S. 1983-01-01 Total lymphoid irradiation by itself can produce sufficient immunosuppression to prolong the survival of a variety of organ allografts in experimental animals. The degree of prolongation is dose-dependent and is limited by the toxicity that occurs with higher doses. Total lymphoid irradiation is more effective before transplantation than after, but when used after transplantation can be combined with pharmacologic immunosuppression to achieve a positive effect. In some animal models, total lymphoid irradiation induces an environment in which fully allogeneic bone marrow will engraft and induce permanent chimerism in the recipients who are then tolerant to organ allografts from the donor strain. If total lymphoid irradiation is ever to have clinical applicability on a large scale, it would seem that it would have to be under circumstances in which tolerance can be induced. However, in some animal models graft-versus-host disease occurs following bone marrow transplantation, and methods to obviate its occurrence probably will be needed if this approach is to be applied clinically. In recent years, patient and graft survival rates in renal allograft recipients treated with conventional immunosuppression have improved considerably, and thus the impetus to utilize total lymphoid irradiation for its immunosuppressive effect alone is less compelling. 
The future of total lymphoid irradiation probably lies in devising protocols in which maintenance immunosuppression can be eliminated, or nearly eliminated, altogether. Such protocols are effective in rodents. Whether they can be applied to clinical transplantation remains to be seen International Nuclear Information System (INIS) Austin, J.R.; Brown, M.J.; Loan, L.D. 1975-01-01 International Nuclear Information System (INIS) Benk, V.; Habrand, J.L.; Bloch Michel, E.; Soussaline, M.; Sarrazin, D. 1993-01-01 From 1975 to 1985, 34 children with a non-metastatic retinoblastoma were irradiated at the Institut Gustave-Roussy. After enucleation, 19 bilateral tumors were irradiated by two lateral opposed fields and 15 unilateral tumors by one lateral and anterior field, in the case of optic nerve being histologically positive. Dose was 45 Gy, 1.8 Gy per fraction. The 10-year-survival rate for unilateral and bilateral retinoblastomas was 79%. Long term sequels were available for 25 patients: 88% retained one functional eye. Three children with bilateral retinoblastomas developed a cataract in the residual eye between 2 and 5 years after irradiation, none with unilateral tumor. Nine patients (36%), seven with unilateral and two with bilateral tumor developed a cosmetical problem that required multiple surgical rehabilitation between 3 and 14 years after irradiation. Nine children (36%), five with unilateral and four with bilateral tumors developed growth hormone deficit between 2 and 8 years after irradiation that required hormone replacement. Their pituitary gland received 22 to 40 Gy. No osteosarcoma occurred in this population. Among long-term sequels, following irradiation for retinoblastoma, cosmetical deformities represent disabling sequels that could justify new approaches in radiotherapy, as protontherapy combined with 3-D-treatment planning 16. food irradiation: activities and potentialities International Nuclear Information System (INIS) Doellstaedt, R.; Huebner, G. 1985-01-01 After the acceptance of food irradiation up to an overall average dose of 10 kGy recommended by the Joint FAO/IAEA/WHO Expert Committee on the Wholesomeness of Irradiated Food in October 1980, the G.D.R. started a programme for the development of techniques for food irradiation. A special onion irradiator was designed and built as a pilot plant for studying technological and economic parameters of the irradiation of onions. (author) 17. Detection methods for irradiated foods International Nuclear Information System (INIS) Dyakova, A.; Tsvetkova, E.; Nikolova, R. 2005-01-01 In connection with the ongoing world application of irradiation as a technology in Food industry for increasing food safety, it became a need for methods of identification of irradiation. It was required to control international trade of irradiated foods, because of the certain that legally imposed food laws are not violated; supervise correct labeling; avoid multiple irradiation. Physical, chemical and biological methods for detection of irradiated foods as well principle that are based, are introducing in this summary 18. Blood irradiation: Rationale and technique International Nuclear Information System (INIS) Lewis, M.C. 1990-01-01 Upon request by the local American Red Cross, the Savannah Regional Center for Cancer Care irradiates whole blood or blood components to prevent post-transfusion graft-versus-host reaction in patients who have severely depressed immune systems. 
The rationale for blood irradiation, the total absorbed dose, the type of patients who require irradiated blood, and the regulations that apply to irradiated blood are presented. A method of irradiating blood using a linear accelerator is described 19. Irradiated produce reaches Midwest market International Nuclear Information System (INIS) Pszczola, D.E. 1992-01-01 In March 1992, the Chicago-area store gave its shoppers a choice between purchasing irradiated and nonirradiated fruits. The irradiated fruits were treated at Vindicator Inc., the first U.S. food irradiation facility (starting up on January 10, 1992). The plant, located in Mulberry, Fla., then shipped the fruits in trucks to the store where they were displayed under a hand-lettered sign describing the irradiated fruits and showing the irradiation logo International Nuclear Information System (INIS) Lushbaugh, C.C.; Brown, D.G.; Frome, E.L. 1986-01-01 International Nuclear Information System (INIS) Braby, L.A. 1985-01-01 International Nuclear Information System (INIS) Thongphasuk, Jarunee; Thongphasuk, Piyanuch; Eamsiri, Jarurut; Pongpat, Suchada 2009-07-01 Full text: Microbial contamination of medicinal herbs can be effectively reduced by gamma irradiation. Since irradiation may cause carcinogenicity of the irradiated herbs, the objective of this research is to study the effect of gamma irradiation (10 and 25 kGy) from cobalt-60 on carcinogenicity. The herbs studied were Pueraria candollei Grah., Curcuma longa Linn. Zingiber montanum, Senna alexandrina P. Miller, Eurycoma Longifolia Jack, Gymnostema pentaphylum Makino, Ginkgo biloba, Houttuynia cordata T., Andrographis paniculata, Thunbergia laurifolia L., Garcinia atroviridis G., and Cinnamomum verum J.S.Presl. The results showed that gamma irradiation at the dose of 10 and 25 kGy did not cause carcinogenicity of the irradiated herbs International Nuclear Information System (INIS) Barlas, O.; Bayindir, C.; Can, M. 2000-01-01 The results of interstitial irradiation treatment for craniopharyngioma in two patients with six year follow-ups are presented. Stereotactic interstitial irradiation with iodine-125 sources as sole therapy was employed in two adult patients who refused surgical resection. The diagnoses were confirmed by stereotactic biopsy. The first tumour which underwent interstitial irradiation was solid and 4 cm in diameter, and the second, 2.7 cm in diameter, had both cystic and solid components. The implanted iodine-125 seeds delivered 67 Gy and 60 Gy to tumour periphery at the rate of 12 and 14 cGy/h, respectively, were removed at the end of designated radiation periods. Tumour shrinkage and central hypo density, first observed 3 months after irradiation, continued until one tumour shrank to less than 1 cm at 12 months, and the other disappeared completely at 24 months. In both cases functional integrity was restored, and neither radiation induced toxicity nor recurrence has occurred six years after treatment. The results in these two cases suggest that solid craniopharyngiomas are sensitive to interstitial irradiation. (author) International Nuclear Information System (INIS) Meerwaldt, J.H. 1984-01-01 In radiotherapy of pelvic cancers, the X-ray dose to be delivered to the tumour is limited by the tolerance of healthy surrounding tissue. In recent years, a number of serious complications of irradiation of pelvic organs were encountered. Modern radiotherapy necessitates the acceptance of a calculated risk of complications in order to achieve a better cure rate. 
To calculate these risks, one has to know the radiation dose-effect relationship of normal tissues. Of the normal tissues most at risk when treating pelvic tumours only the bowel is studied. In the literature regarding post-irradiation bowel complications, severe and mild complications are often mixed. In the present investigation the author concentrated on the group of patients with relatively mild symptoms. He studied the incidence and course of post-irradiation diarrhea in 196 patients treated for carcinoma of the uterine cervix or endometrium. The aims of the present study were: 1) to determine the incidence, course and prognostic significance of post-irradiation diarrhea; 2) to assess the influence of radiotherapy factors; 3) to study the relation of bile acid metabolism to post-irradiation diarrhea; 4) to investigate whether local factors (reservoir function) were primarily responsible. (Auth.) 5. Modélisation du procédé de soudage hybride Arc / Laser par une approche level set application aux toles d'aciers de fortes épaisseurs A level-set approach for the modelling of hybrid arc/laser welding process application for high thickness steel sheets joining Directory of Open Access Journals (Sweden) Desmaison Olivier 2013-11-01 Full Text Available The hybrid arc/laser welding process has been developed in order to overcome the difficulties encountered for joining high thickness steel sheets. This innovative process gathers two heat sources: an arc source developed by a MIG torch and a pre-located laser source. This coupling improves the efficiency of the process, the weld bead quality and the final deformations. The Level-Set approach for the modelling of this process enables the prediction of the weld bead development and the temperature field evolution. The simulation of the multi-pass welding of a 18MnNiMo5 steel grade is detailed and the results are compared to the experimental observations. 6. Volatility of ruthenium during vitrification operations on fission products. part 1. nitric solutions distillation concentrates calcination. part 2. fixation on a steel tube. decomposition of the peroxide; Volatilite du ruthenium au cours des operations de vitrification des produits de fission. 1. partie distillation des solutions nitriques calcination des concentrats 2. partie fixation sur un tube d'acier decomposition du peroxyde Energy Technology Data Exchange (ETDEWEB) Ortins de Bettencourt, A E-mail: [email protected]; Jouan, A [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires 1969-07-01 fission, a large percentage of the ruthenium initially present in these solutions in the form of nitrosylruthenium nitrates is volatilized, giving the peroxide, which itself decomposes into ruthenium dioxide. The aim of this work is to study the volatility of ruthenium during the vitrification processes. During the distillation of the nitric solutions, we studied in particular the influence on volatility of the temperature, of the chemical form of the ruthenium introduced, of bubbling a gas through the solution, of the nitric acid concentration and of the nitrate concentration. During calcination, we observed the influence of the temperature, the time, the flow rate and the nature of the carrier gas, as well as the action of ruthenium dioxide and iron oxide on the volatility of ruthenium. Part 2. This report concerns the study of the thermal decomposition of ruthenium peroxide, RuO4, and of its deposition on steel piping. After a bibliographic review of the various properties of this compound, we study, in a first part, its deposition on a steel tube. For this, a gas stream containing RuO4 labelled with 106Ru is passed through a stainless steel tube subjected to a temperature gradient decreasing in the direction of the gas flow. We determine the temperature at which RuO4 deposits or becomes fixed on the tube, and we study the influence of the gas velocity on this deposit. In a second part we attempt to study, by a static method, the kinetics of the decomposition of ruthenium peroxide into its dioxide: RuO4 -> RuO2 + O2. For this we attempt to introduce gaseous RuO4 into a vessel placed in an electric furnace and to follow the progress of the reaction by gamma counting. (author) International Nuclear Information System (INIS) Gottschalk, M. 1978-01-01 In November, 1977, an International Symposium on Food Preservation by Irradiation was held at Wageningen, the Netherlands. About 200 participants attended the Symposium, which was organised by the International Atomic Energy Agency, the Food and Agriculture Organization of the United Nations and the World Health Organization; a reflection of the active interest which is being shown in food irradiation processing, particularly among developing countries. The 75 papers presented provided an excellent review of the current status of food irradiation on a wide range of different topics, and the Symposium also afforded the valuable opportunity for informal discussion among the participants and for developing personal contacts. A brief survey of the salient aspects discussed during the course of the meeting is reported. (orig.) 9. Food irradiation - general aspects International Nuclear Information System (INIS) Ley, F.J. 1985-01-01 This paper describes research and development experience in food irradiation followed by commercial utilisation of multi-purpose plants. The main design objectives should be high efficiency and uniform dose. Particular care must be given to dosimetry, and the use of plastic dosimeters is described. Capital outlay for a 1 MCi Cobalt 60 irradiator is estimated to be 2.5 million dollars, giving a unit processing cost of 0.566 dollars/ft^3 of throughput for 8000 hour/year use at a dose of 25 kGy (2.5 Mrad). The sale of irradiated food for human consumption in Britain is not yet permitted but it is expected that enabling legislation will be introduced towards the end of 1985 International Nuclear Information System (INIS) Brynjolfsson, A. 1978-01-01 The energy used in food systems in the US amounts to about 16.5% of total US energy. An analysis has been made of the energy used in the many steps of the food-irradiation process. It is found that irradiation pasteurization uses only 21 kJ/kg and radappertization 157 kJ/kg, which is much less than the energy used in the other food processes. A comparison has also been made with other methods of preserving, distributing and preparing the meat for servings. It is found that food irradiation can save significant amounts of energy. In the case of heat-sterilized and radiation-sterilized meats the largest fraction of the energy is used in the packaging, while in the frozen meats the largest energy consumption is by refrigeration in the distribution channels and in the home. (author)
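To put the specific-energy figures in the entry above into more familiar units, the short sketch below converts them to an electricity-equivalent per tonne of product. The one-tonne batch size is an illustrative assumption, not a figure from the study.

```python
# Converting the specific-energy figures quoted above (about 21 kJ/kg for radiation
# pasteurization and 157 kJ/kg for radappertization) into kWh per tonne of product.
# The one-tonne batch is an illustrative assumption.

KJ_PER_KWH = 3600.0
batch_kg = 1000.0  # one tonne, assumed for illustration

for name, kj_per_kg in (("radiation pasteurization", 21.0), ("radappertization", 157.0)):
    kwh = kj_per_kg * batch_kg / KJ_PER_KWH
    print(f"{name}: {kj_per_kg} kJ/kg -> ~{kwh:.1f} kWh per tonne")
```

Even the higher radappertization figure corresponds to only a few tens of kilowatt-hours per tonne, which is consistent with the abstract's point that the dominant energy costs lie in packaging and refrigeration rather than in the radiation treatment itself.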
International Nuclear Information System (INIS) Chung, H.M. 1985-10-01 Precipitates in high-burnup (>20 MWd/kg U) Zircaloy spent-fuel cladding discharged from commercial boiling- and pressurized-water reactors have been characterized by TEM-HVEM. Three classes of primary precipitates were observed in the irradiated Zircaloys: Zr3O (2 to 6 nm), cubic ZrO2 (greater than or equal to 10 nm), and delta-hydride (35 to 100 nm). The former two appear to be irradiation-induced in nature. Zr(FexCr1-x)2 and Zr2(FexNi1-x) intermetallics, which are the primary precipitates in unirradiated Zircaloys, were largely dissolved after the high burnup. It seems, therefore, that the influence of the size and distribution of the intermetallics on the corrosion behavior may be quite different for the irradiated Zircaloys International Nuclear Information System (INIS) Dervan, P.; French, R.; Hodgson, P.; Marin-Reyes, H.; Wilson, J. 2013-01-01 At the end of 2012 the proton irradiation facility at the CERN PS will shut down for two years. With this in mind, we have been developing a new ATLAS scanning facility at the University of Birmingham Medical Physics cyclotron. With proton beams of energy approximately 30 MeV, fluences corresponding to those of the upgraded Large Hadron Collider (HL-LHC) can be reached conveniently. The facility can be used to irradiate silicon sensors, optical components and mechanical structures (e.g. carbon fibre sandwiches) for the LHC upgrade programme. Irradiations of silicon sensors can be carried out in a temperature controlled cold box that can be scanned through the beam. The facility is described in detail along with the first tests carried out with mini (1 x 1 cm^2) silicon sensors International Nuclear Information System (INIS) Barrachina, M. 1985-01-01 The aim of food irradiation is to extend shelf-life of food commodities by delaying fruit ripening, inhibition of vegetable sprouting, disinfestation of grains and seeds, and in general by controlling microbial or parasitic food-transmitted infections. It was stated by the 1980 Joint FAO/IAEA/WHO Expert Committee that food irradiated up to 10 kGy does not pose any human health or nutritional problems. Following this recommendation, irradiation programmes are being developed at a good pace in several countries. It is hoped that commercial drawbacks now existing, such as psychological apprehension of consumers towards radiation-treated products and innovative inertia to changes of the food chain, will be removed through appropriate information schemes and legislative advancement. (author) International Nuclear Information System (INIS) MacGregor, J.; Stanbrook, I.; Shersby, M.
1989-01-01 The House of Commons was asked to support the Government's intention to allow the use of the irradiation of foodstuffs under conditions that will fully safeguard the interests of the consumer. The Government, it was stated, regards this process as a useful additional way to ensure food safety. The effect of the radiation in killing bacteria will enhance safety standards in poultry meat, in some shell-fish and in herbs and spices. The problem of informing the public when the food has been irradiated, especially as there is no test to detect the irradiation, was raised. The subject was debated for an hour and a half and is reported verbatim. The main point raised was over whether the method gave safer food as not all bacteria were killed in the process. The motion was carried. (U.K.) International Nuclear Information System (INIS) Kooij, J. van 1981-01-01 Twenty-five years of development work on the preservation of food by irradiation have shown that this technology has the potential to reduce post-harvest losses and to produce safe foods. The technological feasibility has been established but general acceptance of food irradiation by national regulatory bodies and consumers requires attention. The positive aspects of food preservation by irradiation include: the food keeps its freshness and its physical state, agents which cause spoilage (bacteria, etc.) are eliminated, recontamination does not take place, provided packaging materials are impermeable to bacteria and insects. It inhibits sprouting of root crops, kills insects and parasites, inactivates bacteria, spores and moulds, delays ripening of fruit, improves the technological properties of food. It makes foods biologically safe, allows the production of shelf-stable foods and is excellent for quarantine treatment, and generally improves food hygiene. The dose ranges needed for effective treatment are given International Nuclear Information System (INIS) Makra, Zs. 1978-01-01 The results obtained by determining the irradiation dose during the spaceflights of Apollo as well as the Sojouz-3 and Sojouz-9 spacecrafts have been compared in the form of tables. In case of Apollo astronauts the irradiation dose was determined by two methods and its sources were also pointed out, in tables. During Sojouz spacetravels the cosmonauts were exposed to a negligible dose. In spite of this fact the radiation danger is considerable. The small irradiation doses noticed so far are due to the fact that during the spaceflights there was no big proturberance. However, during the future long-range spacetravels a better radiation shielding than the one used up to now will be necessary. (P.J.) 18. Studies of blood irradiator application International Nuclear Information System (INIS) Li Wenhong; Lu Yangqiao 2004-01-01 Transfusion is an important means for medical treatment, but it has many syndromes such as transfusion-associated graft-versus-host disease, it's occurrence rate of 5% and above 90% death-rate. Now many experts think the only proven method is using blood irradiator to prevent this disease. It can make lymphocyte of blood product inactive, so that it can not attack human body. Therefore, using irradiation blood is a trend, and blood irradiator may play an important role in medical field. 
This article summarized study of blood irradiator application, including the meaning of blood irradiation, selection of the dose for blood irradiation and so on International Nuclear Information System (INIS) 1991-01-01 This fact sheet briefly considers the nutritional value of irradiated foods. Micronutrients, especially vitamins, are sensitive to any food processing method, but irradiation does not cause any special nutritional problems in food. 4 refs Energy Technology Data Exchange (ETDEWEB) Hashim, Hatijah bt; Gnanamuthu, E 1986-12-31 Irradiation of food has been promoted as a new technology in the preservation of food. Several countries have already introduced the technology for selected food items. However, there remain several questions that have yet to be answered. Foremost is the question of its safety. Proponents have argued that it is safe. Others cast doubts on these studies and the interpretations of their results. Second is the question of the nutritive value of the food that is irradiated. These and many other questions related to safety will be discussed in this paper International Nuclear Information System (INIS) Ashby, R.; Tesh, J.M. 1982-11-01 Groups of 40 male and 40 female CD rats were fed powdered rodent diet containing 25% (w/w) of either non-irradiated, irradiated or fumigated cocoa beans. The diets were supplemented with certain essential dietary constituents designed to satisfy normal nutritional requirements. An additional 40 male and 40 female rats received basal rodent diet alone (ground) and acted as an untreated control. After 70 days of treatment, 15 male and 15 female rats from each group were used to assess reproductive function of the F 0 animals and growth and development of the F 1 offspring up to weaning; the remaining animals were killed after 91 days of treatment. (orig.) International Nuclear Information System (INIS) Seim, O.S.; Hutter, E. 1975-01-01 A subassembly for use in a nuclear reactor is described which incorporates a loose bundle of fuel or irradiation pins enclosed within an inner tube which in turn is enclosed within an outer coolant tube and includes a locking comb consisting of a head extending through one side of the inner sleeve and a plurality of teeth which extend through the other side of the inner sleeve while engaging annular undercut portions in the bottom portion of the fuel or irradiation pins to prevent movement of the pins International Nuclear Information System (INIS) 1987-05-01 The Canadian Irradiation Centre is a non-profit cooperative project between Atomic Energy of Canada Limited, Radiochemical Company and Universite du Quebec, Institut Armand-Frappier, Centre for Applied Research in Food Science. The Centre's objectives are to develop, demonstrate and promote Canada's radiation processing technology and its applications by conducting applied research; training technical, professional and scientific personnel; educating industry and government; demonstrating operational and scientific procedures; developing processing procedures and standards, and performing product and market acceptance trials. This pamphlet outlines the history of radoation technology and the services offered by the Canadian Irradiation Centre Energy Technology Data Exchange (ETDEWEB) Saputra, T S; Harsoyo,; Sudarman, H [National Atomic Energy Agency, Jakarta (Indonesia). 
Pasar Djumat Research Centre 1982-07-01 An experiment has been done to determine the effect of irradiation and reduction of moisture content on the keeping quality of commercial spices, i.e. nutmeg (Myristica fragrans), black and white pepper (Piper ningrum). The results showed that a dose of 5 kGy could reduce the microbial load of spices as much as 2-4 log cycles for the total plate count and 1-3 log cycles for the total mould and yeast counts. The microbial reduction due to the irradiation treatment was found to be lower in more humid products. Prolonged storage enhanced the microbial reduction. International Nuclear Information System (INIS) Robertson, I.M. 1995-01-01 The focus of the symposium was on the changes produced in the microstructure of metals, ceramics, and semiconductors by irradiation with energetic particles. the symposium brought together those working in the different material systems, which revealed that there are a remarkable number of similarities in the irradiation-produced microstructures in the different classes of materials. Experimental, computational and theoretical contributions were intermixed in all of the sessions. This provided an opportunity for these groups, which should interact, to do so. Separate abstracts were prepared for 58 papers in this book International Nuclear Information System (INIS) Esterhuyse, A; Esterhuizen, T. 1985-01-01 The reason for radurization was to decreased the microbial count of dehydrated vegetables. The average absorbed irradiation dose range between 2kGy and 15kGy. The product catagories include a) Green vegetables b) White vegetables c) Powders of a) and b). The microbiological aspects were: Declining curves for the different products of T.P.C., Coliforms, E. Coli, Stap. areus, Yeast + Mold at different doses. The organoleptical aspects were: change in taste, flavour, texture, colour and moisture. The aim is the marketing of irradiated dehydrated vegetables national and international basis International Nuclear Information System (INIS) Kunstadt, P.; Steeves, C.; Beaulieu, D. 1993-01-01 The number of products being radiation processed worldwide is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, spices, meats, seafoods and waste products. This range of products to be processed has resulted in a wide range of irradiator designs and capital and operating cost requirements. This paper discusses the economics of low dose food irradiation applications and the effects of various parameters on unit processing costs. It provides a model for calculating specific unit processing costs by correlating known capital costs with annual operating costs and annual throughputs. It is intended to provide the reader with a general knowledge of how unit processing costs are derived. (author) International Nuclear Information System (INIS) Hashim, Hatijah bt; Gnanamuthu, E. 1985-01-01 Irradiation of food has been promoted as a new technology in the preservation of food. Several countries have already introduced the technology for selected food items. However, there remain several questions that have yet to be answered. Foremost is the question of its safety. Proponents have argued that it is safe. Others cast doubts on these studies and the interpretations of their results. Second is the question of the nutritive value of the food that is irradiated. 
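Referring to the spice irradiation entry above (Saputra et al.), where a 5 kGy dose gave roughly 2-4 log-cycle reductions in total plate count, the sketch below shows the arithmetic that links a dose and a log-cycle reduction to an implied decimal-reduction dose (D10). It is illustrative only and uses no data beyond the figures quoted in that abstract.

```python
# Implied decimal-reduction dose (D10) for the spice irradiation figures quoted above:
# a 5 kGy dose producing roughly 2-4 log-cycle reductions in total plate count.
# Illustrative arithmetic only; the ranges come from the abstract, not new data.

dose_kGy = 5.0
for log_reduction in (2, 3, 4):
    d10 = dose_kGy / log_reduction  # kGy per factor-of-10 reduction in microbial count
    print(f"{log_reduction} log cycles at {dose_kGy} kGy -> implied D10 ~ {d10:.2f} kGy")
```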
These and many other questions related to safety will be discussed in this paper International Nuclear Information System (INIS) Curzio, O.A.; Croci, C.A. 1998-01-01 Two surveys were carried out in Buenos Aires of consumer attitudes towards irradiated onions [no data given]. The first investigated the general level of consumer knowledge concerning food irradiation, whilst the second (which covered consumers who had actually bought irradiated onions) examined reasons for purchase and consumer satisfaction. Results reveal that more than 90% of consumers surveyed had a very limited knowledge of food irradiation 10. Economics of gamma irradiation processing International Nuclear Information System (INIS) Tani, Toshio 1980-01-01 International Nuclear Information System (INIS) Kwon, Oh Hyun; Eom, Kyong Bo; Kim, Jae Ik; Suh, Jung Min; Jeon, Kyeong Lak 2011-01-01 12. Progress in food irradiation: Uruguay Energy Technology Data Exchange (ETDEWEB) Merino, F G 1978-12-01 Durability and tolerability of several vegetable sorts such as potatoes, onions, and garlick after irradiation with gamma radiation are investigated. In questioning the consumers, a positive attitude of the consumers towards irradiated products was noted. 13. Food irradiation and consumer values International Nuclear Information System (INIS) Bruhn, C.M.; Schutz, H.G.; Sommer, R. 1988-01-01 14. Progress in food irradiation: Uruguay International Nuclear Information System (INIS) Merino, F.G. 1978-01-01 Durability and tolerability of several vegetable sorts such as potatoes, onions, and garlick after irradiation with gamma radiation are investigated. In questioning the consumers, a positive attitude of the consumers towards irradiated products was noted. (AJ) [de 15. Regulatory aspect of food irradiation International Nuclear Information System (INIS) Harrison Aziz 1985-01-01 Interest in the process of food irradiation is reviewed once again internationally. Although food irradiation has been thoroughly investigated, global acceptance is still lacking. Factors which impede the progress of the technology are discussed here. (author) International Nuclear Information System (INIS) Lushbaugh, C.C.; Brown, D.G.; Frome, E.L. 1984-01-01 International Nuclear Information System (INIS) Campbell, J.W.; Todd, J.L. 1975-01-01 The design of a prototype safeguards instrument for determining the number of irradiated fuel assemblies leaving an on-power refueled reactor is described. Design details include radiation detection techniques, data processing and display, unattended operation capabilities and data security methods. Development and operating history of the bundle counter is reported. (U.S.) International Nuclear Information System (INIS) Campbell, J.W.; Todd, J.L. 1975-01-01 The design of a prototype safeguards instrument for determining the number of irradiated fuel assemblies leaving an on-power refueled reactor is described. Design details include radiation detection techniques, data processing and display, unattended operation capabilities and data security methods. Development and operating history of the bundle counter is reported Science.gov (United States) Pozzi, Fabio; Garcia Alia, Ruben; Brugger, Markus; Carbonez, Pierre; Danzeca, Salvatore; Gkotse, Blerina; Richard Jaekel, Martin; Ravotti, Federico; Silari, Marco; Tali, Maris 2017-09-28 International Nuclear Information System (INIS) Pruzinec, J.; Hola, O. 1987-01-01 The effect of high energy irradiation on various starch samples was studied. 
The radiation dose varied between 43 and 200.9 kGy. The viscosity of the starch samples was determined by Hoeppler's method. The percentage solubility of the matter in dry starch was evaluated. The viscosity and solubility values are presented. (author) 14 refs International Nuclear Information System (INIS) Delincee, H.; Ehlermann, D.; Gruenewald, T.; Harmuth-Hoene, A.E.; Muenzner, R. 1978-01-01 The present issue of the bibliographic series contains 227 items. The main headings of the content are basics of food irradiation, applications at low dose levels, applications at higher dose levels, effects on foods and on components of foods, and microbiology. (MG) International Nuclear Information System (INIS) Gal, I. 1961-12-01 The task concerned with reprocessing of irradiated uranium covered the following activities: implementing the method and constructing the cell for uranium dissolving; implementing the procedure for extraction of uranium, plutonium and fission products from radioactive uranium solutions; studying the possibilities for using inorganic ion exchangers and adsorbers for separation of U, Pu and fission products Energy Technology Data Exchange (ETDEWEB) Cole, James Irvin [Idaho National Lab. (INL), Idaho Falls, ID (United States) 2015-09-01 The Nuclear Science User Facilities has been in the process of establishing an innovative Irradiated Materials Library concept for maximizing the value of previous and on-going materials and nuclear fuels irradiation test campaigns, including utilization of real-world components retrieved from current and decommissioned reactors. When the ATR national scientific user facility was established in 2007, one of the goals of the program was to establish a library of irradiated samples for users to access and conduct research on through a competitively reviewed proposal process. As part of the initial effort, staff at the user facility identified legacy materials from previous programs that are still being stored in laboratories and hot-cell facilities at the INL. In addition, other materials of interest were identified that are being stored outside the INL and that the current owners have volunteered to enter into the library. Finally, over the course of the last several years, the ATR NSUF has irradiated more than 3500 specimens as part of NSUF competitively awarded research projects. The logistics of managing this large inventory of highly radioactive material pose unique challenges. This document will describe materials in the library, outline the policy for accessing these materials and put forth a strategy for making new additions to the library, as well as establishing guidelines for the minimum pedigree needed for inclusion in the library, to limit the amount of material stored indefinitely without identified value. International Nuclear Information System (INIS) Bustos R, M.E.; Gonzalez F, C.; Liceaga C, G.; Ortiz A, G. 1997-01-01 In any industrial process an attractive return is sought, from both the contractor's and the social point of view. The use of irradiation technology for foods helps keep them hygienic, which contributes to a food supply without health risks, to the opening of new markets and to a reduction of losses. In other products - cosmetics or medical disposables - which are sterilized by irradiation, the process allows their safe use by consumers. The investment cost of an irradiation plant depends mainly on the plant size and on the reload of radioactive material, principally cobalt-60; these two parameters are a function of the type of products to be irradiated and the selected doses. In this work the economic calculations and the financing costs for different products and plant capacities are presented (a minimal unit-cost sketch is given below). In general terms an adequate profit margin is determined, which indicates that this process is profitable. In line with the economic and commercial conditions in the country, two types of credit were considered for the financing of such projects: one using international credit resources and the other national sources. (Author) 5. Centurion -- a revolutionary irradiator International Nuclear Information System (INIS) McKinney, Dan; Perrins, Robert 2000-01-01 The facility characteristics for irradiation of red meat and poultry differ significantly from those of medical disposables. This paper presents the results of the market requirement definition which resulted in an innovative conceptual design. The process and the 'state of the art tools' used to bring this abstract idea into a proof of concept are presented. (author) 6. Process for irradiation of polyethylene International Nuclear Information System (INIS) White, George. 1983-01-01 Irradiation of polyethylene affects its processability in the fabrication of products and affects the properties of products already fabricated. The present invention relates to a process for the irradiation of polyethylene, and especially to a process for the irradiation of homopolymers of ethylene and copolymers of ethylene and higher α-olefins, in the form of granules, with low levels of electron or gamma irradiation in the presence of an atmosphere of steam 7. Sewage sludge irradiation with electrons International Nuclear Information System (INIS) Tauber, M. 1976-01-01 The disinfection of sewage sludge by irradiation has been discussed very intensively in the last few months. Powerful electron accelerators are now available, and the main features of the irradiation of sewage sludge with fast electrons are discussed and the design parameters of such installations described. AEG-Telefunken is building an irradiation plant with a 1.5 MeV, 25 mA electron accelerator to study the main features of electron irradiation of sewage sludge. (author) 8. Potato irradiation technology in Japan International Nuclear Information System (INIS) Takehisa, M. 1981-01-01 After the national research program on potato irradiation, the public consumption of potatoes irradiated to a maximum of 15 krad was authorized by the Ministry of Welfare. Shihoro Agricultural Cooperative Association, one of the largest potato producers in Japan with an annual production of 200,000 tons, intended to apply irradiation to their potato storage system. This paper describes the technological background of the potato irradiation facility and operational experience. (author) International Nuclear Information System (INIS) Cetinkaya, N. 1999-01-01 10. Market trials of irradiated chicken International Nuclear Information System (INIS) Fox, John A.; Olson, Dennis G. 1998-01-01 The potential market for irradiated chicken breasts was investigated using a mail survey and a retail trial. Results from the mail survey suggested a significantly higher level of acceptability of irradiated chicken than did the retail trial.
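The irradiation-economics abstracts above (Ley; Kunstadt et al.; Bustos et al.) all revolve around the same simple relation: unit processing cost is annualized capital plus operating cost, divided by annual throughput. The sketch below is a minimal, hypothetical version of such a calculation; the interest rate, plant lifetime, operating cost and throughput are assumptions, and only the 2.5 million dollar capital figure echoes the order of magnitude quoted in the Ley abstract.

```python
# Minimal unit-processing-cost sketch for a gamma irradiation plant, in the spirit of
# the economics abstracts above. All inputs except the capital figure are illustrative
# assumptions, not values taken from the cited studies.

def unit_cost(capital, annual_operating, annual_throughput,
              interest=0.08, lifetime_years=15):
    """Unit processing cost = (annualized capital + operating cost) / throughput."""
    # Capital recovery factor for an annuity at the given interest rate and lifetime.
    crf = interest * (1 + interest) ** lifetime_years / ((1 + interest) ** lifetime_years - 1)
    return (capital * crf + annual_operating) / annual_throughput

# Example: 2.5 M$ capital (order of magnitude quoted in the Ley abstract), with an
# assumed 0.4 M$/yr operating cost and 1.5 million ft^3/yr throughput.
cost = unit_cost(capital=2.5e6, annual_operating=0.4e6, annual_throughput=1.5e6)
print(f"Unit processing cost ~ {cost:.3f} $/ft^3")
```

With these assumed inputs the result lands in the same range as the 0.566 dollars/ft^3 figure quoted above, but it should be read only as an illustration of the structure of the calculation, not as a reproduction of any of the cited cost models.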
A subsequent market experiment involving actual purchases showed levels of acceptability similar to that of the mail survey when similar information about food irradiation was provided 11. EPR measurements in irradiated polyacetylene International Nuclear Information System (INIS) Hola, O.; Foeldesova, M. 1990-01-01 The influence of γ-irradiation on the paramagnetic properties of polyacetylene, and the dependence of the EPR spectra on the radiation dose in samples of irradiated polyacetylene were studied. The measurements show that no essential changes of the spin mobility occurred during irradiation. (author) 3 refs.; 2 figs 12. Onion irradiation - a case study International Nuclear Information System (INIS) Huebner, G. 1988-01-01 The irradiation of onions (Allium cepa L.) serves to prevent sprouting associated with long-term storage or transport and storage of onions in climatic conditions which stimulate sprouting. JECFI the Joint Expert Committee for Food Irradiation of FAO/IAEA/WHO, recommended the application of an irradiation dose of up to 150 Gy for sprout inhibition with onions. (author) 13. Food irradiation: Activities and potentialities Science.gov (United States) Doellstaedt, R.; Huebner, G. International Nuclear Information System (INIS) 1988-01-01 Canada has been in the forefront of irradiation technology for some 30 years. Nearly 90 of the 140 irradiators used worldwide are Canadian-built, yet Canadian food processors have been very slow to use the technology. The food irradiation regulatory situation in Canada, the factors that influence it, and some significant non-regulatory developments are reviewed. (author) International Nuclear Information System (INIS) Yang, Myung Seung; Jung, I. H.; Moon, J. S. and others 2001-12-01 The post-irradiation examination of irradiated DUPIC (Direct Use of Spent PWR Fuel in CANDU Reactors) simulated fuel in HANARO was performed at IMEF (Irradiated Material Examination Facility) in KAERI during 6 months from October 1999 to March 2000. The objectives of this post-irradiation test are i) the integrity of the capsule to be used for DUPIC fuel, ii) ensuring the irradiation requirements of DUPIC fuel at HANARO, iii) performance verification in-core behavior at HANARO of DUPIC simulated fuel, iv) establishing and improvement the data base for DUPIC fuel performance verification codes, and v) establishing the irradiation procedure in HANARO for DUPIC fuel. The post-irradiation examination performed are γ-scanning, profilometry, density, hardness, observation the microstructure and fission product distribution by optical microscope and electron probe microanalyser (EPMA) 16. Commercial food irradiation in practice International Nuclear Information System (INIS) Leemhorst, J.G. 1990-01-01 Dutch research showed great interest in the potential of food irradiation at an early stage. The positive research results and the potential applications for industry encouraged the Ministry of Agriculture and Fisheries to construct a Pilot Plant for Food Irradiation. In 1967 the Pilot Plant for Food Irradiation in Wageningen came into operation. The objectives of the plant were: research into applications of irradiation technology in the food industry and agricultural industry; testing irradiated products and test marketing; information transfer to the public. (author) 17. The return of food irradiation International Nuclear Information System (INIS) Hammerton, K. 
1992-01-01 In discussing the need for food irradiation the author examines the problems that arise in processing foods of different kinds: spices, meat, fruits and vegetables. It is demonstrated that the relatively low dose of radiation required to eliminate the reproductive capacity of the pest can be tolerated by most fruits and vegetables without damage. Moreover the safety of irradiated food is acknowledged by major national and international food organizations and committees. The author agreed that when food irradiation has been approved by a country, consumers should be able to choose between irradiated and non-irradiated food. To enable the choice, clear and unambiguous labelling must be enforced. 13 refs., 1 tab., ills 18. Market Trials of Irradiated Spices International Nuclear Information System (INIS) Charoen, Saovapong; Eemsiri, Jaruratana; Sajjabut, Surasak 2009-07-01 19. Vitamin A in irradiated foodstuffs Energy Technology Data Exchange (ETDEWEB) Diehl, J F [Bundesforschungsanstalt fuer Ernaehrung, Karlsruhe (Germany, F.R.) 1979-01-01 20. Vitamin A in irradiated foodstuffs Energy Technology Data Exchange (ETDEWEB) Diehl, J F [Bundesforschungsanstalt fuer Ernaehrung, Karlsruhe (Germany, F.R.) 1979-01-01 1. Vitamin A in irradiated foodstuffs International Nuclear Information System (INIS) Diehl, J.F. 1979-01-01 2. Vitamin A in irradiated foodstuffs International Nuclear Information System (INIS) Diehl, J.F. 1979-01-01 International Nuclear Information System (INIS) Pai, J.S. 2001-01-01 Although irradiation is being investigated for the last more than 50 years for the application in preservation of food, it has not yet been exploited commercially in some countries like India. No other food processing technique has undergone such close scrutiny. There are many advantages to this process, which few others can claim. The temperature remains ambient during the process and the form of the food does not change resulting in very few changes in the sensory and nutritive quality of the food product. At the same time the microorganisms are effectively destroyed. Most of the spoilage and pathogenic organisms are sensitive to irradiation. Fortunately, most governments are supportive for the process and enacting laws permitting the process for foods International Nuclear Information System (INIS) Schultz, M.A. 1976-01-01 An apparatus for collecting pollutants in which a passageway is formed to define a path for industrial gases passing therethrough is described. A plurality of isotope sources extend along at least a portion of the path followed by the industrial gases to provide a continuing irradiation zone for pollutants in the gases. Collecting electrode plates are associated with such an irradiation zone to efficiently collect particulates as a result of an electrostatic field established between such plates, particularly very small particulates. The series of isotope sources are extended for a length sufficient to attain material improvement in the efficiency of collecting the pollutants. Such an effective length is established along a substantially unidirectional path of the gases, or preferably a reversing path in a folded conduit assembly to attain further efficiency by allowing more compact apparatus structures International Nuclear Information System (INIS) Wagner, U.; Helle, N.; Boegl, K.W.; Schreiber, G.A. 1993-01-01 This report describes developments and applications of the thermoluminescence (TL) analysis of mineral contaminants in foods. 
Procedures are presented to obtain minerals from most different products such as pepper, mangos, shrimps and mussels. The effect of light exposure during the storage of foods on the TL intensity of minerals is examined and corresponding conclusions for routine control are drawn. It is also shown that the normalization of TL intensities - the essential step to identify irradiated samples - can not only be achieved by γ, X or β rays but also by UV radiation. The results allow the conclusion that a clear identification of any food which has been irradiated with more than 1 kGy is possible if enough minerals can be isolated. (orig.) International Nuclear Information System (INIS) Bellamy, B.A. 1988-01-01 Papers presented at the UKAEA Conference on Materials Analysis by Physical Techniques (1987) covered a wide range of techniques as applied to the analysis of irradiated materials. These varied from reactor component materials, materials associated with the Authority's radwaste disposal programme, fission products and products associated with the decommissioning of nuclear reactors. An invited paper giving a very comprehensive review of Laser Ablation Microprobe Mass Spectroscopy (LAMMS) was included in the programme. (author) International Nuclear Information System (INIS) Ito, Yasuo 1999-01-01 The present state of art of applications of neutron irradiation is overviewed taking neutron activation analysis, prompt gamma-ray analysis, fission/alpha track methods, boron neutron capture therapy as examples. What is common among them is that the technologies are nearly matured for wide use by non- nuclear scientists. But the environment around research reactors is not prospective. These applications should be encouraged by incorporating in the neutron science society. (author) International Nuclear Information System (INIS) Verdejo S, M. 1997-01-01 The standing legislation in Mexico on food irradiation matter has its basis on the Constitutional Policy of the Mexican United States on the 4 Th. article by its refers to Secretary of Health, 27 Th. article to the Secretary of Energy and 123 Th. of the Secretary of Work and Social Security. The laws and regulations emanated of the proper Constitution establishing the general features which gives the normative frame to this activity. The general regulations of Radiological Safety expedited by the National Commission for Nuclear Safety and Safeguards to state the specifications which must be fulfill the industrial installations which utilizing ionizing radiations, between this line is founded, just as the requirements for the responsible of the radiological protection and the operation of these establishments. The project of Regulation of the General Health Law in matter of Sanitary Control of Benefits and Services, that in short time will be officialized, include a specific chapter on food irradiation which considers the International Organizations Recommendations and the pertaining harmonization stated for Latin America, which elaboration was in charge of specialized group where Mexico was participant. Additionally, the Secretary of Health has a Mexican Official Standard NOM-033-SSA1-1993 named 'Food irradiation; permissible doses in foods, raw materials and support additives' standing from the year 1995, where is established the associated requirements to the control registers, service constancies and dose limits for different groups of foods, moreover of the specific guidelines for its process. 
This standard will be adequate considering the updating Regulation of Benefits and Services and the limits established the Regulation for Latin America. The associated laws that cover in general terms it would be the requirements for food irradiation although such term is not manageable. (Author) International Nuclear Information System (INIS) Steinfeld, A.D. 1978-01-01 Many patients as children or adolescents received treatment for nonmalignant conditions with therapeutic-range doses of ionizing radiation. Great interest exists in this group of patients because of the possible long-term adverse effects of such irradiation. A thorough physical examination should be performed, and a careful history, listing any instances of radiation exposure, should be taken. Photographic documentation of skin and/or mucosal changes is beneficial International Nuclear Information System (INIS) Burdett, C.F.; Rahmatalla, H. 1977-01-01 The results of Simpson et al (Simpson, H.M., Sosin, A., Johnston, D.F., Phys.Rev. B, 5:1393 (1972)) on the damping produced during electron irradiation of copper are re-examined and it is shown that they can be explained in terms of the model of Granato and Lucke (Granato, A., Lucke, K., J.Appl.Phys., 27:583,789 (1958)). (author) International Nuclear Information System (INIS) Salame, M.; Steingiser, S. 1982-01-01 The process of forming containers from preforms of thermoplastic material comprising at least 20 weight percent of polymerized nitrile group containing monomer is claimed. The preforms are exposed to low dosage electron beam radiation and then, while at molding temperature, distended into containers in a mold. The radiaton causes polymerization of nitrile group containing monomers and the distending causes HCN generated during irradiation to be reduced in the thermoplastic material International Nuclear Information System (INIS) Wang, S.X.; Wang, L.M.; Ewing, R.C. 1999-01-01 Three different zeolites (analcime, natrolite, and zeolite-Y) were irradiated with 200 keV and 400 keV electrons. All zeolites amorphized under a relatively low electron fluence. The transformation from the crystalline-to-amorphous state was continuous and homogeneous. The electron fluences for amorphization of the three zeolites at room temperature were: 7.0 x 10 19 e - /cm 2 (analcime), 1.8 x 10 20 e - /cm 2 (natrolite), and 3.4 x 10 20 e - /cm 2 (zeolite-Y). The different susceptibilities to amorphization are attributed to the different channel sizes in the structures which are the pathways for the release of water molecules and Na + . Natrolite formed bubbles under electron irradiation, even before complete amorphization. Analcime formed bubbles after amorphization. Zeolite-Y did not form bubbles under irradiation. The differences in bubble formation are attributed to the different channel sizes of the three zeolites. The amorphization dose was also measured at different temperatures. An inverse temperature dependence of amorphization dose was observed for all three zeolites: electron dose for amorphization decreased with increasing temperature. This unique temperature effect is attributed to the fact that zeolites are thermally unstable. A semi-empirical model was derived to describe the temperature effect of amorphization in these zeolites International Nuclear Information System (INIS) Novack, D.H.; Kiley, J.P. 1987-01-01 International Nuclear Information System (INIS) Farkas, J.; Al-Charchafchy, F.; Al-Shaikhaly, M.H.; Mirjan, J.; Auda, H. 
1974-01-01 International Nuclear Information System (INIS) Rehn, L.E.; Lam, N.Q. 1985-10-01 Gibbsian adsorption is known to alter the surface composition of many alloys. During irradiation, four additional processes that affect the near-surface alloy composition become operative: preferential sputtering, displacement mixing, radiation-enhanced diffusion and radiation-induced segregation. Because of the mutual competition of these five processes, near-surface compositional changes in an irradiation environment can be extremely complex. Although ion-beam induced surface compositional changes were noted as long as fifty years ago, it is only during the past several years that individual mechanisms have been clearly identified. In this paper, a simple physical description of each of the processes is given, and selected examples of recent important progress are discussed. With the notable exception of preferential sputtering, it is shown that a reasonable qualitative understanding of the relative contributions from the individual processes under various irradiation conditions has been attained. However, considerably more effort will be required before a quantitative, predictive capability can be achieved. 29 refs., 8 figs International Nuclear Information System (INIS) Whitburn, K.D.; Hoffman, M.Z.; Taub, I.A. 1982-01-01 In ''A Re-Evaluation of the Products of Gamma Irradiation of Beef Ferrimyoglobin'', J. Food Sci. 46:1814 (1981), authors Whitburn, Hoffman and Taub state that color pigment myoglobin (Mb) undergoes chemical changes during irradiation that cause color changes in meat. They also state that they are in disagreement with Giddings and Markakis, J. Food Sci. 47:361 (1972) in regard to generation of MbO 2 in deaerated solutions, claiming their analysis demonstrates only Mb and Mb(IV) production. Giddings, in a letter, suggests that Whitburn, et al may have used differing systems and approaches which critically changed the radiation chemistry. He also states that radiation sterilization of aerobically packaged meats affects color only slightly. Whitburn, in a reply, shares Dr. Giddings concern for caution in interpretation of results for this system. The compositional changes are dependent on identity of free radicals, dose, O 2 and the time of analysis after irradiation. The quantification of these parameters in pure systems, sarcoplasma extracts and in meat samples should lead to a better understanding of color change mechanisms and how to minimize them International Nuclear Information System (INIS) Anon. 1980-01-01 An outline review notes recent work on total lymphoid irradiation (TLI) as a means of preparing patients for grafts and particularly for bone-marrow transplantation. T.L.I. has proved immunosuppressive in rats, mice, dogs, monkeys and baboons; when given before bone-marrow transplantation, engraftment took place without, or with delayed rejection or graft-versus-host disease. Work with mice has indicated that the thymus needs to be included within the irradiation field, since screening of the thymus reduced skin-graft survival from 50 to 18 days, though irradiation of the thymus alone has proved ineffective. A more lasting tolerance has been observed when T.L.I. is followed by an injection of donor bone marrow. 50% of mice treated in this way accepted allogenic skin grafts for more than 100 days, the animals proving to be stable chimeras with 50% of their peripheral blood lymphocytes being of donor origin. 
Experiments of a similar nature with dogs and baboons were not so successful. (U.K.) International Nuclear Information System (INIS) Hacker-Klom, U.B.; Goehde, W. 2001-01-01 CERN Document Server Stephan, I; Prokert, F; Scholz, A 2003-01-01 The temperature monitoring within the irradiation programme Rheinsberg II was performed by diamond powder monitors. The method is based on the effect of temperature on the irradiation-induced increase of the diamond lattice constant. The method is described by a Russian code. In order to determine the irradiation temperature, the lattice constant is measured by means of an X-ray diffractometer after irradiation and subsequent isochronal annealing. The kink of the linearized temperature-lattice constant curves provides a value for the irradiation temperature. It has to be corrected according to the local neutron flux. The results of the lattice constant measurements show strong scatter. Furthermore there is a systematic error. The results of temperature monitoring by diamond powder are therefore not satisfactory. The most probable value lies between 255 °C and 265 °C and is near the value estimated from the thermal conditions of the irradiation experiments. (A minimal kink-fitting sketch is given after this block of entries.) International Nuclear Information System (INIS) 1987-01-01 Safety measures for nuclear reactors require that the energy which might be liberated in a reactor core during an accident should be contained within the reactor pressure vessel, even after very long irradiation periods. Hence the need to know the mechanical properties, at high deformation velocity, of structural materials that have received irradiation damage during their service. The stainless steels used in reactor structures undergo damage by both thermal and fast neutrons, causing important changes in the mechanical properties of these materials. Various austenitic steels available as structural materials were irradiated, or are under irradiation, in various reactors in order to study the evolution of the mechanical properties at high deformation velocity as a function of the irradiation damage rate. The experiment called AUSTIN (AUstenitic STeel IrradiatioN) 02 was performed by the JRC Petten Establishment on behalf of Ispra in support of the reactor safety programme 1. Consumer acceptance of irradiated foods International Nuclear Information System (INIS) Feenstra, M.H.; Scholten, A.H. 1991-01-01 2. Food irradiation development in Japan International Nuclear Information System (INIS) Kawabata, T. 1981-01-01 3. Gemstone dedicated gamma irradiation development Energy Technology Data Exchange (ETDEWEB) Omi, Nelson M.; Rela, Paulo R. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)] 2007-07-01 4. The wholesomeness of irradiated food International Nuclear Information System (INIS) Elias, P.S.; Matsuyama, A. 1978-01-01
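The Rheinsberg II entry above obtains the irradiation temperature from the kink of the linearized annealing-temperature versus lattice-constant curve. The following sketch is a generic two-segment least-squares fit that locates such a kink; it is not the Russian evaluation code mentioned in that entry, and the synthetic data (a kink near 260 °C, lattice constants in Å) are invented for illustration.

```python
# Generic two-segment ("broken stick") fit to locate the kink in an isochronal
# annealing curve of lattice constant vs. annealing temperature. Not the Russian
# code referred to in the entry above; the data below are synthetic.
import numpy as np

def find_kink(anneal_temp, lattice_const):
    """Try every interior breakpoint, fit a straight line to each side, and
    return the breakpoint temperature with the smallest total squared residual."""
    best_residual, best_temp = np.inf, None
    for k in range(2, len(anneal_temp) - 2):      # keep at least 3 points per segment
        residual = 0.0
        for segment in (slice(None, k + 1), slice(k, None)):
            x, y = anneal_temp[segment], lattice_const[segment]
            coeffs = np.polyfit(x, y, 1)
            residual += float(np.sum((np.polyval(coeffs, x) - y) ** 2))
        if residual < best_residual:
            best_residual, best_temp = residual, anneal_temp[k]
    return best_temp

# Synthetic data: the irradiation-induced lattice expansion is stable until the
# annealing temperature exceeds the (unknown) irradiation temperature, here ~260 degC,
# after which the lattice constant relaxes back toward the unirradiated value.
rng = np.random.default_rng(0)
T = np.arange(100.0, 420.0, 20.0)
a = np.where(T < 260.0, 3.5700, 3.5700 - 4.0e-5 * (T - 260.0))
a = a + rng.normal(0.0, 2.0e-6, T.size)
print(f"estimated irradiation temperature: about {find_kink(T, a):.0f} degC")
```

In practice the value read off the kink would still need the local neutron-flux correction the entry mentions; the fit only recovers the breakpoint.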
5. Soudage par explosion thermique sous charge de cermets poreux à base de TiC-Ni sur substrat en acier-comportement tribologique Welding of porous TiC–Ni based cermets on substrate steel by thermal explosion under load-tribological behaviour Directory of Open Access Journals (Sweden) Lemboub Samia 2013-11-01 Full Text Available Dans ce travail, nous nous intéressons à l'élaboration de cermets à base de TiC-Ni par dispersion de particules de carbures, oxydes ou borures dans une matrice de nickel, grâce à la technique de l'explosion thermique sous une charge de 20 MPa. La combustion de mélanges actifs (Ti-C-Ni-An où An = Al2O3, MgO, SiC, TiB2, WC), basée sur la réaction de synthèse de TiC (ΔHf298K = −184 kJ/mole), génère des cermets complexes. Un court maintien sous charge du cermet à 1373 K, après l'explosion thermique, permet son soudage sur un substrat en acier XC55. Les cermets obtenus dans ces conditions demeurent poreux et conservent une porosité de l'ordre de 25–35 %. La densité relative du cermet, sa dureté et son comportement tribologique dépendront de la nature de l'addition dans les mélanges de départ. Porous TiC-Ni based cermets were obtained by dispersing carbide, oxide or boride particles in a nickel matrix using the thermal explosion technique carried out under a load of 20 MPa. The combustion of active mixtures (Ti-C-Ni-An, where An = Al2O3, MgO, SiC, TiB2 or WC), based on the titanium carbide synthesis reaction (ΔHf = −184 kJ/mol), generates porous complex cermets. After the thermal explosion, a short hold of the combustion product under load at 1373 K allows, at the same time, the cermet to be welded onto a carbon steel substrate. The cermets obtained under these conditions retain a porosity of about 25–35%. The relative density, hardness and tribological behaviour of the complex cermets depend on the nature of the addition (An) in the starting mixtures. International Nuclear Information System (INIS) 1991-01-01 This fact sheet considers the microbiological safety of irradiated food, with special reference to Clostridium botulinum. Irradiated food, like food treated by any 'sub-sterilizing' process, must be handled, packaged and stored following good manufacturing practices to prevent growth and toxin production of C. botulinum. Food irradiation does not lead to increased microbiological hazards, nor can it be used to save already spoiled foods. 4 refs International Nuclear Information System (INIS) Price, R.J.; Haag, G. 1979-07-01 Design data for irradiated graphite are usually presented as families of isothermal curves showing the change in physical property as a function of fast neutron fluence. In this report, procedures for combining isothermal curves to predict behavior under changing irradiation temperatures are compared with experimental data on irradiation-induced changes in dimensions, Young's modulus, thermal conductivity, and thermal expansivity. The suggested procedure fits the data quite well and is physically realistic. International Nuclear Information System (INIS) 1981-01-01 A process for irradiating film is described, which consists of passing the film through an electron irradiation zone having an electron reflection surface disposed behind and generally parallel to the film; and disposing within the irradiation zone, adjacent the edges of the film, a lateral reflection member for reflecting the electrons toward the reflection surface to further reflect the reflected electrons towards the adjacent edges of the film. (author) 9. Neutron irradiation facility and its characteristics International Nuclear Information System (INIS) Oyama, Yukio; Noda, Kenji 1995-01-01 A neutron irradiation facility utilizing spallation reactions with high energy protons is conceived as one of the facilities in 'Proton Engineering center (PEC)' proposed at JAERI.
Characteristics of neutron irradiation field of the facility for material irradiation studies are described in terms of material damage parameters, influence of the pulse irradiation, irradiation environments other than neutronics features, etc., comparing with the other sorts of neutron irradiation facilities. Some perspectives for materials irradiation studies using PEC are presented. (author) International Nuclear Information System (INIS) Smutny, S.; Kupca, L.; Beno, P.; Stubna, M.; Mrva, V.; Chmelo, P. 1975-09-01 The survey and assessment are given of the tasks carried out in the years 1971 to 1975 within the development of methods for structural materials irradiation and of a probe for the irradiation thereof in the A-1 reactor. The programme and implementation of laboratory tests of the irradiation probe are described. In the actual reactor irradiation, the pulse tube length between the pressure governor and the irradiation probe is approximately 20 m, the diameter is 2.2 mm. Temperature reaches 800 degC while the pressure control system operates at 20 degC. The laboratory tests (carried out at 20 degC) showed that the response time of the pressure control system to a stepwise pressure change in the irradiation probe from 0 to 22 at. is 0.5 s. Pressure changes were also studied in the irradiation probe and in the entire system resulting from temperature changes in the irradiation probe. Temperature distribution in the body of the irradiation probe heating furnace was determined. (B.S.) 11. Results of the irradiation of mixed UO{sub 2} - PuO{sub 2} oxide fuel elements; Resultats d'irradiation d'elements combustibles en oxyde mixte UO{sub 2} - PuO{sub 2} Energy Technology Data Exchange (ETDEWEB) Mikailoff, H.; Mustelier, J.P.; Bloch, J.; Ezran, L.; Hayet, L. [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires 1966-07-01 gaine en acier inoxydable etait compris entre 0,06 et 0,27 mm. Les puissances specifiques ont varie de 1230 a 2700 W/cm{sup 3} et la temperature de la gaine etait situee entre 450 et 630 C, Le taux de combustion maximal atteint a ete de 22000 MWj/t. L'examen des aiguilles (metrologie, radiographie et spectrographie {gamma}) a revele certaines modifications macroscopiques, et l'evolution du combustible a ete mise en evidence par la micrographie. On a utilise ces observations, avec les resultats des mesures de flux, pour calculer la repartition des temperatures a l'interieur du combustible. Le volume des gaz de fission degages a ete mesure dans certaines aiguilles: les resultats sont interpretes en liaison avec la repartition des temperatures dans l'oxyde et le taux de combustion atteint. Enfin on a examine d'une part, le comportement d'un element combustible dont la partie centrale etait fondue pendant l'irradiation, et d'autre part, l'action du sodium entre dans certaines aiguilles dont la gaine s'etait rompue. (auteur) 12. Results of the irradiation of mixed UO{sub 2} - PuO{sub 2} oxide fuel elements; Resultats d'irradiation d'elements combustibles en oxyde mixte UO{sub 2} - PuO{sub 2} Energy Technology Data Exchange (ETDEWEB) Mikailoff, H; Mustelier, J P; Bloch, J; Ezran, L; Hayet, L [Commissariat a l' Energie Atomique, Fontenay-aux-Roses (France). Centre d' Etudes Nucleaires 1966-07-01 In order to study the behaviour of fuel elements used for the first charge of the reactor Rapsodie, a first batch of eleven needles was irradiated in the reactor EL3 and then examined. 
These needles (having a shape very similar lo that of the actual needles to be used) were made up of a stack of sintered mixed-oxide pellets: UO{sub 2} containing about 10 per cent of PuO{sub 2}. The density was 85 to 97 per cent of the theoretical, value. The diametral gap between the oxide and the stainless steel can was between 0,06 and 0,27 mm. The specific powers varied from 1230 to 2700 W/cm{sup 3} and the can temperature was between 450 and 630 C. The maximum burn-up attained was 22000 MW days/tonne. Examination of the needles (metrology, radiography and {gamma}-spectrography) revealed certain macroscopic changes, and the evolution of the fuel was shown by micrographic studies. These observations were used, together with flux measurements results, to calculate the temperature distribution inside the fuel. The volume of the fission gas produced was measured in some of the samples; the results are interpreted taking into account the temperature distribution in the oxide and the burn-up attained. Finally a study was made both of the behaviour of a fuel element whose central part was molten during irradiation, and of the effect of sodium which had penetrated into some of the samples following can rupture. (author) [French] Afin d'etudier le comportement des elements combustibles destines a la premiere charge du reacteur Rapsodie, une premiere serie de onze aiguilles a ete irradiee dans le reacteur EL3 et examinee apres irradiation. Ces aiguilles (aux caracteristiques geometriques tres proches de celles des aiguilles definitives) etaient constituees d'un empilement de pastilles frittees en oxyde mixte UO{sub 2} a 10 pour cent environ de PuO{sub 2}, dont la densite etait comprise entre 85 et 97 pour cent de la densite theorique. Le jeu diametral entre l'oxyde et la gaine en acier 13. Irradiation of fruit and vegetables International Nuclear Information System (INIS) O'Beirne, David 1987-01-01 There is likely to be less economic incentive to irradiate fruits and vegetables compared with applications which increase the safety of foods such as elimination of Salmonella or decontamination of food ingredients. Of the fruit and vegetable applications, irradiation of mushrooms may offer the clearest economic benefits in North-Western Europe. The least likely application appears to be sprout inhibition in potatoes and onions, because of the greater efficiency and flexibility of chemical sprout inhibitors. In the longer-term, combinations between irradiation/MAP/other technologies will probably be important. Research in this area is at an early stage. Consumer attitudes to food irradiation remain uncertain. This will be a crucial factor in the commercial application of the technology and in the determining the balance between utilisation of irradiation and of technologies which compete with irradiation. (author) 14. Status of irradiation capsule design International Nuclear Information System (INIS) Nagata, Hiroshi; Yamaura, Takayuki; Nagao, Yoshiharu 2013-01-01 For the irradiation test after the restart of JMTR, further precise temperature control and temperature prediction are required. In the design of irradiation capsule, particularly sophisticated irradiation temperature prediction and evaluation are urged. Under such circumstance, among the conventional design techniques of irradiation capsule, the authors reviewed the evaluation method of irradiation temperature. 
In addition, for the improvement of use convenience, this study examined and improved FINAS/STAR code in order to adopt the new calculation code that enables a variety of analyses. In addition, the study on the common use of the components for radiation capsule enabled the shortening of design period. After the restart, the authors will apply this improved calculation code to the design of irradiation capsule. (A.O.) 15. Food irradiation scenario in India International Nuclear Information System (INIS) Thomas, Paul 1998-01-01 Over 3 decades of research and developmental effort in India have established the commercial potential for food irradiation to reduce post-harvest losses and to ensure food safety. Current regulations permit irradiation of onions, potatoes and spices for domestic consumption and operation of commercial irradiators for treatment of food. In May 1997 draft rules have been notified permitting irradiation of several additional food items including rice, wheat products, dry fruits, mango, meat and poultry. Consumers and food industry have shown a positive attitude to irradiated foods. A prototype commercial irradiator for spices set up by Board of Radiation and Isotope Technology (BRIT) is scheduled to commence operation in early 1998. A commercial demonstration plant for treatment of onions is expected to be operational in the next 2 years in Lasalgaon, Nashik district. (author) 16. International document on food irradiation International Nuclear Information System (INIS) 1990-06-01 This international document highlights the major issues related to the acceptance of irradiated food by consumers, governmental and intergovernmental activities, the control of the process, and trade. The conference recognized that: Food irradiation has the potential to reduce the incidence of foodborne diseases. It can reduce post-harvest food losses and make available a larger quantity and a wider variety of foodstuffs for consumers. Regulatory control by competent authorities is a necessary prerequisite for introduction of the process. International trade in irradiated foods would be facilitated by harmonization of national procedures based on internationally recognized standards for the control of food irradiation. Acceptance of irradiated food by the consumer is a vital factor in the successful commercialization of the irradiation process, and information dissemination can contribute to this acceptance 17. Market testing of irradiated food International Nuclear Information System (INIS) Duc, Ho Minh 2001-01-01 Viet Nam has emerged as one of the three top producers and exporters of rice in the world. Tropical climate and poor infrastructure of preservation and storage lead to huge losses of food grains, onions, dried fish and fishery products. Based on demonstration irradiation facility pilot scale studies and marketing of irradiated rice, onions, mushrooms and litchi were successfully undertaken in Viet Nam during 1992-1998. Irradiation technology is being used commercially in Viet Nam since 1991 for insect control of imported tobacco and mould control of national traditional medicinal herbs by both government and private sectors. About 30 tons of tobacco and 25 tons of herbs are irradiated annually. Hanoi Irradiation Centre has been continuing open house practices for visitors from school, universities and various different organizations and thus contributed in improved public education. 
Consumers were found to prefer irradiated rice, onions, litchi and mushrooms over those nonirradiated. (author) 18. Gamma irradiation service in Mexico International Nuclear Information System (INIS) Liceaga C, G.; Martinez A, L.; Mendez T, D.; Ortiz A, G.; Olvera G, R. 1997-01-01 In 1980 it was installed in Mexico, on the National Institute of Nuclear Research, an irradiator model J S-6500 of a canadian manufacture. Actually, this is the greatest plant in the Mexican Republic that offers a gamma irradiation process at commercial level to diverse industries. However, seeing that the demand for sterilize those products were not so much as the irradiation capacity it was opted by the incursion in other types of products. During 17 years had been irradiated a great variety of products grouped of the following form: dehydrated foods, disposable products for medical use, cosmetics, medicaments, various. Nowadays the capacity of the irradiator is saturated virtue of it is operated the 24 hours during the 365 days of the year and only its operation is suspended by the preventive and corrective maintenance. However, the fresh food market does not be attended since this irradiator was designed for doses greater than 10 kGy (1.0 Mrad) 19. Food irradiation: chemistry and applications International Nuclear Information System (INIS) Thakur, B.R.; Singh, R.K. 1994-01-01 Food irradiation is one of the most extensively and thoroughly studied methods of food preservation. Despite voluminous data on safety and wholesomeness of irradiated foods, food irradiation is still a “process in waiting.” Although some countries are allowing the use of irradiation technology on certain foods, its full potential is not recognized. Only 37 countries worldwide permit the use of this technology. If used to its full potential, food irradiation can save millions of human lives being lost annually due to food‐borne diseases or starvation and can add billions of dollars to the world economy. This paper briefly reviews the history and chemistry of food irradiation along with its main applications, impediments to its adoption, and its role in improving food availability and health situation, particularly in developing countries of the world 20. Irradiation creep models - an overview International Nuclear Information System (INIS) Matthews, J.R.; Finnis, M.W. 1988-01-01 The modelling of irradiation creep is now highly developed but many of the basic processes underlying the models are poorly understood. A brief introduction is given to the theory of cascade interactions, point defect clustering and dislocation climb. The range of simple irradiation creep models is reviewed including: preferred nucleation of interstitial loops; preferred absorption of point defects by dislocations favourably orientated to an applied stress; various climb-enhanced glide and recovery mechanisms, and creep driven by internal stresses produced by irradiation growth. A range of special topics is discussed including: cascade effects; creep transients; structural and induced anisotropy; and the effect of impurities. The interplay between swelling and growth with thermal and irradiation creep is emphasized. A discussion is given on how irradiation creep theory should best be developed to assist the interpretation of irradiation creep observations and the requirements of reactor designers. (orig.) International Nuclear Information System (INIS) Beyers, M. 
1983-08-01 International Nuclear Information System (INIS) Perraudin, Claude; Amarge, Edmond; Guiho, J.-P.; Horiot, J.-C.; Taniel, Gerard; Viel, Georges; Brethon, J.-P. 1981-01-01 The invention refers to an irradiation appliance making use of radioactive sources such as cobalt 60. This invention concerns an irradiation appliance delivering an easily adjustable irradiation beam in accurate dimensions and enabling the radioactive sources to be changed without making use of intricate manipulations at the very place where the appliance has to be used. This kind of appliance is employed in radiotherapy [fr 3. Irradiance sensors for solar systems Energy Technology Data Exchange (ETDEWEB) Storch, A.; Schindl, J. [Oesterreichisches Forschungs- und Pruefzentrum Arsenal GesmbH, Vienna (Austria). Business Unit Renewable Energy 2004-07-01 The presented project surveyed the quality of irradiance sensors used for applications in solar systems. By analysing an outdoor measurement, the accuracies of ten commercially available irradiance sensors were evaluated, comparing their results to those of a calibrated Kipp and Zonen pyranometer CM21. Furthermore, as a simple method for improving the quality of the results, for each sensor an irradiance-calibration was carried out and examined for its effectiveness. (orig.) 4. Dose Distribution of Gamma Irradiators International Nuclear Information System (INIS) Park, Seung Woo; Shin, Sang Hun; Son, Ki Hong; Lee, Chang Yeol; Kim, Kum Bae; Jung, Hai Jo; Ji, Young Hoon 2010-01-01 5. Progress in food irradiation: Australia Energy Technology Data Exchange (ETDEWEB) Wills, P A 1982-11-01 Progress in food irradiation treatment of Australian commodities, such as meat, pepper, honey, fruit is described. Irradiation took place with /sup 60/Co gamma radiation while testing for radiation sensitivity of Staphyllococcus in meat, of Bacillus aureus in pepper, of Streptococcus plutin and Bacillus larvae in honey, and of the fruitfly Dacus tryoni infesting fruit. So far, two State Health Commissions in Australia have authorised irradiation of shrimps with their sale being restricted to the State authorising treatment. Energy Technology Data Exchange (ETDEWEB) Ahmed, M 1982-11-01 The Bangladesh contribution deals with fish preservation by irradiation and, in this context, with the radiosensitivity of mesophilic and psychophilic microorganisms. Sprouting inhibition is studied with potatoes and onions. A further part deals with irradiation of spices. Mutagenicity tests were carried on rats and mice fed with irradiated fish. The tests were performed at the Institute for Food and Radiation Biology, near Dacca in December 1981. 7. Irradiation of spices and herbs International Nuclear Information System (INIS) Eiss, M.I. 1983-01-01 Gamma irradiation has been extensively studied as a means of reducing the microbial contamination of spices. Experiments indicate that spices, with water contents of 4.5-12%, are very resistant to physical or chemical change when irradiated. Since spices are used primarily as food flavoring agents, their flavor integrity must not be changed by the process. Sensory and food applications analysis indicate no significant difference between irradiated samples and controls for all spices tested 8. Progress in food irradiation: Australia International Nuclear Information System (INIS) Wills, P.A. 1982-01-01 Progress in food irradiation treatment of Australian commodities, such as meat, pepper, honey, fruit is described. 
Irradiation took place with 60 Co gamma radiation while testing for radiation sensitivity of Staphyllococcus in meat, of Bacillus aureus in pepper, of Streptococcus plutin and Bacillus larvae in honey, and of the fruitfly Dacus tryoni infesting fruit. Sofar, two State Health Commissions in Australia have authorised irradiation of shrimps with their sale being restricted to the State authorising treatment. (AJ) [de 9. Desinfestation of soybeans by irradiation International Nuclear Information System (INIS) Alvarez, M.; Prieto, E.; Mesa, J.; Fraga, R.; Fung, V. 1996-01-01 The effect of irradiation with the doses 0.5 and 1.0 kGy on desinfestation of soy beans and on important chemical compounds of this product was studied in this paper. The results showed the effectiveness of applied doses in the control of insect pests of soy beans during its storage and total proteins, fat and moisture and also the identity and quality characteristics of oil extrated from irradiated product which were not change by irradiation [es 10. National symposium on food irradiation International Nuclear Information System (INIS) 1979-10-01 This report contains abstracts of papers delivered at the National symposium on food irradiation held in Pretoria. The abstracts have been grouped into the following sections: General background, meat, agricultural products, marketing and radiation facilities - cost and plant design. Each abstract has been submutted separately to INIS. Tables listing irradiated food products cleared for human consumption in different countries are given as well as a table listing those irradiated food items that have been cleared in South Africa International Nuclear Information System (INIS) 1990-01-01 Canada has been in the forefront of irradiation technology for over 30 years. Some 83 of the 147 irradiators used worldwide are Canadian-built, yet Canadian food processors have been very slow to use the technology. This paper is an update on the food irradiation regulatory situation in Canada and the factors that influence it. It also reviews some significant non-regulatory developments. (author) International Nuclear Information System (INIS) Manes, K.R.; Ahlstrom, H.G.; Coleman, L.W.; Storm, E.K.; Glaze, J.A.; Hurley, C.A.; Rienecker, F.; O'Neal, W.C. 1977-01-01 The first laser/plasma studies performed with the Shiva laser system will be two sided irradiations extending the data obtained by other LLL lasers to higher powers. The twenty approximately 1 TW laser pulses will reach the target simultaneously from above and below in nested pentagonal clusters. The upper and lower clusters of ten beams each are radially polarized so that they strike the target in p-polarization and maximize absorption. This geometry introduces laser system isolation problems which will be briefly discussed. The layout and types of target diagnostics will be described and a brief status report on the facility given International Nuclear Information System (INIS) Miettinen, J.K. 1974-01-01 Food preservation by means of ionizing radiation has been technically feasible for more than a decade. Its utilization could increase food safety, extend the transport and shell life of foods, cut food losses, and reduce dependence upon chemical additives. The prime obstacles have been the strict safety requirements set by health authorities to this preservation method and the high costs of the long-term animal tests necessary to fulfil these requirements. 
An International Food Irradiation Project, expected to establish the toxicological safety of 10 foods by about 1976, is described in some detail. (author) International Nuclear Information System (INIS) Barrett, A. 1988-01-01 This paper describes body irradiation (TBI) being used increasingly as consolidation treatment in the management of leukaemia, lymphoma and various childhood tumours with the aim of sterilizing any malignant cells or micrometastases. Systemic radiotherapy as an adjunct to chemotherapy offers several possible benefits. There are no sanctuary sites for TBI; some neoplastic cells are very radiosensitive, and resistance to radiation appears to develop less readily than to drugs. Cross-resistance between chemotherapy and radiotherapy does not seem to be common and although plateau effects may be seen with chemotherapy there is a linear dose-response curve for clonogenic cell kill with radiation International Nuclear Information System (INIS) 1980-01-01 Conventional neutron irradiation therapy machines, based on the use of cyclotrons for producing neutron beams, use a superconducting magnet for the cyclotron's magnetic field. This necessitates complex liquid He equipment and presents problems in general hospital use. If conventional magnets are used, the weight of the magnet poles considerably complicates the design of the rotating gantry. Such a therapy machine, gantry and target facilities are described in detail. The use of protons and deuterons to produce the neutron beams is compared and contrasted. (U.K.) Energy Technology Data Exchange (ETDEWEB) Scarlatescu, Ioana, E-mail: [email protected]; Avram, Calin N. [Faculty of Physics, West University of Timisoara, Bd. V. Parvan 4, 300223 Timisoara (Romania); Virag, Vasile [County Hospital “Gavril Curteanu” - Oradea (Romania) 2015-12-07 In this paper we present one treatment plan for irradiation cases which involve a complex technique with multiple beams, using the 3D conformational technique. As the main purpose of radiotherapy is to administrate a precise dose into the tumor volume and protect as much as possible all the healthy tissues around it, for a case diagnosed with a primitive neuro ectoderm tumor, we have developed a new treatment plan, by controlling one of the two adjacent fields used at spinal field, in a way that avoids the fields superposition. Therefore, the risk of overdose is reduced by eliminating the field divergence. 17. Commercial implementation of food irradiation International Nuclear Information System (INIS) Welt, M.A. 1985-01-01 Recent positive developments in regulatory matters involving food irradiation appear to be opening the door to commercial implementation of the technology. Experience gained over five years in operating multi-purpose food irradiation facilities in the United States have demonstrated the technical and economic feasibility of the radiation preservation of food for a wide variety of purposes. Public education regarding food irradiation has been intensified especially with the growing favorable involvement of food trade associations, the USDA, and the American Medical Association. After 41 years of development effort, food irradiation will become a commercial reality in 1985. (author) 18. Irradiation's promise: fewer foodborne illnesses? International Nuclear Information System (INIS) Roberts, T. 1986-01-01 Food irradiation offers a variety of potential benefits to the food supply. 
It can delay ripening and sprouting of fruits and vegetables, and substitute for chemical fumigants to kill insects. However, one of the most important benefits of food irradiation is its potential use for destroying microbial pathogens that enter the food supply, including the two most common disease causing bacteria: salmonella and campylobacter. Animal products are one of the primary carriers of pathogens. Food borne illnesses are on the rise, and irradiation of red meats and poultry could significantly reduce their occurrence. Food irradiation should be examined more closely to determine its possible benefits in curtailing microbial diseases 19. International Developments of Food Irradiation Energy Technology Data Exchange (ETDEWEB) Loaharanu, P. [Head, Food Preservation Section, Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture, Wagramerstr. 5, A-1400, Vienna (Austria) 1997-12-31 Food irradiation is increasingly accepted and applied in many countries in the past decade. Through its use, food losses and food-borne diseases can be reduced significantly, and wider trade in many food items can be facilitated. The past five decades have witnessed a positive evolution on food irradiation according to the following: 1940s: discovery of principles of food irradiation; 1950s: initiation of research in advanced countries; 1960s: research and development were intensified in some advanced and developing countries; 1970s: proof of wholesomeness of irradiated foods; 1980s: establishment of national regulations; 1990s: commercialization and international trade. (Author) 20. Spices, irradiation and detection methods International Nuclear Information System (INIS) Sjoeberg, A.M.; Manninen, M. 1991-01-01 This paper is about microbiological aspects of spices and microbiological methods to detect irradiated food. The proposed method is a combination of the Direct Epifluorescence Filter Technique (DEFT) and the Aerobic Plate Count (APC). The evidence for irradiation of spices is based on the demonstration of a higher DEFT count than the APC. The principle was first tested in our earlier investigation in the detection of irradiation of whole spices. The combined DEFT+APC procedure was found to give a fairly reliable indication of whether or not a whole spice sample had been irradiated. The results are given (8 figs, 22 refs) 1. Food irradiation and the consumer International Nuclear Information System (INIS) Thomas, P.A. 1990-01-01 2. International Developments of Food Irradiation Energy Technology Data Exchange (ETDEWEB) Loaharanu, P [Head, Food Preservation Section, Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture, Wagramerstr. 5, A-1400, Vienna (Austria) 1998-12-31 Food irradiation is increasingly accepted and applied in many countries in the past decade. Through its use, food losses and food-borne diseases can be reduced significantly, and wider trade in many food items can be facilitated. The past five decades have witnessed a positive evolution on food irradiation according to the following: 1940s: discovery of principles of food irradiation; 1950s: initiation of research in advanced countries; 1960s: research and development were intensified in some advanced and developing countries; 1970s: proof of wholesomeness of irradiated foods; 1980s: establishment of national regulations; 1990s: commercialization and international trade. (Author) 3. International Developments of Food Irradiation International Nuclear Information System (INIS) Loaharanu, P. 
1997-01-01 Food irradiation is increasingly accepted and applied in many countries in the past decade. Through its use, food losses and food-borne diseases can be reduced significantly, and wider trade in many food items can be facilitated. The past five decades have witnessed a positive evolution on food irradiation according to the following: 1940's: discovery of principles of food irradiation; 1950's: initiation of research in advanced countries; 1960's: research and development were intensified in some advanced and developing countries; 1970's: proof of wholesomeness of irradiated foods; 1980's: establishment of national regulations; 1990's: commercialization and international trade. (Author) 4. Societal benefits of food irradiation International Nuclear Information System (INIS) 2013-01-01 Food irradiation has a direct impact on society by reducing the occurrence of food-borne illness, decreasing food spoilage and waste, and facilitating global trade. Food irradiation is approved in 40 countries around the world to decontaminate food of disease and spoilage causing microorganisms, sterilize insect pests, and inhibit sprouting. A recent estimate suggests that 500,000 metric tons of food is currently irradiated worldwide, primarily to decontaminate spices. Since its first use in the 1960s the use of irradiation for food has grown slowly, but it remains the major technology of choice for certain applications. The largest growth sector in recent years has been phytosanitary irradiation of fruit to disinfest fruit intended for international shipment. For many countries which have established strict quarantine standards, irradiation offers an effective alternative to chemical fumigants, some of which are being phased out due to their effects on the ozone layer. Insects can be sterilized at very low dose levels, so the quality of the fruit can be maintained. Irradiation is also highly effective in destroying microbial pathogens such as Salmonella spp., E. coli, and Listeria, hence its application for treatment of spices, herbs, dried vegetables, frozen seafood, poultry, and meat and its contribution to reducing foodborne illnesses. Unfortunately the use of irradiation for improving food safety has been under-exploited. This presentation will provide details on the use, benefits, opportunities, and challenges of food irradiation. (author) 5. Status of food irradiation worldwide International Nuclear Information System (INIS) Loaharanu, P. 1992-01-01 The past four decades have witnessed the steady development of food irradiation technology - from laboratory-scale research to full-scale commercial application. The present status of this technology, approval for processing food items in 37 countries and commercial use of irradiated food in 24 countries, will be discussed. The trend in the use of irradiation to overcome certain trade barriers such as quarantine and hygiene will be presented. Emphasis will be placed on the use of irradiation as an alternative to chemical treatments of food. (orig.) 6. Contribution to the study of physico-chemical properties of surfaces modified by laser treatment. Application to the enhancement of localized corrosion resistance of stainless steels; Contribution a l'etude des proprietes physico-chimiques des surfaces modifiees par traitement laser. Application a l'amelioration de la resistance a la corrosion localisee des aciers inoxydables Energy Technology Data Exchange (ETDEWEB) Pacquentin, W. 2011-11-25
... integrity over increasingly long periods. The objective of this thesis work is to evaluate the potential of a laser remelting treatment for improving the corrosion resistance of a 304L-type stainless steel; the use of lasers in the field of surface treatments is a rapidly evolving process owing to recent changes in laser technology. In this work, the laser chosen was a nanosecond-pulsed ytterbium-doped fibre laser whose characteristics allow the quasi-instantaneous melting of a few microns of the treated surface, immediately followed by ultra-fast solidification with cooling rates that can reach 10^11 K/s. The combination of these processes favours the elimination of surface defects, the formation of non-equilibrium phases, the segregation of chemical elements and the formation of a new oxide layer whose properties are governed by the laser parameters. In order to correlate them with the electrochemical reactivity of the surface, the influence of two laser parameters on the physico-chemical properties of the surface was studied: the laser power and the overlap ratio of the laser impacts. To clarify these relationships, the pitting corrosion resistance of the treated surfaces was determined by electrochemical tests. For specific laser parameters, the pitting potential of a 304L-type stainless steel increases by more than 500 mV, reflecting a better resistance to localized corrosion in a chloride medium. The interdependence of the different phenomena resulting from the laser treatment made it complex to rank their effects on the sensitivity of the tested alloy. However, it was shown that the nature of the thermal oxide formed during laser remelting, and its defects, are of first-order importance for the initiation of ... (A minimal sketch of reading a pitting potential from polarization data is given after this block of entries.) International Nuclear Information System (INIS) 1991-01-01 This fact sheet focusses on the question of whether irradiation can be used to make spoiled food good. No food processing procedures can substitute for good hygienic practices, and good manufacturing practices must be followed in the preparation of food whether or not the food is intended for further processing by irradiation or any other means. 3 refs International Nuclear Information System (INIS) 1991-01-01 This fact sheet considers the effects of food irradiation on packaging materials. Extensive research has shown that almost all commonly used food packaging materials tested are suitable for use. Furthermore, many packaging materials are themselves routinely sterilized by irradiation before being used. 2 refs International Nuclear Information System (INIS) 1978-01-01 The irradiation of polymer film material is a strengthening procedure. To obtain a substantial uniformity in the radiation dosage profile, the film is irradiated in a trough having lateral deflection blocks adjacent to the film edges. These deflect the electrons towards the surface of the trough bottom for further deflection towards the film edge. (C.F.)
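The laser surface-treatment entry above reports that, for suitable laser parameters, electrochemical tests show the pitting potential of 304L rising by more than 500 mV. As a generic counterpart, the sketch below reads a pitting potential off an anodic polarization curve using a fixed current-density criterion; the 100 µA/cm² criterion and the synthetic curves are assumptions for illustration only and are not taken from that thesis.

```python
# Illustration only: estimating a pitting potential from anodic polarization data,
# as a generic counterpart to the electrochemical tests mentioned in the laser
# surface-treatment entry above. The 100 uA/cm^2 criterion and the curves below
# are assumptions for this sketch, not values from that work.
import numpy as np

def pitting_potential(potential_mv, current_ua_cm2, criterion_ua_cm2=100.0):
    """Return the first potential (in the anodic sweep) at which the current
    density exceeds the chosen criterion, interpolating between data points."""
    above = np.nonzero(current_ua_cm2 >= criterion_ua_cm2)[0]
    if above.size == 0:
        return None                       # no breakdown detected in the sweep
    i = above[0]
    if i == 0:
        return float(potential_mv[0])
    e0, e1 = potential_mv[i - 1], potential_mv[i]
    j0, j1 = current_ua_cm2[i - 1], current_ua_cm2[i]
    return float(e0 + (criterion_ua_cm2 - j0) * (e1 - e0) / (j1 - j0))

# Synthetic curves: a passive current of a few uA/cm^2, then a sharp rise at breakdown.
E = np.arange(-200.0, 1001.0, 5.0)                  # mV vs. an arbitrary reference
j_untreated = 2.0 + np.exp((E - 300.0) / 20.0)      # current rises sharply above ~ +300 mV
j_treated   = 2.0 + np.exp((E - 850.0) / 20.0)      # breakdown shifted anodically
print("E_pit untreated:", round(pitting_potential(E, j_untreated)), "mV")
print("E_pit treated:  ", round(pitting_potential(E, j_treated)), "mV")
```

Comparing the two printed values reproduces the kind of anodic shift the entry describes; real measurements would of course apply the laboratory's own breakdown criterion.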
International Nuclear Information System (INIS) Plodinec, M.J. 1982-01-01 The precipitation process for the decontamination of soluble SRP wastes produces a material whose radioactivity is dominated by 137Cs. Potentially, this material could be vitrified to produce irradiation sources similar to the Hanford CsCl sources. In this report, process steps necessary for the production of cesium glass irradiation sources (CGS), and the nature of the sources produced, are examined. Three options are considered in detail: direct vitrification of precipitation process waste; direct vitrification of this waste after organic destruction; and vitrification of cesium separated from the precipitation process waste. Direct vitrification is compatible with DWPF equipment, but process rates may be limited by high levels of combustible materials in the off-gas. Organic destruction would allow more rapid processing. In both cases, the source produced has a dose rate of 2 x 10^4 rads/hr at the surface. Cesium separation produces a source with a dose rate of 4 x 10^5 rads/hr at the surface, which is nearer that of the Hanford sources (2 x 10^6 rads/hr). Additional processing steps would be required, as well as R and D to demonstrate that DWPF equipment is compatible with this intensely radioactive material. Science.gov (United States) Josephson, Edward S. International Nuclear Information System (INIS) Josephson, E.S. 1981-01-01 International Nuclear Information System (INIS) Hallman, Guy J. 2012-01-01 The history of the development of generic phytosanitary irradiation (PI) treatments is discussed, beginning with its initial proposal in 1986. Generic PI treatments in use today are 150 Gy for all hosts of Tephritidae, 250 Gy for all arthropods on mango and papaya shipped from Australia to New Zealand, 300 Gy for all arthropods on mango shipped from Australia to Malaysia, 350 Gy for all arthropods on lychee shipped from Australia to New Zealand and 400 Gy for all hosts of insects other than pupae and adult Lepidoptera shipped to the United States. Efforts to develop additional generic PI treatments and reduce the dose for the 400 Gy treatment are ongoing within a broad-based 5-year, 12-nation cooperative research project coordinated by the joint Food and Agriculture Organization/International Atomic Energy Agency Program on Nuclear Techniques in Food and Agriculture. Key groups identified for further development of generic PI treatments are Lepidoptera (eggs and larvae), mealybugs and scale insects. A dose of 250 Gy may suffice for these three groups plus others, such as thrips, weevils and whiteflies. - Highlights: ► The history of phytosanitary irradiation (PI) treatments is given. ► Generic PI treatments in use today are discussed. ► Suggestions for future research are presented. ► A dose of 250 Gy for most insects may suffice. International Nuclear Information System (INIS) Gruenewald, T.; Rumpf, G.; Troemel, I.; Bundesforschungsanstalt fuer Ernaehrung, Karlsruhe 1978-07-01 The results of several test series on the storage of irradiated and non-irradiated German-grown onions are reported. The influence of the irradiation conditions, such as time and dose, and of the storage conditions on sprouting, spoilage, browning of the vegetation centres, composition of the onions, firmness and sensory properties was investigated for seven different onion varieties. If the onions were irradiated during the dormancy period following harvest, a dose of 50 Gy (5 krad) was sufficient to prevent sprouting. For the irradiated onions, however, it was not possible, by varying the storage conditions within the limits set by practical requirements, to extend the dormancy period or to prevent browning of the vegetation centres. (orig.) 15. Consumer acceptance of irradiated poultry International Nuclear Information System (INIS) Hashim, I.B.; Resurreccion, A.V.A.; McWatters, K.H. 1995-01-01 16.
Food irradiation development: Malaysian perspective International Nuclear Information System (INIS) Zainon Othman 1997-01-01 17. The wholesomeness of irradiated food International Nuclear Information System (INIS) Elias, P.S. 1976-01-01 18. Consumer acceptance of irradiated poultry. Science.gov (United States) Hashim, I B; Resurreccion, A V; McWatters, K H 1995-08-01 19. Detection methods of irradiated foodstuffs Energy Technology Data Exchange (ETDEWEB) Ponta, C C; Cutrubinis, M; Georgescu, R [IRASM Center, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-077125 Magurele-Bucharest (Romania); Mihai, R [Life and Environmental Physics Department, Horia Hulubei National Institute for Physics and Nuclear Engineering, PO Box MG-6, RO-077125 Magurele-Bucharest (Romania); Secu, M [National Institute of Materials Physics, Bucharest (Romania) 2005-07-01 20. Consumer attitude toward food irradiation International Nuclear Information System (INIS) Bruhn, C.M.M. 1986-01-01 1. Food irradiation: contaminating our food International Nuclear Information System (INIS) Piccioni, R. 1988-01-01 2. The PHENIX experimental irradiation program International Nuclear Information System (INIS) Michel, P.; Courcon, P.; Coulon, P. 1985-03-01 The PHENIX experimental irradiation program represents a substancial volume of work. For example, more than forty experiments were in the core during the 33rd PHENIX irradiation cycle at the end of 1984. This program ensures the implementation, optimization and qualification of new solutions for the future developpment of French LMFBRs in three significant areas: fissile, fertile and absorber elements 3. Food irradiation and bacterial toxins International Nuclear Information System (INIS) Tranter, H.S.; Modi, N.K.; Hambleton, P.; Melling, J.; Rose, S.; Stringer, M.F. 1987-01-01 The authors' findings indicate that irradiation confers no advantage over heat processing in respect of bacterial toxins (clostridium botulinum, neurotoxin A and staphylococcal enterotoxin A). It follows that irradiation at doses less than the ACINF recommended upper limit of 10 kGy could not be used to improve the ambient temperature shelf life on non-acid foods. (author) 4. Nutritional aspects of irradiated shrimp International Nuclear Information System (INIS) Shamsuzzaman, K. 1989-11-01 Data available in the literature on the nutritional aspects of irradiated shrimp are reviewed and the indication is that irradiation of shrimp at doses up to about 3.2 kGy does not significantly affect the levels of its protein, fat, carbohydrate and ash. There are no reports on the effect of irradiation of shrimp above 3.2 kGy on these components. Limited information available indicates that there are some minor changes in the fatty acid composition of shrimp as a result of irradiation. Irradiation also causes some changes in the amino acid composition of shrimp; similar changes occur due to canning and hot-air drying. Some of the vitamins in shrimp, such as thiamine, are lost as a result of irradiation but the loss is less extensive than in thermally processed shrimp. Protein quality of shrimp, based on the growth of rats as well as that of Tetrahymena pyriformis, is not affected by irradiation. No adverse effects attributed to irradiation were found either in short-term or long-term animal feeding tests 5. Sensorial evaluation of irradiated mangoes International Nuclear Information System (INIS) Broisler, Paula Olhe; Cruz, Juliana Nunes da; Sabato, Susy Frey 2007-01-01 6. 
Sensorial evaluation of irradiated mangoes Energy Technology Data Exchange (ETDEWEB) Broisler, Paula Olhe; Cruz, Juliana Nunes da; Sabato, Susy Frey [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)]. E-mails: [email protected]; [email protected]; [email protected] 2007-07-01 7. ASEAN workshop on food irradiation International Nuclear Information System (INIS) 1985-01-01 This proceedings was organized by the ASEAN Food Handling Bureau in Collaboration with the Thai Atomic Energy Commission for Peace. Experts from ASEAN and overseas were invited to present a series of papers covering the state of the art of irradiation technology and the important issues relating to food irradiation 8. Progress in food irradiation: Germany International Nuclear Information System (INIS) Diehl, J.F. 1978-01-01 The report on German results of irradiation of food covers the period from 1975 to 1978. Special attention is paid to the radiation-chemical changes of food, radiation microbiology with regard to clostridium botulinum, the enzymatic behaviour, physical alterations in several sorts of meat and vegetables after irradiation, tolerability investigations, mutagenicity tests, and, finally, legal aspects. (AJ) [de 9. Optical absorption of irradiated carbohydrates International Nuclear Information System (INIS) Supe, A.A.; Tiliks, Yu.E. 1994-01-01 The optical absorption spectra of γ-irradiated carbohydrates (glucose, lactose, sucrose, maltose, and starch) and their aqueous solutions were studied. The comparison of the data obtained with the determination of the concentrations of molecular and radical products of radiolysis allows the absorption bands with maxima at 250 and 310 nm to be assigned to the radicals trapped in the irradiated carbohydrates 10. National symposium on food irradiation International Nuclear Information System (INIS) Beyers, M.; Brodrick, H.T.; Van Niekerk, W.C.A. 1980-01-01 This report contains proceedings of papers delivered at the national symposium on food irradiation held in Pretoria. The proceedings have been grouped into the following sections: general background; meat; agricultural products; marketing; and radiation facilities - cost and plant design. Each paper has been submitted separately to INIS. Tables listing irradiated food products cleared for human consumption in different countries are given 11. Electron irradiation of power transistors International Nuclear Information System (INIS) Hower, P.L.; Fiedor, R.J. 1982-01-01 A method for reducing storage time and gain parameters in a semiconductor transistor includes the step of subjecting the transistor to electron irradiation of a dosage determined from measurements of the parameters of a test batch of transistors. Reduction of carrier lifetime by proton bombardment and gold doping is mentioned as an alternative to electron irradiation. (author) 12. An overview of food irradiation International Nuclear Information System (INIS) Stevenson, M.H. 1991-01-01 This outline survey reviews the subject of food irradiation under the following headings:- brief history, the process (sources, main features of a food processing facility, interaction of radiation with food, main applications of the technology, packaging) consumer concerns (safety, nutritional changes, labelling, detection), international use of food irradiation and legal aspects. (UK) 13. The safety of irradiated foods International Nuclear Information System (INIS) Selman, J.D. 
1988-01-01 This state of the art outline review written for 'Food Manufacture' looks at the wholesomeness of irradiated foods, and makes a comparison with conventionally treated products. Topics mentioned are doses, radioresistance of microorganisms especially clostudium botulinum and the problem of bacterial toxins, storage conditions, nutrition, especially vitamin loss, and detection of irradiation. (U.K.) 14. Food irradiation seminar: Asia and the Pacific International Nuclear Information System (INIS) Mitchell, G.E. 1986-01-01 The report covers the Seminar for Asia and the Pacific on the practical application of food irradiation. The seminar assessed the practical application of food irradiation processes, commercial utilisation and international trade of irradiated food International Nuclear Information System (INIS) Hernandes, N.K.; Vital, H. de C.; Sabaa-Srur, A.U.O. 2003-01-01 16. Dosimetry of blood irradiator - 2000 International Nuclear Information System (INIS) Mhatre, Sachin G.V.; Shinde, S.H.; Bhat, R.M.; Rao, Suresh; Sharma, D.N. 2008-01-01 17. Mechanical properties of irradiated beryllium International Nuclear Information System (INIS) Beeston, J.M.; Longhurst, G.R.; Wallace, R.S. 1992-01-01 Beryllium is planned for use as a neutron multiplier in the tritium breeding blanket of the International Thermonuclear Experimental Reactor (ITER). After fabricating samples of beryllium at densities varying from 80 to 100% of the theoretical density, we conducted a series of experiments to measure the effect of neutron irradiation on mechanical properties, especially strength and ductility. Samples were irradiated in the Advanced Test Reactor (ATR) to a neutron fluence of 2.6 x 10 25 n/m 2 (E > MeV) at an irradiation temperature of 75deg C. These samples were subsequently compression-tested at room temperature, and the results were compared with similar tests on unirradiated specimens. We found that the irradiation increased the strength by approximately four times and reduced the ductility to approximately one fourth. Failure was generally ductile, but the 80% dense irradiated samples failed in brittle fracture with significant generation of fine particles and release of small quantities of tritium. (orig.) 18. Mechanical properties of irradiated beryllium Science.gov (United States) Beeston, J. M.; Longhurst, G. R.; Wallace, R. S.; Abeln, S. P. 1992-10-01 Beryllium is planned for use as a neutron multiplier in the tritium breeding blanket of the International Thermonuclear Experimental Reactor (ITER). After fabricating samples of beryllium at densities varying from 80 to 100% of the theoretical density, we conducted a series of experiments to measure the effect of neutron irradiation on mechanical properties, especially strength and ductility. Samples were irradiated in the Advanced Test Reactor (ATR) to a neutron fluence of 2.6 × 10 25 n/m 2 ( E > 1 MeV) at an irradiation temperature of 75°C. These samples were subsequently compression-tested at room temperature, and the results were compared with similar tests on unirradiated specimens. We found that the irradiation increased the strength by approximately four times and reduced the ductility to approximately one fourth. Failure was generally ductile, but the 80% dense irradiated samples failed in brittle fracture with significant generation of fine particles and release of small quantities of tritium. 19. Irradiation as a quarantine treatment International Nuclear Information System (INIS) Burditt, A.K. Jr. 
1991-01-01 The use of irradiation as an alternative treatment for commodities subject to infestation by pests of quarantine importance is outlined in this article. A dose of 300 Gy or less has been found to prevent adult emergence when insect eggs or larvae are irradiated and research has shown that such doses will not affect the quality of most commodities. The use of gamma rays from cobalt-60 or caesium-137 sources, as well as electrons or X-rays from linear accelerators, has been approved for food irradiation. Irradiation facilities must meet regulations promulgated by nuclear, health and agricultural quarantine agencies with regard to location, facility design, sources, operation, personnel, dosimetry and other requirements. Education of industry operators and the general public is needed in order to gain acceptance of irradiation as a quarantine treatment. (author). 21 refs, 1 tab 20. Safer food means food irradiation International Nuclear Information System (INIS) Steele, J.H. 2000-01-01 In this article the author presents the sanitary advantages that are brought by food irradiation. OMS experts state that this technique is safe and harmless for any average global dose between 10 KGy and 100 KGy. Whenever a seminar is held on the topic, it is always concluded that food irradiation should be promoted and favoured. In France food irradiation is authorized for some kinds of products and exceptionally above a 10 KGy dose. Historically food irradiation has been hampered in its development by its classification by American Authorities as food additives in 1958 (Delanay clause). The author draws a parallel between food irradiation and pasteurization or food deep-freezing in their beginnings. (A.C.) Energy Technology Data Exchange (ETDEWEB) Beddoes, J M [Atomic Energy of Canada Ltd., Ottawa, Ontario. Commercial Products 1982-04-01 Irradiation has advantages as a method of preserving food, especially in the Third World. The author tabulates some examples of actual use of food irradiation with dates and tonnages, and tells the story of the gradual acceptance of food irradiation by the World Health Organization, other international bodies, and the U.S. Food and Drug Administration (USFDA). At present, the joint IAEA/FAO/WHO standard permits an energy level of up to 5 MeV for gamma rays, well above the 1.3 MeV energy level of /sup 60/Co. The USFDA permits irradiation of any food up to 10 krad, and minor constituents of a diet may be irradiated up to 5 Mrad. The final hurdle to be cleared, that of economic acceptance, depends on convincing the food processing industry that the process is technically and economically efficient. 2. Consumer acceptance of irradiated food Energy Technology Data Exchange (ETDEWEB) Loaharanu, P [Head, Food Preservation Section, Joint FAO/ IAEA Division of Nuclear Techniques in Food and Agriculture, Wagramerstr. 5, A-1400, Vienna (Austria) 1998-12-31 There was a widely held opinion during the 1970s and 1980s that consumers would be reluctant to purchase irradiated food, as it was perceived that consumers would confuse irradiated food with food contaminated by radionuclides. Indeed, a number of consumer attitude surveys conducted in several western countries during these two decades demonstrated that the concerns of consumers on irradiated food varied from very concerned to seriously concerned.This paper attempts to review parameters conducting in measuring consumer acceptance of irradiated food during the past three decades and to project the trends on this subject. 
It is believed that important lessons learned from past studies will guide further efforts to market irradiated food with wide consumer acceptance in the future. (Author) 3. Irradiation of spices - a review International Nuclear Information System (INIS) 2007-01-01 4. Consumer acceptance of irradiated food Energy Technology Data Exchange (ETDEWEB) Loaharanu, P. [Head, Food Preservation Section, Joint FAO/ IAEA Division of Nuclear Techniques in Food and Agriculture, Wagramerstr. 5, A-1400, Vienna (Austria) 1997-12-31 There was a widely held opinion during the 1970s and 1980s that consumers would be reluctant to purchase irradiated food, as it was perceived that consumers would confuse irradiated food with food contaminated by radionuclides. Indeed, a number of consumer attitude surveys conducted in several western countries during these two decades demonstrated that the concerns of consumers on irradiated food varied from very concerned to seriously concerned.This paper attempts to review parameters conducting in measuring consumer acceptance of irradiated food during the past three decades and to project the trends on this subject. It is believed that important lessons learned from past studies will guide further efforts to market irradiated food with wide consumer acceptance in the future. (Author) 5. Irradiation environment and materials behavior International Nuclear Information System (INIS) Ishino, Shiori 1992-01-01 Irradiation environment is unique for materials used in a nuclear energy system. Material itself as well as irradiation and environmental conditions determine the material behaviour. In this review, general directions of research and development of materials in an irradiation environment together with the role of materials science are discussed first, and then recent materials problems are described for energy systems which are already existing (LWR), under development (FBR) and to be realized in the future (CTR). Topics selected are (1) irradiation embrittlement of pressure vessel steels for LWRs, (2) high fluence performance of cladding and wrapper materials for fuel subassemblies of FBRs and (3) high fluence irradiation effects in the first wall and blanket structural materials of a fusion reactor. Several common topics in those materials issues are selected and discussed. Suggestions are made on some elements of radiation effects which might be purposely utilized in the process of preparing innovative materials. (J.P.N.) 69 refs 6. Food irradiation: the 'experts' choice International Nuclear Information System (INIS) Watts, P. 1990-01-01 The UK Government has decided to lift the ban on food irradiation. The proponents of food irradiation claim it is an effective and safe means of preserving food, at minimum risk to the public. However, the prospect of irradiated food being on the shelves has created considerable opposition from environmental, consumer, public health groups and trade unions. The long list of unanswered health and safety questions means the public could be exposed to a whole new range of risks. The consumer is justified as saying ''if food has to be irradiated, what was wrong with it, good food does not need irradiating''. The answer to food contamination is improved hygiene and training in farm, factory and shop. (author) 7. Pallet irradiators for food processing International Nuclear Information System (INIS) McKinnon, R.G.; Chu, R.D.H. 
1985-01-01 This paper looks at the various design concepts for the irradiation processing of food products, with particular emphasis on handling the products on pallets. Pallets appear to offer the most attractive method for handling foods from many considerations. Products are transported on pallets. Warehouse space is commonly designed for pallet storage and, if products are already palletized before and after irradiation, then labour could be saved by irradiating on pallets. This is also an advantage for equipment operation since a larger carrier volume means lower operation speeds. Different pallet irradiator design concepts are examined and their suitability for several applications is discussed. For example, low product holdup for fast turn-around will be a consideration for those operating an irradiation 'service' business; others may require a very large source where efficiency is the primary requirement, and this will not be consistent with low holdup. The radiation performance characteristics and processing costs of these machines are discussed. (author)
8. Eatability of the irradiated food International Nuclear Information System (INIS) Luna C, P.C. 1992-05-01 A food is edible and innocuous when it has acceptable nutritional quality and is toxicologically and microbiologically safe for human consumption. No preservation treatment can guarantee this absolutely. As with other preservation methods, irradiation produces biological, chemical and physical changes in the treated food. To check whether such changes could harm the health of the consumer, extensive studies have been carried out to evaluate the safety of irradiated foods. This work reviews the most important toxicity studies on the wholesomeness of irradiated foods, presented in chronological order. In summary, a large body of evidence now demonstrates beyond doubt that foods irradiated with doses up to 10 kGy are fit for human consumption, so it can be concluded that a safety margin exists for consuming irradiated foods. (Author)
9. Consumer acceptance of irradiated food International Nuclear Information System (INIS) Loaharanu, P. 1997-01-01 There was a widely held opinion during the 1970s and 1980s that consumers would be reluctant to purchase irradiated food, as it was perceived that consumers would confuse irradiated food with food contaminated by radionuclides. Indeed, a number of consumer attitude surveys conducted in several western countries during these two decades demonstrated that the concerns of consumers on irradiated food varied from very concerned to seriously concerned. This paper attempts to review parameters used in measuring consumer acceptance of irradiated food during the past three decades and to project the trends on this subject. It is believed that important lessons learned from past studies will guide further efforts to market irradiated food with wide consumer acceptance in the future. (Author)
International Nuclear Information System (INIS) Wiedersich, H.; Okamoto, P.R.; Lam, N.Q. 1977-01-01 Irradiation at elevated temperature induces redistribution of the elements in alloys on a microstructural level. This phenomenon is caused by differences in the coupling of the various alloy constituents to the radiation-induced defect fluxes. A simple model of the segregation process based on coupled reaction-rate and diffusion equations is discussed.
The model gives a good description of the experimentally observed consequences of radiation-induced segregation, including enrichment or depletion of solute elements near defect sinks such as surfaces, voids and dislocations; precipitation of second phases in solid solutions; precipitate redistribution in two-phase alloys; and effects of defect-production rates on void-swelling rates in alloys with minor solute additions International Nuclear Information System (INIS) Goehrich, K.; Vogt, H. 1979-01-01 This patent describes a tube for irradiation equipment for limiting an emergent beam, with a baseplate, possessing a central aperture, intended for attaching to the equipment, as well as four carrier plates, each of which possesses a limiting edge and a sliding edge located at right angles thereto. The carrier plates are located parallel to the baseplate, the limiting edge of each carrier plate resting against the sliding edge of the adjacent carrier plate and each of the two mutually opposite pairs of carrier plates being displaceable, parallel to the direction of its sliding edges and symmetrically to the center of the transmission aperture, for the purpose of continuously varying the transmission aperture defined by the limiting edges, during which displacement each of the displaced carrier plates carries with it the carrier plate, resting against the limiting edge of the former plate, parallel to the direction of the limiting edge of the latter plate. 8 claims International Nuclear Information System (INIS) Torronen, K.; Pelli, R.; Planman, T.; Valo, M. 1993-01-01 Mitigation methods for reducing the irradiation damage on pressure vessel materials are reviewed: load leakage loading schemes are commonly used in PWRs to mitigate reactor pressure vessel embrittlement; dummy assemblies have been applied in WWER 440-type and in some old western power plants, when exceptional fast embrittlement has been encountered; shielding of the pressure vessel has been developed, but is not in common use; pre-stressing the pressure vessel has been proposed for preventing PTS failures, but its applicability is not yet demonstrated. The large number of successful annealing treatments performed in WWER 440 type reactors as well as research on the effects of annealing treatments suggest applications for western PWRs. The emergency core cooling systems have been modified in WWER 440-type reactors in connection with other mitigation measures. (authors). 37 refs., 18 figs., 2 tabs International Nuclear Information System (INIS) Reiley, T.C.; Jung, P. 1977-01-01 The results to date in the area of radiation enhanced deformation using beams of light ions to simulate fast neutron displacement damage are reviewed. A comparison is made between these results and those of in-reactor experiments. Particular attention is given to the displacement rate calculations for light ions and the electronic energy losses and their effect on the displacement cross section. Differences in the displacement processes for light ions and neutrons which may effect the irradiation creep process are discussed. The experimental constraints and potential problem areas associated with these experiments are compared to the advantages of simulation. Support experiments on the effect of thickness on thermal creep are presented. A brief description of the experiments in progress is presented for the following laboratories: HEDL, NRL, ORNL, PNL, U. 
of Lowell/MIT in the United States, AERE Harwell in the United Kingdom, CEN Saclay in France, GRK Karlsruhe and KFA Julich in West Germany Energy Technology Data Exchange (ETDEWEB) Torronen, K; Pelli, R; Planman, T; Valo, M [Technical Research Centre of Finland, Jyvaeskylae (Finland). Combustion and Thermal Engineering Lab. 1994-12-31 Mitigation methods for reducing the irradiation damage on pressure vessel materials are reviewed: load leakage loading schemes are commonly used in PWRs to mitigate reactor pressure vessel embrittlement; dummy assemblies have been applied in WWER 440-type and in some old western power plants, when exceptional fast embrittlement has been encountered; shielding of the pressure vessel has been developed, but is not in common use; pre-stressing the pressure vessel has been proposed for preventing PTS failures, but its applicability is not yet demonstrated. The large number of successful annealing treatments performed in WWER 440 type reactors as well as research on the effects of annealing treatments suggest applications for western PWRs. The emergency core cooling systems have been modified in WWER 440-type reactors in connection with other mitigation measures. (authors). 37 refs., 18 figs., 2 tabs. International Nuclear Information System (INIS) Guy, R. 1978-01-01 The transport container for irradiated or used nuclear fuel is provided with an identical heat shield against fires on the top and bottom sides. Each heat shield consists of two inner nickel plates, whose contact surfaces are polished to a mirror finish and an outer plate of stainless steel. The nickel plate on the box is spot welded to it while the second nickel plate is spot welded to the steel plate. Both together are in turn welded so as to be leaktight to the edges of the box. For extreme heat effects and based on the different (bimetal) coefficients of expansion, the steel plate with the nickel plate attached to it bulges away from the box. The second nickel plate remains at the box, so that a subpressure space is formed with the mirror nickel surfaces. The heat radiation and heat conduction to the box are greatly reduced by this. (DG) [de International Nuclear Information System (INIS) Yueh-jen, Y.; Jin-lai, Z.; Shao-chun, L. 1983-01-01 Occasionally, in China, marine products can not be provided for the markets in good quality, for during the time when they are being transported from the sea port to inland towns or even at the time when they are unloaded from the ship, they are beginning to spoil. Obviously, it is very important that appropriate measures should be taken to prevent them from decay. Our study has proved that the shelf life of fresh Flatfish (Cynoglossue robustus) and Silvery pomfret (stromateoides argenteus), which, packed in sealed containers, are irradiated by 1.5 kGy, 2.2 kGy and 3.0 kGy, can be stored for about 13 to 26 days at 3 deg to 5 deg C. (author) Energy Technology Data Exchange (ETDEWEB) Halperin, E.C.; Knechtle, S.J.; Harland, R.C.; Yamaguchi, Yasua; Sontag, M.; Bollinger, R.R. (Duke Univ., Durham, NC (USA). Dept. of Radiology Duke Univ., Durham, NC (USA). Dept. of Microbiology and Immunology) 1990-05-01 Xenogeneic transplantation (XT) is the transplantation of organs or tissues from a member of one species to a member of another. Mammalian species frequently have circulating antibody which is directed against the foreign organ irrespective of known prior antigen exposure. 
This antibody may lead to hyperacute rejection once it ensues so efforts must be directed towards eliminating the pre-existing antibody. In those species in which hyperacute rejection of xenografts does not occur, cell-mediated refection, similar to allograft rejection, may occur. It is in the prevention of this latter form of refection that radiation is most likely to be beneficial in XT. Both total lymphoid irradiation (TLI) and selective lyphoid irradiation (LSI) have been investigated for use in conjunction with XT. TLI has contributed to the prolongation of pancreatic islet-cell xenografts from hamsters to rats. TLI has also markedly prolonged the survival of cardiac transplants from hamsters to rats. A more modest prolongation of graft survival has been seen with the use of TLI in rabbit-to-rat exchanges. Therapy with TLI, cyclosporine, and splenectomy has markedly prolonged the survival of liver transplants from hamsters to rats, and preliminary data suggest that TLI may contribute to the prolongation of graft survival in the transplantation of hearts from monkeys to baboons. SLI appears to have prolonged graft survival, when used in conjunction with anti-lymphocyte globulin, in hamster-to-rat cardiac graft exchanges. The current state of knowledge of the use of irradiaiton in experimental XT is reviewed. (author). 38 refs.; 1 fig.; 5 tabs. International Nuclear Information System (INIS) Halperin, E.C.; Knechtle, S.J.; Harland, R.C.; Yamaguchi, Yasua; Sontag, M.; Bollinger, R.R.; Duke Univ., Durham, NC 1990-01-01 Xenogeneic transplantation (XT) is the transplantation of organs or tissues from a member of one species to a member of another. Mammalian species frequently have circulating antibody which is directed against the foreign organ irrespective of known prior antigen exposure. This antibody may lead to hyperacute rejection once it ensues so efforts must be directed towards eliminating the pre-existing antibody. In those species in which hyperacute rejection of xenografts does not occur, cell-mediated refection, similar to allograft rejection, may occur. It is in the prevention of this latter form of refection that radiation is most likely to be beneficial in XT. Both total lymphoid irradiation (TLI) and selective lyphoid irradiation (LSI) have been investigated for use in conjunction with XT. TLI has contributed to the prolongation of pancreatic islet-cell xenografts from hamsters to rats. TLI has also markedly prolonged the survival of cardiac transplants from hamsters to rats. A more modest prolongation of graft survival has been seen with the use of TLI in rabbit-to-rat exchanges. Therapy with TLI, cyclosporine, and splenectomy has markedly prolonged the survival of liver transplants from hamsters to rats, and preliminary data suggest that TLI may contribute to the prolongation of graft survival in the transplantation of hearts from monkeys to baboons. SLI appears to have prolonged graft survival, when used in conjunction with anti-lymphocyte globulin, in hamster-to-rat cardiac graft exchanges. The current state of knowledge of the use of irradiaiton in experimental XT is reviewed. (author). 38 refs.; 1 fig.; 5 tabs 19. Hydrogen-plasticity in the austenitic alloys; Interactions hydrogene-plasticite dans les alliages austenitiques Energy Technology Data Exchange (ETDEWEB) De lafosse, D. [Ecole Nationale Superieure des Mines, Lab. 
PECM-UMR CNRS 5146, 42 - Saint-Etienne (France) 2007-07-01 This presentation deals with hydrogen effects under stress corrosion in austenitic alloys. The objective is to validate and characterize experimentally the potential and the limits of an approach based on an elastic theory of crystal defects. The first part is devoted to the macroscopic characterization of dynamic hydrogen-dislocation interactions by aging tests. Then the hydrogen influence on plasticity is evaluated, using classical analytical models of the elastic theory of dislocations. The hydrogen influence on the flow stress of bcc materials is analyzed experimentally with model materials. (A.L.B.)
20. Cytologic studies on irradiated gastric cancer cells Energy Technology Data Exchange (ETDEWEB) Isono, S; Takeda, T; Amakasu, H; Asakawa, H; Yamada, S [Miyagi Prefectural Adult Disease Center, Natori (Japan)] 1981-06-01 The smears of the biopsy and resected specimens obtained from 74 cases of irradiated gastric cancer were cytologically analyzed for effects of irradiation. Irradiation increased the amount of both necrotic materials and neutrophils in the smears. Cancer cells were decreased in number almost in inverse proportion to irradiation dose. Clusters of cancer cells shrank in size and cells were less stratified after irradiation. Irradiated cytoplasms were swollen, vacuolated and stained abnormally. Irradiation with less than 3,000 rads gave rise to swelling of cytoplasms in almost all cases. Nuclei became enlarged, multiple, pyknotic and/or stained pale after irradiation. Nuclear swelling was more remarkable in cancer cells of differentiated adenocarcinomas.
1. Quality control of static irradiation processing products International Nuclear Information System (INIS) Bao Jianzhong; Chen Xiulan; Cao Hong; Zhai Jianqing 2002-01-01 Based on the irradiation processing practice of the nuclear technique application laboratory of Yangzhou Institute of Agricultural Science, the quality control of irradiation processing products is discussed.
International Nuclear Information System (INIS) Tencheva, S.; Todorov, S. 1975-01-01 The influence of gamma rays on tomatoes picked in a pink-red ripening stage, good for consumption, is studied. For that purpose tomatoes of the 'Pioneer 2' variety packed in perforated 500 g plastic bags were irradiated on a gamma device (Cobalt-60) at a dose rate of 1900 rad/min with doses of 200 or 300 krad. Samples were stored after irradiation at room temperature (20-22°C). Microbiological studies demonstrated that 44 resp. 99.96 per cent of the initial number of microorganisms was destroyed after irradiation with 200 resp. 300 krad. The time required for the number of microorganisms to be restored was accordingly increased. Irradiation delayed tomato ripening by 4 to 6 days, demonstrable by the reduced content of the basic staining substances - carotene and lycopene. Immediately after irradiation the ascorbic acid content was reduced by an average of 13 per cent. After 18 days the amount of ascorbic acid in irradiated tomatoes had increased to a level higher than the starting one; this is attributed to reductone formation during irradiation. The elevated total sugar content, shown to be invert sugar, was due to further tomato ripening. (Ch.K.)
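The microbial reduction figures quoted in the tomato record above (44% destroyed at 200 krad, 99.96% at 300 krad) can be turned into an approximate decimal reduction dose. The following is a minimal sketch, not part of the original study: the helper name d10_from_survival is mine, and it assumes simple log-linear (first-order) inactivation kinetics, which the abstract does not claim.

```python
import math

def d10_from_survival(dose_kGy, fraction_destroyed):
    """Approximate decimal reduction dose (D10): the dose that cuts the
    population tenfold, assuming log-linear inactivation kinetics."""
    surviving_fraction = 1.0 - fraction_destroyed
    decades_killed = -math.log10(surviving_fraction)
    return dose_kGy / decades_killed

# Figures reported above: 200 krad = 2 kGy destroyed 44%, 300 krad = 3 kGy destroyed 99.96%
print(round(d10_from_survival(2.0, 0.44), 2))    # ~7.94 kGy per decade at the lower dose
print(round(d10_from_survival(3.0, 0.9996), 2))  # ~0.88 kGy per decade at the higher dose
```

The large gap between the two estimates suggests the flora's response over this dose range is not log-linear (for example, a radiation-resistant sub-population), which is consistent with the abstract reporting the two doses separately rather than quoting a single D10.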
3. Method of detecting irradiated pepper International Nuclear Information System (INIS) Doumaru, Takaaki; Furuta, Masakazu; Katayama, Tadashi; Toratani, Hirokazu; Takeda, Atsuhiko 1989-01-01 Spices, represented by pepper, are generally contaminated by microorganisms, and for using them as foodstuffs some sterilizing treatment is indispensable. However, heating is not suitable for spices; accordingly, ethylene oxide gas sterilization has inevitably been carried out, but its carcinogenic property is a problem. Food irradiation is a technology for killing the microorganisms and noxious insects which cause rotting and spoiling of foods and for preventing germination; it is an energy-conserving method without the fear of residual chemicals and is therefore most suitable for the sterilization of spices. For irradiation at doses lower than 10 kGy, toxicity testing is not required for any food, and the irradiation of spices is permitted in 20 countries. However, in order to establish an international distribution organization for irradiated foods, public information for consumers and the development of means of detecting irradiation are important subjects. The authors used pepper and examined whether the hydrogen generated by irradiation remains in the seeds and can be detected. The experimental method and the results are reported. From the samples without irradiation, hydrogen was scarcely detected. The quantity of hydrogen generated was proportional to dose. The only measuring instrument required is a gas chromatograph. (K.I.) A dose-calibration sketch based on this proportionality is given after the records below.
4. Phytosanitary irradiation - Development and application Science.gov (United States) Hallman, Guy J.; Loaharanu, Paisan 2016-12-01
5. Irradiation preservation of Korean shellfish International Nuclear Information System (INIS) Chung, J.R.; Kim, S.I.; Lee, M.C. 1976-01-01
6. Detection of some irradiated foods International Nuclear Information System (INIS) NASR, E.H.A 2009-01-01 This study was performed to investigate the possibility of using two rapid methods, namely Supercritical Fluid Extraction (SFE) and Direct Solvent Extraction (DSE), for extraction and isolation of 2-dodecylcyclobutanone (2-DCB), followed by detection of this chemical marker by gas chromatography, and to use this marker for identification of some irradiated foods containing fat (beef meat, chicken, camembert cheese and avocado) post-irradiation and during cold and frozen storage. Consequently, this investigation was designed to study the following main points: (1) the possibility of applying the SFE-GC and DSE-GC rapid methods for the detection of 2-DCB in the irradiated fat-containing foods (beef meat, chicken, camembert cheese and avocado fruits) under investigation; (2) the effect of gamma irradiation doses on the concentration of the 2-DCB chemical marker post-irradiation and during frozen storage at -18°C of chicken and beef meats for 12 months; (3) the effect of gamma irradiation doses on the concentration of the 2-DCB chemical marker post-irradiation and during cold storage at 4±1°C of camembert cheese and avocado fruits for 20 days.
7. Irradiation effects on perfluorinated polymers International Nuclear Information System (INIS) Lappan, U.; Geissler, U.; Haeussler, L.; Pompe, G.; Scheler, U.; Lunkwitz, K. 2002-01-01
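The pepper-detection record above states that the hydrogen trapped in irradiated seeds is proportional to dose and is measured with a gas chromatograph. The sketch below shows how such a strictly proportional calibration could be used to estimate an unknown dose. It is only illustrative: the peak-area numbers are placeholders, not data from the paper, and the fit-through-origin choice follows from the stated proportionality rather than from the authors' own analysis.

```python
# Hypothetical GC calibration for the hydrogen-vs-dose proportionality reported above.
# The (dose in kGy, H2 peak area) pairs below are placeholders, not measured values.
calibration = [(1.0, 12.1), (3.0, 35.8), (5.0, 60.3), (10.0, 119.5)]

# Least-squares slope for a line forced through the origin (area = k * dose),
# which is what strict proportionality implies.
k = sum(d * a for d, a in calibration) / sum(d * d for d, a in calibration)

def estimate_dose(peak_area):
    """Estimate the absorbed dose (kGy) of an unknown sample from its H2 peak area."""
    return peak_area / k

print(round(k, 2))                    # sensitivity: peak area per kGy
print(round(estimate_dose(48.0), 2))  # dose estimate for a hypothetical unknown sample
```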
8. Irradiation's potential for preserving food International Nuclear Information System (INIS) Morrison, R.M. 1986-01-01
International Nuclear Information System (INIS) Ishitsuka, Etsuo; Kawamura, Hiroshi 1995-01-01 Beryllium is expected to serve as a neutron multiplier and plasma-facing material in the fusion reactor, and neutron irradiation data on the properties of beryllium up to 800°C are needed for the engineering design. The acquisition of data on tritium behavior, swelling, and thermal and mechanical properties is the first priority in the ITER design. A facility for the post-irradiation examination of neutron-irradiated beryllium was constructed in the hot laboratory of the Japan Materials Testing Reactor to obtain the engineering design data mentioned above. This facility consists of four glove boxes, a dry air supplier, a tritium monitoring and removal system, and a storage box for neutron-irradiated samples. Beryllium handling is restricted by the amounts of tritium (7.4 GBq/day) and 60Co (7.4 MBq/day).
International Nuclear Information System (INIS) 1991-01-01 This fact sheet considers the issue of the irradiation of food containing food additives or pesticide residues. The conclusion is that there is no health hazard posed by radiolytic products of pesticides or food additives. 1 ref
11. Irradiated topaz in the reactor International Nuclear Information System (INIS) Helal, A.I.; Zahran, N.F.; Gomaac, M.A.M.; Salama, S. 2007-01-01 Gem stones are stones whose beauty is based on their color, transparency, brilliance and crystalline perfection. Topaz is used mainly as a gemstone; it is the most common irradiated gem on the market. High-energy particles such as neutrons have enough energy to produce color centers. Irradiation is most often carried out in nuclear reactors (high-energy neutrons). Irradiation of topaz in the Egyptian research reactor (ETRR-2) by neutrons changes its cloudy white color to a reddish pink, which can be changed to blue by heating.
12. Irradiation a boon to farmers International Nuclear Information System (INIS) McVeigh, S. 1981-01-01 Irradiation sterilization is emerging as a process of tremendous value to the food marketing industry. Much of the latest research has been done by the Atomic Energy Board at Pelindaba, using the strong gamma rays produced by cobalt-60 to kill the pathogens, microbes, small insects and other food-destroying agents usually found in food and fruit. Irradiation also helps delay ripening and ageing to a slight degree, a property of great value to food and fruit exporters. The advantages of various irradiated foods are briefly discussed.
13. Food irradiation dispelling the doubts International Nuclear Information System (INIS) Nair, P.M. 1994-01-01 Irradiation processing of food eliminates the use of harmful chemicals for the treatment of food items, and the produce can be kept fresh. Another important aspect of this process is that it can help to stabilize prices and give better remuneration to the farmer and a hygienic product to the consumer. The already growing Indian nuclear industry can provide the sources as well as the technology needed for the installation of irradiation facilities. The pros and cons of the irradiation process are described. (M.K.V.)
14. ESR signals of irradiated insects International Nuclear Information System (INIS) Ukai, Mitsuko; Kameya, Hiromi; Imamura, Taro; Miyanoshita, Akihiro; Todoriki, Setsuko; Shimoyama, Yuhei 2009-01-01 Analysis of irradiated insects using Electron Spin Resonance (ESR) spectroscopy was reported.
The insects were maize weevil, red flour beetle, Indian meal moth and cigarette beetle that are hazardous to crops. The ESR spectra were consisted of a singlet at g=2 and a sextet centered at the similar g-value. The singlet signal is due to an organic free radical. The sextet signal is attributable to the hyperfine interactions from Mn 2+ ions. Upon irradiation, new signals were not detected. The relaxation times, T 1 and T 2 , showed no variations before and after irradiation. (author) 15. Therapeutic irradiation and brain injury International Nuclear Information System (INIS) Sheline, G.E.; Wara, W.M.; Smith, V. 1980-01-01 This is a review and reanalysis of the literature on adverse effects of therapeutic irradiation on the brain. Reactions have been grouped and considered according to time of appearance. The emphasis of the analysis is on delayed reactions, especially those that occur from a few months to several years after irradiation. All dose specifications were converted into equivalent megavoltage rads. The data were analyzed in terms of total dose, overall treatment time and number of treatment fractions. Also discussed were acute radiation reactions, early delayed radiation reactions, somnolence and leukoencephalopathy post-irradiation/chemotherapy and combined effects of radiation and chemotherapy 16. Food irradiation: Facts or fiction? International Nuclear Information System (INIS) Loaharanu, P. 1990-01-01 Food irradiation is at a political crossroad. In one direction, it is moving forward supported by overwhelming scientific evidence of its safety and benefits to economy and health. In the opposite direction, it threatens to be derailed by misleading claims about its safety and usefulness. Whether people will ultimately benefit from the use of irradiation to help fight serious food problems, or whether they will allow the technology to go to waste, will be determined by how successful people are in separating the facts from the fiction of food irradiation 17. Luminescence detection of irradiated foods International Nuclear Information System (INIS) Sanderson, D.C.W. 1990-01-01 The need for forensic tests to identify irradiated foods has been widely recognised at a time of growing international trade in such products and impending changes in UK and EEC legislation to control the process. This paper outlines the requirements for and of such tests, and discusses recent developments in luminescence approaches aimed at meeting the needs of public analysts, retailers and consumers. Detecting whether or not food has been irradiated, and if so to what dose, is one of the challenges which food irradiation poses to the scientist. (author) 18. Bone cell viability after irradiation International Nuclear Information System (INIS) Jacobsson, M.; Kaelebo, P.; Tjellstroem, A.; Turesson, I.; Goeteborg Univ.; Goeteborg Univ.; Goeteborg Univ. 1987-01-01 Adult rabbits were irradiated to one proximal tibial metaphysis while the contralateral tibia served as a control. Each animal was thus its own control. Single doses of 15, 25 and 40 Gy 60 Co were used. The follow-up time was 11 to 22 weeks after irradiation. A histochemical method, recording diaphorase (NADH 2 and NADPH 2 ) activity in osteocytes, was employed. This method is regarded as superior to conventional histology. No evidence of osteocyte death was found even after 22 weeks following 40 Gy irradiation. This is interpreted as an indication that the osteocytes, which are end stage cells, are relatively radioresistant. (orig.) 19. 
Irradiation of Kensington Pride mangoes International Nuclear Information System (INIS) McLauchlan, R.L.; Mitchell, G.E.; Johnson, G.I.; Wills, P.A. 1990-01-01 Mangoes (cv. Kensington Pride) exhibited delayed ripening and increased external injury (lenticel damage) following irradiation at 300 or 600 Gy but not at 75 Gy. Altering the conditions of irradiation (lower temperature, nitrogen atmosphere, lower dose rate) had no effect in alleviating that injury. Some chemical constituents were also affected to minor degrees but eating quality was not. Irradiation of mature-green, preclimacteric mangoes at doses of 300 Gy or more is not recommended; doses of 75 Gy can be used without adversely affecting marketability. (author)
20. Progress in food irradiation: Nigeria International Nuclear Information System (INIS) 1978-01-01 The effect of gamma radiation on Aspergillus flavus and some of its toxic metabolites has been studied. This involved the determination of the radio-sensitivity of aflatoxins to gamma radiation and the toxicity of irradiated aflatoxins, the effect of irradiation on the formation of aflatoxins in some Nigerian foodstuffs and on the macronutrients of a soya-gari diet, and the isolation and characterisation of radiation-induced mutants in A. flavus. A research project is now underway to investigate the effect on nutrients in foodstuffs following the destruction of fungal toxins (aflatoxins) and fungi by gamma irradiation (OGBADU, Ahmadu Bello University, Zaria). (orig.) [de
1. International status of food irradiation International Nuclear Information System (INIS) Roberts, P.B. 1982-09-01 Recent international moves that are likely to result in an increasing acceptance of irradiated foods are reviewed. Particular attention is given to the activities of the FAO, WHO, Codex Alimentarius and to attitudes in the United States and the Asian-Pacific region. In 1979, the Codex Alimentarius Commission adopted a Recommended General Standard for Irradiated Food. A resume is given of a revised version of the standard that is presently under consideration. Remaining barriers to trade in irradiated food are also briefly discussed, such as legal and regulatory problems, labelling, public acceptance and economic viability.
2. Post irradiation examination technology exchange International Nuclear Information System (INIS) Sozawa, Shizuo; Ito, Masayasu; Taguchi, Taketoshi; Nakagawa, Tetsuya; Lee, Hyung-Kwon 2012-01-01 Under the KAERI and JAEA agreement, as part of Program 18 (Post Irradiation Examination (PIE) and Evaluation Technique of Irradiated Materials), an eddy current test was proposed as a round robin test, and it has been progressing in both organizations in order to enhance post-irradiation examination technology. Up to now, several data sets have been obtained by both PIE facilities. In this paper, the round robin test program is presented, and the data obtained are shown and discussed with regard to their applicability as a nondestructive test in the hot cell. (author)
3. Food irradiation, profits and limitations International Nuclear Information System (INIS) Luna C, P.C. 1992-05-01 The utility of irradiation for overcoming diverse problems of food losses has been demonstrated in numerous investigations, which have confirmed the value and the safety of irradiated foods. The quantity of energy applied to each food is a function of the desired effect. In this document, a guide to the practical application and utility of the irradiation process for different foods is given, together with the suggested average doses. Among the limitations of this technology are its costs and the fact that it cannot be applied to some fresh foods. (Author)
Energy Technology Data Exchange (ETDEWEB) Marks, Lawrence B.; Bentel, Gunilla; Sherouse, George W.; Spencer, David P.; Light, Kim 2015-01-15 A case study is presented. Craniospinal radiotherapy and a three-field pineal boost for trilateral retinoblastoma were delivered to a patient previously irradiated for ocular retinoblastoma. The availability of CT-based three-dimensional treatment planning provided the capability of identifying the previously irradiated volume as a three-dimensional anatomic structure and of designing a highly customized set of treatment beams that minimized reirradiation of that volume.
International Nuclear Information System (INIS) Wanke, R. 1997-01-01 I. Factors currently influencing and advancing opportunities for food irradiation include: heightened incidence and awareness of food-borne illnesses and their causes; concerns about ensuring food safety in international as well as domestic trade; and regulatory actions regarding commonly used fumigants/pesticides, e.g. methyl bromide (MeBr). II. Modern irradiator design: the SteriGenics 'Mini Cell'. A new design for new opportunities; faster installation of the facility; operationally and space efficient; provides local 'onsite control'. Red meat: a currently developing opportunity. (Author)
International Nuclear Information System (INIS) 1991-01-01 This fact sheet considers the safety of industrial irradiation facilities. Although there have been accidents, none of them has endangered public health or environmental safety, and the radiation processing industry is considered to have a very good safety record. Gamma irradiators do not produce radioactive waste, and the radiation sources at the facilities cannot explode nor in any other way release radioactivity into the environment. 3 refs
International Nuclear Information System (INIS) Reiley, T.C.; Shannon, R.H.; Auble, R.L. 1980-01-01 Accelerator-produced charged-particle beams have advantages over neutron irradiation for studying radiation effects in materials, the primary advantage being the ability to control precisely the experimental conditions and improve the accuracy in measuring effects of the irradiation. An apparatus has recently been built at ORNL to exploit this advantage in studying irradiation creep. These experiments employ a beam of 60 MeV alpha particles from the Oak Ridge Isochronous Cyclotron (ORIC). The experimental approach and capabilities of the apparatus are described. The damage cross section, including events associated with inelastic scattering and nuclear reactions, is estimated. The amount of helium that is introduced during the experiments through inelastic processes and through backscattering is reported. Based on the damage rate, the damage processes and the helium-to-dpa ratio, the degree to which fast reactor and fusion reactor conditions may be simulated is discussed. Recent experimental results on the irradiation creep of type 316 stainless steel are presented, and are compared to light ion results obtained elsewhere. These results include the stress and temperature dependence of the formation rate under irradiation.
The results are discussed in relation to various irradiation creep mechanisms and to damage microstructure as it evolves during these experiments. (orig.) Energy Technology Data Exchange (ETDEWEB) Boice, John D. Jr. [International Epidemiology Institute, Rockville, MD (United States)]. E-mail: [email protected]; Robison, Leslie L. [University of Minnesota, Minneapolis, MN (United States); Mertens, Ann [University of Texas M D Anderson Cancer Center, Houston, TX (United States); Stovall, Marilyn; Green, Daniel M.; Mulvihill, John J.; ); Roswell Park Cancer Institute, Buffalo, NY (United States); University of Oklahoma, Oklahoma City, OK (US)) 2000-09-01 Little (1999) recently reviewed the evidence that paternal preconception irradiation in the Sellafield workforce (Parker et al 1999) and among Japanese atomic bomb survivors (Otake et al 1990) might be associated with an increased risk of stillbirth. He concluded that the association reported for radiation workers was statistically incompatible with the absence of an association seen among the exposed Japanese parents. These studies and analyses illustrate the considerable difficulty in assessing stillbirths conceived by men exposed to ionising radiation at work. For example, occupational doses may not be sufficiently large to result in a detectable effect and maternal factors that are associated with stillbirths and important to adjust for may not be available. These papers also bring to focus a relevant but not well-studied public health issue, namely, what are the reproductive risks for men and women exposed to potential mutagens? We wish to emphasise here the theoretical and practical advantages of addressing this issue in persons not with low dose occupational or acute atomic bomb exposures, but with higher dose medical experiences; in particular, in survivors of cancers of childhood, adolescents, and young adulthood (Blatt 1999, Bryne et al 1998, Sankila et al 1998, Green et al 1997, Hawkins and Stevens 1996). Letter-to-the-editor. Science.gov (United States) Hallman, Guy J. 2012-07-01 The history of the development of generic phytosanitary irradiation (PI) treatments is discussed beginning with its initial proposal in 1986. Generic PI treatments in use today are 150 Gy for all hosts of Tephritidae, 250 Gy for all arthropods on mango and papaya shipped from Australia to New Zealand, 300 Gy for all arthropods on mango shipped from Australia to Malaysia, 350 Gy for all arthropods on lychee shipped from Australia to New Zealand and 400 Gy for all hosts of insects other than pupae and adult Lepidoptera shipped to the United States. Efforts to develop additional generic PI treatments and reduce the dose for the 400 Gy treatment are ongoing with a broad based 5-year, 12-nation cooperative research project coordinated by the joint Food and Agricultural Organization/International Atomic Energy Agency Program on Nuclear Techniques in Food and Agriculture. Key groups identified for further development of generic PI treatments are Lepidoptera (eggs and larvae), mealybugs and scale insects. A dose of 250 Gy may suffice for these three groups plus others, such as thrips, weevils and whiteflies. Energy Technology Data Exchange (ETDEWEB) Hallman, Guy J [United States Department of Agriculture, Agricultural Research Service, Weslaco, TX (United States) 2013-01-15 The history of the development of generic phytosanitary irradiation (PI) treatments is discussed beginning with its initial proposal in 1986. 
Generic PI treatments in use today are 150 Gy for all hosts of Tephritidae, 250 Gy for all arthropods on mango and papaya shipped from Australia to New Zealand, 300 Gy for all arthropods on mango shipped from Australia to Malaysia, 350 Gy for all arthropods on lychee shipped from Australia to New Zealand and 400 Gy for all hosts of insects other than pupae and adult Lepidoptera shipped to the United States. Efforts to develop additional generic PI treatments and reduce the dose for the 400 Gy treatment are ongoing with a broad based 5-year, 12-nation cooperative research project coordinated by the joint Food and Agricultural Organization/International Atomic Energy Agency Program on Nuclear Techniques in Food and Agriculture. Key groups identified for further development of generic PI treatments are Lepidoptera (eggs and larvae), mealybugs and scale insects. A dose of 250 Gy may suffice for these three groups plus others, such as thrips, weevils and whiteflies. (author) 11. Irradiation of spices and herbs International Nuclear Information System (INIS) Saul, C. 1985-01-01 Changes in the microbiology, chemistry, mutagenicity and sensory of spices due to gamma irradiation are discussed. This process has been shown to be safe and wholesome with no effect on product quality or flavour 12. Food irradiation in South Africa International Nuclear Information System (INIS) De Wet, W.J. 1982-01-01 The article indicates the necessity for additional methods of food preservation and emphasises that food irradiation is developing into an important method of food preservation because it has been proved scientifically and practically that food irradiation can be applied effectively; also that there is absolute certainty that radiation-processed products are safe and nutritious and that such food is acceptable to the consumer and food trade, also with a view to costs. It discusses the joint food irradiation programme of the AEB and Department of Agriculture and Fisheries and points out that exemption for the irradiation of potatoes was already obtained in 1977 and later for mango's, paw-paws, chicken, onions, garlic and strawberries. Conditional exemption was obtained for avocado's and dried bananas. Other food-kinds on which research is being continued are grapes, melons, mushrooms, stone fruit and spices International Nuclear Information System (INIS) 2006-01-01 Radiation technology is one of the most important fields which the IAEA supports and promotes, and has several programmes that facilitate its use in the developing Member States. In view of this mandate, this Booklet on 'Gamma Irradiators for Radiation Processing' is prepared which describes variety of gamma irradiators that can be used for radiation processing applications. It is intended to present description of general principles of design and operation of the gamma irradiators available currently for industrial use. It aims at providing information to industrial end users to familiarise them with the technology, with the hope that the information contained here would assist them in selecting the most optimum irradiator for their needs. Correct selection affects not only the ease of operation but also yields higher efficiency, and thus improved economy. The Booklet is also intended for promoting radiation processing in general to governments and general public 14. Irradiation of Northwest agricultural products International Nuclear Information System (INIS) Eakin, D.E.; Tingey, G.L.; Anderson, D.B.; Hungate, F.P. 
1985-01-01 Irradiation of food for disinfestation and preservation is increasing in importance because of increasing resrictions on various chemical treatments. Irradiation treatment is of particular interest in the Northwest because of a growing supply of agricultural products and the need to develop new export markets. Several products have, or could potentially have, significant export markets if stringent insect control procedures are developed and followed. Due to the recognized potential benefits of irradiation, Pacific Northwest Laboratory (PNL) is conducting this program to evaluate the benefits of using irradiation on Northwest agricultural products under the US Department of Energy (DOE) Defense Byproducts Production and Utilization Program. Commodities currently included in the program are cherries, apples, asparagus, spices, hay, and hides 15. Irradiation emerges as processing alternative International Nuclear Information System (INIS) Hatfield, D. 1985-01-01 CERN Document Server Shabalin, E P; Kulikov, S A; Kulagin, E N; Melihov, V V; Belyakov, A A; Golovanov, L B; Borzunov, Yu T; Konstantinov, V I; Androsov, A V 2002-01-01 The URAM-2 irradiation facility has been built and mounted at the channel No. 3 of the IBR-2 reactor. It was constructed for study of radiolysis effects by fast neutron irradiation in some suitable for effective cold neutron production materials (namely: solid methane, methane hydrate, water ice, etc.). The facility cooling system is based on using liquid helium as a coolant material. The original charging block of the rig allows the samples to be loaded by condensing gas into irradiation cavity or by charging beads of ice prepared before. Preliminary tests for each facility block and assembling them at the working position were carried out. Use of the facility for study accumulation of chemical energy under irradiation at low temperature in materials mentioned above and its spontaneous release was started. 17. Irradiation of Northwest agricultural products International Nuclear Information System (INIS) Eakin, D.E.; Tingey, G.L. 1985-02-01 Irradiation of food for disinfestation and preservation is increasing in importance because of increasing restrictions on various chemical treatments. Irradiation treatment is of particular interest in the Northwest because of a growing supply of agricultural products and the need to develop new export markets. Several products have, or could potentially have, significant export markets if stringent insect control procedures are developed and followed. Due to the recognized potential benefits of irradiation, Pacific Northwest Laboratory (PNL) is conducting this program to evaluate the benefits of using irradiation on Northwest agricultural products under the US Department of Energy (DOE) Defense Byproducts Production and Utilization Program. Commodities currently included in the program are cherries, apples, asparagus, spices, hay, and hides 18. Food irradiation in South Africa Energy Technology Data Exchange (ETDEWEB) De Wet, W J 1982-01-01 The article indicates the necessity for additional methods of food preservation and emphasises that food irradiation is developing into an important method of food preservation because it has been proved scientifically and practically that food irradiation can be applied effectively; also that there is absolute certainty that radiation-processed products are safe and nutritious and that such food is acceptable to the consumer and food trade, also with a view to costs. 
It discusses the joint food irradiation programme of the AEB and Department of Agriculture and Fisheries and points out that exemption for the irradiation of potatoes was already obtained in 1977 and later for mangos, paw-paws, chicken, onions, garlic and strawberries. Conditional exemption was obtained for avocado's and dried bananas. Other food-kinds on which research is being continued are grapes, melons, mushrooms, stone fruit and spices. 19. Finely divided, irradiated tetrafluorethylene polymers International Nuclear Information System (INIS) Brown, M.T.; Rodway, W.G. 1977-01-01 Dry non-sticky fine lubricant powders are made by γ-irradiation of unsintered coagulated dispersion grade tetrafluoroethylene polymers. These powders may also be dispersed in an organic medium for lubricating purposes 20. Effect of irradiation on vitamins International Nuclear Information System (INIS) Kilcast, D. 1994-01-01 Food irradiation is a physical process involving treatment of food with ionising radiation. Its main uses are reduction in spoilage and pathogenic organisms, inhibition of ripening and sprouting processes, and insect disinfestation. Chemical changes in the treated foods are small, and expert committees have concluded that they carry no special nutritional problems. Some vitamins are sensitive to irradiative degradation, however, and opponents of the process have claimed that extensive destruction will occur. Irradiation doses will, however, be limited by organoleptic changes, and maximum levels are being introduced into legislation for specific foods. Examination of the published literature shows that vitamins C and B 1 are the most sensitive water-soluble vitamins, and that E and A are the most sensitive fat-soluble vitamins. Vitamin losses on irradiation of permitted foods in western countries will not be of nutritional importance. (Author) 1. Food irradiation and combination processes International Nuclear Information System (INIS) Campbell-Platt, G.; Grandison, A.S. 1990-01-01 International approval of food irradiation is being given for the use of low and medium doses. Uses are being permitted for different categories of foods with maximum levels being set between 1 and 10 kGy. To maximize the effectiveness of these mild irradiation treatments while minimizing any organoleptic quality changes, combination processes of other technologies with irradiation will be useful. Combinations most likely to be exploited in optimal food processing include the use of heat, low temperature, and modified-atmosphere packaging. Because irradiation does not have a residual effect, the food packaging itself becomes an important component of a successful process. These combination processes provide promising alternatives to the use of chemical preservatives or harsher processing techniques. (author) 2. Fuel irradiation experience at Halden International Nuclear Information System (INIS) Vitanza, Carlo 1996-01-01 The OECD Halden Reactor Project is an international organisation devoted to improved safety and reliability of nuclear power station through an user-oriented experimental programme. A significant part of this programme consists of studies addressing fuel performance issues in a range of conditions realised in specialised irradiation. The key element of the irradiation carried out in the Halden reactor is the ability to monitor fuel performance parameters by means of in-pile instrumentation. 
The paper reviews some of the irradiation rigs and the related instrumentation and provides examples of experimental results on selected fuel performance items. In particular, current irradiation conducted on high/very high burn-up fuels are reviewed in some detail 3. New developments in food irradiation International Nuclear Information System (INIS) Molins, R. 1996-01-01 Food irradiation technology is rapidly gaining worldwide acceptance as a promising tool to help alleviate some important food security and safety concerns, and to facilitate the international trade in food. Because of the different priorities that these issues receive in various countries, food irradiation is being considered by developing countries as the technology of choice over chemical fumigants in applications related to the reduction of food losses such as the insect disinfestation of stored staple and export commodities and the inhibition of sprouting of bulb and tuber crops. In contrast, the use of irradiation as a 'cold pasteurization' method to improve the hygienic quality and safety of foods is emerging as the primary field of application in developed countries. Moreover, the use of irradiation as an alternative, non-chemical quarantine treatment for fresh fruits, vegetables and other agricultural commodities entering international trade will no doubt benefit exporting as well as importing countries. 4 figs International Nuclear Information System (INIS) Beddoes, J.M. 1982-01-01 Irradition has advantages as a method of preserving food, especially in the Third World. The author tabulates some examples of actual use of food irradiation with dates and tonnages, and tells the story of the gradual acceptance of food irradiation by the World Health Organization, other international bodies, and the U.S. Food and Drug Administration (USFDA). At present, the joint IAEA/FAO/WHO standard permits an energy level of up to 5 MeV for gamma rays, well above the 1.3 MeV energy level of 60 Co. The USFDA permits irradiation of any food up to 10 krad, and minor constituents of a diet may be irradiated up to 5 Mrad. The final hurdle to be cleared, that of economic acceptance, depends on convincing the food processing industry that the process is technically and economically efficient 5. ATLAS Pixel Group - Photo Gallery from Irradiation CERN Multimedia 2001-01-01 Photos 1,2,3,4,5,6,7 - Photos taken before irradiation of Pixel Test Analog Chip and Pmbars (April 2000) Photos 8,9,10,11 - Irradiation of VDC chips (May 2000) Photos 12, 13 - Irradiation of Passive Components (June 2000) Photos 14,15, 16 - Irradiation of Marebo Chip (November 1999) 6. Legislations the field of food irradiation International Nuclear Information System (INIS) 1987-05-01 An outline is given of the national legislation in 39 countries in the field of food irradiation. Where available the following information is given for each country: form of legislation, object of legislation including information on the irradiation treatment, the import and export trade of irradiated food, the package labelling and the authorization and control of the irradiation procedures 7. 
Study on irradiation treatment to drunk crab International Nuclear Information System (INIS) Cao Hong; Chen Xiulan; Zhai Jianqing; Bao Jianzhong; Wang Jinrong 2002-01-01 To guarantee the quality of irradiated drunk crab, the manufacture method of the dosimeter, the sample setting and sampling positions, the irradiation time, the asymmetry degree of the irradiation dose and the contrast of the dosimeter are discussed, and some reference data for the commercialization of drunk crab irradiation are provided. 8. Dose mapping role in gamma irradiation industry International Nuclear Information System (INIS) Noriah Mod Ali; John Konsoh Sangau; Mazni Abd Latif 2002-01-01 In this study, the role of dosimetry activity in a gamma irradiator is discussed, in particular the dose distribution in the irradiator, which is a main need in irradiator or chamber commissioning. These distribution data were used to confirm the dosimetry parameters, i.e. exposure time, maximum and minimum dose points, and dose distribution, which were used as guidelines for optimum product irradiation (an illustrative sketch of this maximum/minimum dose bookkeeping is given after this listing). (Author) 9. The PIREX proton irradiation facility International Nuclear Information System (INIS) Victoria, M. 1995-01-01 The proton Irradiation Experiment (PIREX) is a materials irradiation facility installed in a beam line of the 590 MeV proton accelerator at the Paul Scherrer Institute. Its main purpose is the testing of candidate materials for fusion reactor components. Protons of this energy produce simultaneously displacement damage and spallation products, amongst them helium, and can therefore simulate any possible synergistic effects of damage and helium that would be produced by the fusion neutrons. International Nuclear Information System (INIS) Gorman, P.K. 1995-01-01 An experimental inductoslag apparatus to recycle irradiated vanadium was fabricated and tested. An experimental electroslag apparatus was also used to test possible slags. The testing was carried out with slag materials that were fabricated along with impurity bearing vanadium samples. Results obtained include computer simulated thermochemical calculations and experimentally determined removal efficiencies of the transmutation impurities. Analyses of the samples before and after testing were carried out to determine if the slag did indeed remove the transmutation impurities from the irradiated vanadium. 11. Nutritional value of irradiated food International Nuclear Information System (INIS) Diehl, J.F.; Hasselmann, C.; Kilcast, D. 1991-01-01 Statements made in 2 reports by the European Parliamentary Commission on the Environment, Public Health and Consumer Protection, chaired on both occasions by members of the German Green Party, that irradiated foods have no nutritional value are challenged. Attempts by the European Commission to regulate food irradiation in the European Community have been turned down by the European Parliament on the basis of these reports. 12. Progress in food irradiation: Netherlands Energy Technology Data Exchange (ETDEWEB) Stegeman, H 1982-11-01 The Dutch contribution gives an accurate description of the gamma radiopreservation facility where a great variety of types of fruit, vegetables, meat and spices were treated, with the radiosensitivity of bacteria and fungi as well as spores being tested. Wholesomeness studies were limited to feeding tests on pigs and mutagenicity tests on Salmonella typhimurium. 12 products were given as authorized for irradiation, stating irradiation effect, radiation dose and shelf-life duration. 13. Irradiated food - no nutritional value?
International Nuclear Information System (INIS) Diehl, J.F.; Hasselmann, C. 1991-01-01 Attempts by the European Commission to regulate food irradiation in the European Community by a directive have been repeatedly turned down by the European Parliament. The basis of information for the Parliamentarians was a Committee Report, which stated that irradiated foods had no nutritional value. This conclusion is compared with the richly available results of experimental studies. The authors conclude that the European Parliament has been completely misinformed. (orig.) [de] 14. The PIREX proton irradiation facility Energy Technology Data Exchange (ETDEWEB) Victoria, M. [Association EURATOM, Villigen (Switzerland)] 1995-10-01 The proton Irradiation Experiment (PIREX) is a materials irradiation facility installed in a beam line of the 590 MeV proton accelerator at the Paul Scherrer Institute. Its main purpose is the testing of candidate materials for fusion reactor components. Protons of this energy produce simultaneously displacement damage and spallation products, amongst them helium, and can therefore simulate any possible synergistic effects of damage and helium that would be produced by the fusion neutrons. 15. Studies on the irradiated solids International Nuclear Information System (INIS) Lesueur, D. 1988-01-01 The 1988 progress report of the Irradiated Solids laboratory (Polytechnic School, France) is presented. The Laboratory activities concern the investigations on disordered solids (chemical or structural disorder). The disorder itself, its effects on the material physical properties and the particle-matter interactions, are investigated. The research works are performed in the following fields: solid state physics, irradiation and stoichiometric defects, and nuclear materials. The scientific reviews, the congress communications and the theses are listed. [fr] 16. Design of Shanghai irradiation center International Nuclear Information System (INIS) Chen Fugen; Lu Zhongwen; Xue Xiangrong; Yao Zewu; Du Bende; Xu Zhicheng; Du Kangsen 1988-01-01 Shanghai Irradiation Center, situated in the western suburb of Shanghai, was completed in August, 1986. At present, a 6.55 x 10^15 Bq 60Co source has been loaded, though the designed activity of maximum loading is 18.5 x 10^10 Bq. The center is designed mainly for irradiation preservation of food and sterilization of medical devices and tools. Its processing ability is 10 t/h for potatoes. 17. Irradiated vaccines against bovine babesiosis International Nuclear Information System (INIS) Weilgama, D.J.; Weerasinghe, H.M.C.; Perera, P.S.G.; Perera, J.M.R. 1988-01-01 Experiments were conducted on non-splenectomized Bos taurus calves to determine the immunogenicity of blood vaccines containing either Babesia bigemina or Babesia bovis parasites irradiated in a 60Co source. Groups of calves between 6 and 10 months of age, found to be free of previous babesial infections by serodiagnosis, were inoculated with B. bigemina ('G' isolate) irradiated at doses ranging from 350 to 500 Gy. These vaccines caused low to moderate reactions on primary inoculation, which subsided without treatment. Parasites irradiated at 350 Gy produced a strong immunity against virulent homologous challenge. Vaccinated calves also withstood virulent heterologous B. bigemina ('H' isolate) and B. bovis ('A' isolate) challenges made 85 and 129 days later.
It also became evident that the use of babesicides to control reactions should be avoided since early treatment of 'reactor' animals caused breakdown of immunity among vaccinates. B. bovis ('A' isolate) parasites irradiated at doses of either 300 Gy or 350 Gy caused mild to moderate reactions in immunized calves, with the reactions in the 300 Gy group being slightly more severe. On challenge with homologous parasites, animals that had previously been inoculated with organisms irradiated at 300 Gy showed better protection than those that had received parasites irradiated at 350 Gy. (author). 28 refs, 5 tabs. 18. Ferrobielastic twinning in irradiated quartz International Nuclear Information System (INIS) Shiau, S.M. 1986-01-01 19. Shihoro irradiation plant for potato International Nuclear Information System (INIS) Kameyama, Kenji 1985-01-01 There have been rapid moves toward the commercialization of food irradiation around the world since November, 1980, when a joint FAO/IAEA/WHO expert committee made a recommendation on the wholesomeness of irradiated foods. The bold US move toward commercialization has had a great impact. Ahead of these moves around the world, Japan built a commercial irradiation plant in 1974, which has been operated for inhibiting the sprouting of potatoes. This plant was built in Shihoro, Hokkaido, and two thirds of the 400 million yen construction cost was provided by the Government and Hokkaido authorities for five agricultural cooperative associations of four local townships. Since then, the plant has been under the joint management of these cooperatives. The aim and circumstances of the plant construction are described. The mechanism of the plant, with conveyors, a turntable and a Co-60 source of 300,000 Ci, is shown. The plant processes 15 tons of potatoes per hour with a dose from 60 to 150 Gy. Potato bruising and irradiation effect, irradiation time and effect, and post-irradiation storage temperature and potato quality are reported. (Kako, I.) 20. China's move to food irradiation International Nuclear Information System (INIS) Wedekind, L.H. 1986-01-01 Chinese officials outlined China's past and future directions at a recent international food irradiation seminar in Shanghai sponsored by the FAO and IAEA. The meeting was attended by about 170 participants from China and 22 other countries, primarily from the Asian and Pacific region. Three food irradiation plants currently are operating in the region and 14 more are planned over the next 5 years. It was reported that China continues to suffer high food losses, up to 30% for some commodities, primarily due to preservation and storage problems. In January 1986, the first of five regional irradiation facilities planned in China officially opened in Shanghai. The Shanghai irradiation centre plans to process up to 35,000 tons of vegetables a year, as well as some spices, fruits, and non-food products. The Ministry of Public Health has approved seven irradiated foods as safe for human consumption: rice, potatoes, onions, garlic, peanuts, mushrooms and pork sausages; approval for apples is expected shortly. The Chinese officials at the Shanghai meeting stressed their openness to foreign participation and cooperation in food irradiation's development. 1. Identification methods for irradiated wheat International Nuclear Information System (INIS) Zhu Shengtao; Kume, Tamikazu; Ishigaki, Isao.
1992-02-01 The effect of irradiation on wheat seeds was examined using various kinds of analytical methods for the identification of irradiated seeds. In the germination test, the growth of sprouts was markedly inhibited at 500 Gy, which was not affected by storage. The decrease in germination percentage was detected at 3300 Gy. The results of enzymatic activity change in the germ, measured by the Vita-Scope germinator, showed that seeds irradiated at 10 kGy could be identified. The contents of amino acids in ungerminated and germinated seeds were analyzed. Irradiation at 10 kGy caused a decrease of lysine content, but the change was small and needs very careful operation to detect. The chemiluminescence intensity increased with radiation dose and decreased during storage. Wheat irradiated at 10 kGy could be identified even after 3 months of storage. In the electron spin resonance (ESR) spectrum analysis, the signal intensity with the g value of 2.0055 of skinned wheat seeds increased with radiation dose. Among these methods, the germination test was the most sensitive and effective for identification of irradiated wheat. (author) 2. Fracture toughness of irradiated beryllium International Nuclear Information System (INIS) Beeston, J.M. 1978-01-01 International Nuclear Information System (INIS) 1991-10-01 This newsletter contains brief summaries of three coordinated research meetings held in 1991: irradiation in combination with other processes for improving food quality; application of irradiation techniques for food processing in Africa; and the food irradiation programme for Middle East and European countries. The first Workshop on Public Information on Food Irradiation is summarized, and a Coordinated Research Programme on Irradiation as a Quarantine Treatment of Mites, Nematodes and Insects other than Fruit Fly is announced. This issue also contains a report on the status of food irradiation in China, and a supplement lists clearances of irradiated foods. Tabs 4. The present situation of irradiation services International Nuclear Information System (INIS) Hironiwa, Takayuki 2014-01-01 The present state of food irradiation in Japan is presented from the point of view of a contractor providing irradiation services. Sprout inhibition of potatoes by irradiation, the only application so far approved by the Government, and spice treatment, for which approval is currently being sought, are explained. Existing establishments in Japan capable of undertaking irradiation services as a business are outlined, including Co-60 gamma ray, X-ray and electron beam irradiation. Principles of the irradiation-induced physical and chemical effects in irradiated materials, specifically organic polymers, and a brief explanation of the facilities together with their safety devices are also given. (S. Ohno) 5. International Nuclear Information System (INIS) Boice, J.D. 1981-01-01 6. Electron accelerator technology research in food irradiation International Nuclear Information System (INIS) Jin Jianqiao; Ye Mingyang; Zhang Yue; Yang Bin; Xu Tao; Kong Xiangshan 2014-01-01 Electron accelerators were applied instead of cobalt sources for food irradiation, to keep food quality and to improve the effect of the treatment. Appropriate accelerator parameters lead to an optimal technique. The irradiation effect is associated with the relationship between uniformity and irradiation speed, the effect of cargo size on radiation penetration, as well as other factors that affect the irradiation effects. Industrialization of electron accelerator irradiation is expected in the future. (authors) 7.
Improvement of irradiation facilities performance in JMTR International Nuclear Information System (INIS) Kanno, Masaru; Sakurai, Susumu; Honma, Kenzo; Sagawa, Hisashi; Nakazaki, Chousaburo 1999-01-01 Various kinds of irradiation facilities are installed in the JMTR for the purpose of irradiation tests on fuels and materials and of producing radioisotopes. The irradiation facilities have so far been improved at every opportunity presented by new irradiation requirements and by the renewal of facilities that had reached their design lifetime. Of these irradiation facilities, improvements of the power ramping test facility (BOCA/OSF-1 facility) and the hydraulic rabbit No.2 (HR-2 facility) are described here. (author) 8. Changes of lipids in irradiated chickens International Nuclear Information System (INIS) Moersel, J.T.; Wende, I.; Schwarz, K. 1991-01-01 Chickens were irradiated in a 60Co gamma irradiation source. The irradiation has been done to reduce or eliminate Salmonella. The experiments were done to test whether changes of lipids take place with this decontamination method of chickens. It was seen that peroxidation of lipids was more rapid than in the control. The time of storage of irradiated chickens has to be shorter because of the changes in lipids. After irradiation the chickens had trade quality. (orig.) [de] 9. Chemical aspects of irradiated mangoes International Nuclear Information System (INIS) Singh, H. 1990-06-01 10. Independent Laboratory for Detection of Irradiated Foods. Detection of the irradiated food in the INCT International Nuclear Information System (INIS) Stachowicz, W. 2007-01-01 The lecture shows different methods applied for the detection of irradiated foods. The structure and equipment of the Independent Laboratory for Detection of Irradiated Foods operating in the INCT are described. Several examples of detection of food irradiation are given in detail. 11. Economic aspects of food irradiation Directory of Open Access Journals (Sweden) M. M. Osetskaya 2017-01-01 Science.gov (United States) Souza, Clécia Moura; Silva, Leonardo G. The aim of this work was to characterize the samples of irradiated and non-irradiated rubber from automotive scrap tires. Rubber samples from scrap tires were irradiated at irradiation doses of 200, 400 and 600 kGy in an electron beam accelerator. Subsequently, both the irradiated and non-irradiated samples were characterized by thermogravimetry (TG), differential scanning calorimetry (DSC), tensile strength mechanical test, and Fourier transform infrared (FTIR) spectrophotometry. 13. Composting of sewage sludge irradiated International Nuclear Information System (INIS) Hashimoto, Shoji; Watanabe, Hiromasa; Nishimura, Koichi; Kawakami, Waichiro 1981-01-01 Recently, the development of the techniques to return sewage sludge to forests and farm lands has been actively made, but it is necessary to assure its hygienic condition lest the sludge be contaminated by pathogenic bacteria. The research to treat sewage sludge by irradiation and utilize it as fertilizer or soil-improving material has been carried out from early on in Europe and America. The effects of the irradiation of sludge are sterilization, to kill parasites and their eggs, the inactivation of weed seeds and the improvement of dehydration. In Japan, agriculture is carried out in the vicinity of cities, therefore it is not realistic to use irradiated sludge for farm lands as it is.
The composting treatment of sludge by aerobic fermentation has attracted attention as a way to eliminate the harms when the sludge is returned to forests and farm lands. It is desirable to treat sludge as quickly as possible from the standpoint of sewage treatment; accordingly, the speed of composting is a problem. The isothermal fermentation experiment on irradiated sludge was carried out using a small-scale fermentation tank and strictly controlling fermentation conditions, and the effects of various factors on the fermentation speed were studied. The experimental setup and method are described. The speed of composting reached the maximum at 50 deg C and at neutral or weak alkaline pH. The speed increased with the increase of irradiation dose up to 30 Mrad. (Kako, I.) 14. Mechanical properties of irradiated beryllium Energy Technology Data Exchange (ETDEWEB) Beeston, J.M.; Longhurst, G.R.; Wallace, R.S. (EG and G Idaho, Inc., Idaho Falls, ID (United States). Idaho National Engineering Lab.); Abeln, S.P. (EG and G Rocky Flats, Inc., Golden, CO (United States)) 1992-10-01 Beryllium is planned for use as a neutron multiplier in the tritium breeding blanket of the International Thermonuclear Experimental Reactor (ITER). After fabricating samples of beryllium at densities varying from 80 to 100% of the theoretical density, we conducted a series of experiments to measure the effect of neutron irradiation on mechanical properties, especially strength and ductility. Samples were irradiated in the Advanced Test Reactor (ATR) to a neutron fluence of 2.6 x 10^25 n/m^2 (E > MeV) at an irradiation temperature of 75 deg C. These samples were subsequently compression-tested at room temperature, and the results were compared with similar tests on unirradiated specimens. We found that the irradiation increased the strength by approximately four times and reduced the ductility to approximately one fourth. Failure was generally ductile, but the 80% dense irradiated samples failed in brittle fracture with significant generation of fine particles and release of small quantities of tritium. (orig.). 15. Stem cell migration after irradiation International Nuclear Information System (INIS) Nothdurft, W.; Fliedner, T.M. 1979-01-01 The survival rate of irradiated rodents could be significantly improved by shielding only small parts of the hemopoietic tissues during the course of irradiation. The populations of circulating stem cells in adult organisms are considered to be of some importance for the homeostasis between the many sites of blood cell formation and for the necessary flexibility of hemopoietic response in the face of fluctuating demands. Pluripotent stem cells are migrating through peripheral blood, as has been shown for several mammalian species. Under steady state conditions, the exchange of stem cells between the different sites of blood cell formation appears to be restricted. Their presence in blood and the fact that they are in balance with the extravascular stem cell pool may well be of significance for the surveillance of the integrity of local stem cell populations. Any decrease of the stem cell population in blood below a critical size results in the rapid immigration of circulating stem cells in order to restore the local stem cell pool size. Blood stem cells are involved in the regeneration after whole-body irradiation if the stem cell population in the bone marrow is reduced to less than 10% of the normal state.
In the animals subjected to partial-body irradiation, the circulating stem cells appear to be the only source for the repopulation of the heavily irradiated, aplastic sites of hemopoietic organs. (Yamashita, S.) 16. Gamma irradiation of rice grains International Nuclear Information System (INIS) Roy, M.K.; Ghosh, S.K.; Chatterjee, S.R. 1991-01-01 Rice grains of the variety Pusa-33, at 12.0% moisture, were irradiated with doses of 0-150 kGy. The crystallinity of starch, soluble amylose and yellowness of treated grains increased with increasing radiation dose, but water absorption and volume expansion on cooking decreased. Irradiation at doses of 3-5 kGy imperceptibly increased the hardening of rice cooled after cooking, but had no effect on edibility. The off-aroma in irradiated grains was perceptible at doses higher than 5 kGy. The changes in colour and aroma persisted also on cooking. Up to a dose of 5 kGy, the sensory scores of rice, both cooked and uncooked, were at or above the acceptable limit of score (5,5). The doses of 3 and 5 kGy were highly effective in reducing the fungal population in irradiated grains, but in view of the changes in colour and cooking qualities, 3 kGy is the preferred dose-limit of irradiation. (author). 17 refs., 5 tabs., 1 fig 17. Hemibody irradiation in multiple myeloma International Nuclear Information System (INIS) Tobias, J.S.; Richards, J.D.M.; Blackman, G.M.; Joannides, T.; Trask, C.W.L.; Nathan, J.I. 1985-01-01 Eighteen patients with multiple myeloma were treated by hemibody irradiation using large single fractions, usually to a dose of 10 Gy (lower half) and 7.5 Gy (upper half). All except one patient had previously been treated by multiple courses of conventional chemotherapy with melphalan and prednisone, and were considered to be resistant to further chemotherapy. In most cases, local field irradiation had also been given for symptomatic bone pain. Of the 13 patients who had symptoms at the start of hemibody irradiation, 11 improved sufficiently for their analgesia requirement to be reduced. In eight patients, there was a significant fall in circulating immunoglobulin but no patient with Bence-Jones proteinuria had complete resolution of this biochemical abnormality. Although thrombocytopenia and neutropenia were common, only two patients required platelet transfusion and the treatment was in general extremely well tolerated. Survival following hemibody irradiation was similar to the survival reported from the use of 'second-line' chemotherapy and we feel that hemibody irradiation is a more acceptable alternative for most patients. (orig.) 18. Nutritional aspects of food irradiation International Nuclear Information System (INIS) Murray, T.K. 1981-01-01 From the nutritional point of view the irradiation of fruits and vegetables presents few problems. It should be noted that irradiation-induced changes in the β-carotene content of papaya (not available to the Joint Expert Committee in 1976) have been demonstrated to be unimportant. The Joint Expert Committee also noted the need for more data on thiamine loss. These have been forthcoming and indicate that control of insects in rice is possible without serious loss of the vitamin. Experiments with other cereal crops were also positive in this regard. The most important evidence on the nutritional quality of irradiated beef and poultry was the demonstration that they contained no anti-thiamine properties.
A point not to be overlooked is the rather serious loss of thiamine when mackerel is irradiated at doses exceeding 3 kGy. Recent evidence indicates that thiamine loss could be reduced by using a high dose rate application process. Though spices contribute little directly to the nutritional quality of the food supply, they play an important indirect role. It is thus encouraging that they can be sterilized by irradiation without loss of aroma and taste and without significant loss of β-carotenes. Of future importance are the observations on single cell protein and protein-fat-carbohydrate mixtures. The reduction of net protein utilization in protein-fat mixtures may be the result of physical interaction of the components. (orig./AJ) 19. Food irradiation: Public opinion surveys International Nuclear Information System (INIS) Kerr, S.D. 1987-01-01 The Canadian government is discussing the legislation, regulations and practical protocol necessary for the commercialization of food irradiation. Food industry marketing, public relations and media expertise will be needed to successfully introduce this new processing choice to retailers and consumers. Consumer research to date, including consumer opinion studies and market trials conducted in the Netherlands, United States, South Africa and Canada, will be explored for signposts to successful approaches to the introduction of irradiated foods to retailers and consumers. Research has indicated that the terms used to describe irradiation and information designed to reduce consumer fears will be important marketing tools. Marketers will be challenged to promote old foods, which look the same to consumers, in a new light. Simple like/dislike or intention-to-buy surveys will not be effective tools. Consumer fears must be identified and effectively handled to support a receptive climate for irradiated food products. A cooperative government, industry, health professional, consumer association and retailer effort will be necessary for the successful introduction of irradiated foods into the marketplace. Grocery Products Manufacturers of Canada is a national trade association of more than 150 major companies engaged in the manufacture of food, non-alcoholic beverages and an array of other national-brand consumer items sold through retail outlets. 20. Neutron irradiation of seeds 2 Energy Technology Data Exchange (ETDEWEB) 1968-10-01 The irradiation of seeds with the fast neutrons of research reactors has been hampered by difficulties in accurately measuring dose and in obtaining repeatable and comparable results. Co-ordinated research under an international program organized by the FAO and IAEA has already resulted in significant improvements in methods of exposing seeds in research reactors and in obtaining accurate dosimetry. This has been accomplished by the development of a standard reactor facility for the neutron irradiation of seeds and standard methods for determining fast-neutron dose and the biological response after irradiation. In this program various divisions of the IAEA and the Joint FAO/IAEA Division co-operate with a number of research institutes and reactor centres throughout the world. Results of the preliminary experiments were reported in Technical Reports Series No. 76, ''Neutron Irradiation of Seeds''. This volume contains the proceedings of a meeting of co-operators in the FAO/IAEA Neutron Seed Irradiation Program and other active scientists in this field. The meeting was held in Vienna from 11 to 15 December 1967. Refs, figs and tabs.
International Nuclear Information System (INIS) Ehlermann, D.; Reinacher, E.; Antonacopoulos, N. 1977-01-01 2. Effects of irradiation upon spices Energy Technology Data Exchange (ETDEWEB) 1978-04-01 3. Nutritional aspects of food irradiation Energy Technology Data Exchange (ETDEWEB) Murray, T K 1981-08-01 From the nutritional point of view the irradiation of fruits and vegetables presents few problems. It should be noted that irradiation-induced changes in the β-carotene content of papaya (not available to the Joint Expert Committee in 1976) have been demonstrated to be unimportant. The Joint Expert Committee also noted the need for more data on thiamine loss. These have been forthcoming and indicate that control of insects in rice is possible without serious loss of the vitamin. Experiments with other cereal crops were also positive in this regard. The most important evidence on the nutritional quality of irradiated beef and poultry was the demonstration that they contained no anti-thiamine properties. A point not to be overlooked is the rather serious loss of thiamine when mackerel is irradiated at doses exceeding 3 kGy. Recent evidence indicates that thiamine loss could be reduced by using a high dose rate application process. Though spices contribute little directly to the nutritional quality of the food supply, they play an important indirect role. It is thus encouraging that they can be sterilized by irradiation without loss of aroma and taste and without significant loss of β-carotenes. Of future importance are the observations on single cell protein and protein-fat-carbohydrate mixtures. The reduction of net protein utilization in protein-fat mixtures may be the result of physical interaction of the components. 4. Irradiation damage to the lung International Nuclear Information System (INIS) Fennessy, J.J. 1987-01-01 While some degree of injury to normal, non-tumor-bearing, intrathoracic structures always occurs following irradiation for cure or palliation of neoplastic disease, clinical expression of this injury is uncommon. However, under certain circumstances, clinical manifestations may be severe and life threatening. Acute radiographic manifestations of pulmonary injury usually appear either synchronous with or, more typically, seven to ten days after the onset of the clinical syndrome. The acute signs of edema and slight volume loss within the irradiated zone are nonspecific except for their temporal and spatial relationship to the irradiation of the patient. Resolution of the acute changes is followed by pulmonary cicatrization, which is almost always stable within one year after completion of therapy. Change in postirradiation scarring following stabilization of the reaction must always be assumed to be due to some other process. While the radiograph primarily reveals pulmonary injury, all tissues, including the heart and major vessels, are susceptible, and the radiologist must recognize that any change within the thorax of a patient who has undergone thoracic irradiation may be a complication of that treatment. Differentiation of irradiation injury from residual or recurrent tumor, drug reaction, or opportunistic infection may be difficult and at times impossible. 5. The wholesomeness of irradiated foodstuffs International Nuclear Information System (INIS) Elias, P.S.
1992-01-01 The evaluation of the wholesomeness of irradiated foodstuffs is based on a large body of scientific data mainly from four areas: the radiochemical effects of ionising radiation, the toxicity of irradiated foodstuffs, the microbiological effects and the nutritional aspects. The technology of processing food with ionising radiation with the objectives of decontamination, disinfestation and of extended shelf life has been investigated in many countries for more than forty years. The wholesomeness of irradiated food has been examined over the same period through intensive research into the radiochemical, toxicological, microbiological and nutritional changes resulting from this processing technology. The ultimate result was the decision of the international Joint FAO/IAEA/WHO Expert Committee, published in 1981, to regard in general the irradiation of food in the dose range up to 10 kGy as an acceptable processing technology and to consider foodstuffs so treated as wholesome. A large number of national authorities came to similar conclusions following their own evaluation of the available data. The most important scientific results, the general restrictions on the use of food irradiation, and existing prejudices will be dealt with. (orig.) [de] 6. Plasmodium falciparum: attenuation by irradiation International Nuclear Information System (INIS) Waki, S.; Yonome, I.; Suzuki, M. 1983-01-01 The effect of irradiation on the in vitro growth of Plasmodium falciparum was investigated. The cultured malarial parasites at selected stages of development were exposed to gamma rays and the sensitivity of each stage was determined. The stages most sensitive to irradiation were the ring forms and the early trophozoites; late trophozoites were relatively insensitive. The greatest resistance was shown when parasites were irradiated at a time of transition from the late trophozoite and schizont stages to young ring forms. The characteristics of radiosensitive variation in the parasite cycle resembled that of mammalian cells. Growth curves of parasites exposed to doses of irradiation up to 150 Gy had the same slope as nonirradiated controls, but parasites which were exposed to 200 Gy exhibited a growth curve which was less steep than that for parasites in other groups. Less than 10 organisms survived from the 10^6 parasites exposed to this high dose of irradiation; the possibility exists of obtaining radiation-attenuated P. falciparum. 7. Food irradiation: Gamma processing facilities Energy Technology Data Exchange (ETDEWEB) 1998-12-31 The number of products being radiation processed is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, bulk spices, meats, sea foods and waste effluents. Not only do the products differ but also many products, even those within the same groupings, require different minimum and maximum radiation doses. These variations create many different requirements in the irradiator design. The design of Cobalt-60 radiation processing facilities is well established for a number of commercial applications. Installations in over 40 countries, with some in operation since the early 1960s, are testimony to the fact that irradiator design, manufacture, installation and operation is a well established technology.
However, in order to design gamma irradiators for the preservation of foods, one must recognize those parameters typical of the food irradiation process as well as those systems and methods already well established in the food industry. This paper discusses the basic design concepts for gamma food irradiators. They are most efficient when designed to handle a limited product density range at an established dose. Safety of Cobalt-60 transport, safe facility operation principles and the effect of various processing parameters on economics will also be discussed. (Author) 8. Food irradiation: Gamma processing facilities International Nuclear Information System (INIS) 1997-01-01 The number of products being radiation processed is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, bulk spices, meats, sea foods and waste effluents. Not only do the products differ but also many products, even those within the same groupings, require different minimum and maximum radiation doses. These variations create many different requirements in the irradiator design. The design of Cobalt-60 radiation processing facilities is well established for a number of commercial applications. Installations in over 40 countries, with some in operation since the early 1960s, are testimony to the fact that irradiator design, manufacture, installation and operation is a well established technology. However, in order to design gamma irradiators for the preservation of foods, one must recognize those parameters typical of the food irradiation process as well as those systems and methods already well established in the food industry. This paper discusses the basic design concepts for gamma food irradiators. They are most efficient when designed to handle a limited product density range at an established dose. Safety of Cobalt-60 transport, safe facility operation principles and the effect of various processing parameters on economics will also be discussed. (Author) 9. Food irradiation: Gamma processing facilities Energy Technology Data Exchange (ETDEWEB) 1997-12-31 The number of products being radiation processed is constantly increasing and today includes such diverse items as medical disposables, fruits and vegetables, bulk spices, meats, sea foods and waste effluents. Not only do the products differ but also many products, even those within the same groupings, require different minimum and maximum radiation doses. These variations create many different requirements in the irradiator design. The design of Cobalt-60 radiation processing facilities is well established for a number of commercial applications. Installations in over 40 countries, with some in operation since the early 1960s, are testimony to the fact that irradiator design, manufacture, installation and operation is a well established technology. However, in order to design gamma irradiators for the preservation of foods, one must recognize those parameters typical of the food irradiation process as well as those systems and methods already well established in the food industry. This paper discusses the basic design concepts for gamma food irradiators. They are most efficient when designed to handle a limited product density range at an established dose. Safety of Cobalt-60 transport, safe facility operation principles and the effect of various processing parameters on economics will also be discussed. (Author) International Nuclear Information System (INIS) Ehlermann, D.; Reinacher, E.; Antonacopoulos, N.
1977-01-01 11. Neutron irradiation of seeds 2 International Nuclear Information System (INIS) 1968-01-01 The irradiation of seeds with the fast neutrons of research reactors has been hampered by difficulties in accurately measuring dose and in obtaining repeatable and comparable results. Co-ordinated research under an international program organized by the FAO and IAEA has already resulted in significant improvements in methods of exposing seeds in research reactors and in obtaining accurate dosimetry. This has been accomplished by the development of a standard reactor facility for the neutron irradiation of seeds and standard methods for determining fast-neutron dose and the biological response after irradiation. In this program various divisions of the IAEA and the Joint FAO/IAEA Division co-operate with a number of research institutes and reactor centres throughout the world. Results of the preliminary experiments were reported in Technical Reports Series No. 76, ''Neutron Irradiation of Seeds''. This volume contains the proceedings of a meeting of co-operators in the FAO/IAEA Neutron Seed Irradiation Program and other active scientists in this field. The meeting was held in Vienna from 11 to 15 December 1967. Refs, figs and tabs 12. Materials irradiation research in neutron science Energy Technology Data Exchange (ETDEWEB) Noda, Kenji; Oyama, Yukio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] 1997-11-01 Materials irradiation research is planned in the Neutron Science Research Program. A materials irradiation facility has been conceived as one of the facilities in the concept of the Neutron Science Research Center at JAERI. The neutron irradiation field of the facility is characterized by a high flux of spallation neutrons with a very wide energy range up to several hundred MeV, good accessibility to the irradiation field, good controllability of irradiation conditions, etc. Extensive use of such a materials irradiation facility is expected for fundamental materials irradiation research and R and D of nuclear energy systems such as an accelerator-driven incineration plant for long-lifetime nuclear waste. In this paper, the outline concept of the materials irradiation facility, the characteristics of the irradiation field, a preliminary technical evaluation of the target to generate spallation neutrons, and the materials research expected for the Neutron Science Research program are described. (author) Energy Technology Data Exchange (ETDEWEB) Wanke, R [Director Business Development. SteriGenics International Inc. 17901 East Warren Avenue No. 4, Detroit, Michigan 48224-1333 (United States)] 1998-12-31 I. Factors currently influencing advancing opportunities for food irradiation include: heightened incidence and awareness of food borne illnesses and causes. Concerns about ensuring food safety in international as well as domestic trade. Regulatory actions regarding commonly used fumigants/pesticides, e.g. MeBr. II. Modern irradiator design: the SteriGenics "Mini Cell". A new design for new opportunities. Faster installation of the facility. Operationally and space efficient. Provides local "onsite control". Red meat: a currently developing opportunity. (Author) Energy Technology Data Exchange (ETDEWEB) Wanke, R. [Director Business Development. SteriGenics International Inc. 17901 East Warren Avenue No. 4, Detroit, Michigan 48224-1333 (United States)] 1997-12-31 I.
Factors currently influencing advancing opportunities for food irradiation include: heightened incidence and awareness of food borne illnesses and causes. Concerns about ensuring food safety in international as well as domestic trade. Regulatory actions regarding commonly used fumigants/pesticides, e.g. MeBr. II. Modern irradiator design: the SteriGenics "Mini Cell". A new design for new opportunities. Faster installation of the facility. Operationally and space efficient. Provides local "onsite control". Red meat: a currently developing opportunity. (Author) 15. A study on UV irradiated HDPE International Nuclear Information System (INIS) Sang Haibo; Liu Zimin; Wu Shishan; Shen Jian 2006-01-01 16. Post-irradiation examination and R and D programs using irradiated fuels at KAERI International Nuclear Information System (INIS) Chun, Yong Bum; Min, Duck Kee; Kim, Eun Ka and others 2000-12-01 This report describes the Post-Irradiation Examination (PIE) and R and D programs using irradiated fuels at KAERI. The objectives of post-irradiation examination (PIE) for the PWR irradiated fuels, CANDU fuels, HANARO fuels and test fuel materials are to verify the irradiation performance and their integrity as well as to construct a fuel performance data base. The comprehensive utilization program of the KAERI's post-irradiation examination related nuclear facilities such as the Post-Irradiation Examination Facility (PIEF), the Irradiated Materials Examination Facility (IMEF) and HANARO is described. 17. Post-irradiation examination and R and D programs using irradiated fuels at KAERI International Nuclear Information System (INIS) Chun, Yong Bum; So, Dong Sup; Lee, Byung Doo; Lee, Song Ho; Min, Duck Kee 2001-09-01 This report describes the Post-Irradiation Examination (PIE) and R and D programs using irradiated fuels at KAERI. The objectives of post-irradiation examination (PIE) for the PWR irradiated fuels, CANDU fuels, HANARO fuels and test fuel materials are to verify the irradiation performance and their integrity as well as to construct a fuel performance data base. The comprehensive utilization program of the KAERI's post-irradiation examination related nuclear facilities such as the Post-Irradiation Examination Facility (PIEF), the Irradiated Materials Examination Facility (IMEF) and HANARO is described. 18. Ionic conductivity in irradiated KCl International Nuclear Information System (INIS) Vignolo Rubio, J. 1979-01-01 The ionic conductivity of X and gamma irradiated KCl single crystals has been studied between room temperature and 600 deg C. The radiation induced damage resulting in a decrease of the conductivity heals by thermal annealing in two steps which are at about 350 and 550 deg C respectively. It has been found that the radiation induced colour centres are not involved in the observed decrease of the ionic conductivity. However, it has been observed that the effects of quenching and plastic deformation on the conductivity of the samples are very similar to the effect induced by irradiation. It is suggested that small radiation induced dislocation loops might cause the ionic conductivity decrease observed in irradiated samples. (auth) 19. Internal friction in irradiated textolite International Nuclear Information System (INIS) Zajkin, Yu.A.; Kozhamkulov, B.A.; Koztaeva, U.P. 1996-01-01 Structural relaxation was studied in irradiated textolites of the ST and ST-EhTF trade marks, pressed materials obtained by impregnation of fibreglass with phenol and epoxy-triphenol binders, respectively.
Measurements of the temperature dependence of internal friction (TDIF) were carried out in a torsional pendulum at an oscillation frequency of 0.6-1.0 Hz, before and after irradiation by bremsstrahlung gamma quanta with an energy of 3 MeV from the EhLU-4 electron accelerator. α and β peaks, related to segment motion in the main and side chains of the macromolecules, have been observed in the TDIF of all the textolites. The growth of peak heights after irradiation indicates an increase of segment mobility in the main chain and a de-freezing of segments in the side chains, and it could be considered a qualitative measure of the rate of radiation destruction. Comparison of the temperature dependences of internal friction indicates a higher radiation stability of the ST-EhTF textolite. 20. Endodontics and the irradiated patient International Nuclear Information System (INIS) Cox, F.L. 1976-01-01 With increasingly larger numbers of irradiated patients in our population, it seems likely that all dentists will eventually be called upon to manage the difficult problems that these patients present. Of utmost concern should be the patient's home care program and the avoidance of osteoradionecrosis. Endodontics and periodontics are the primary areas for preventing or eliminating the infection that threatens osteoradionecrosis. Endodontic treatment must be accomplished with the utmost care and maximum regard for the fragility of the periapical tissues. Pulpally involved teeth should never be left open in an irradiated patient, and extreme care must be taken with the between-visits seal. If one is called upon for preradiation evaluation, routine removal of all molar as well as other compromised teeth should be considered. Attention should be directed to the literature for further advances in the management of irradiated patients. 1. Ionic conductivity in irradiated KCl International Nuclear Information System (INIS) Vignolo Rubio, J. 1979-01-01 The ionic conductivity of X and gamma irradiated KCl single crystals has been studied between room temperature and 600 deg C. The radiation induced damage resulting in a decrease of the conductivity heals by thermal annealing in two steps which are at about 350 and 550 deg C respectively. It has been found that the radiation induced colour centres are not involved in the observed decrease of the ionic conductivity. However, it has been observed that the effects of quenching and plastic deformation on the conductivity of the samples are very similar to the effect induced by irradiation. It is suggested that small radiation induced dislocation loops might cause the ionic conductivity decrease observed in irradiated samples. (Author) 2. Irradiation for conjunctival granulocytic sarcoma International Nuclear Information System (INIS) Fleckenstein, K.; Geinitz, H.; Grosu, A.; Molls, M.; Goetze, K.; Werner, M. 2003-01-01 Case History and Findings: A 73-year-old woman with a history of myeloproliferative syndrome (MPS) presented with bilateral chemosis, redness and burning of the eyes. The ocular motility was severely impaired. Ophthalmological examination revealed markedly distended conjunctivas on both sides. Biopsy disclosed conjunctival granulocytic sarcoma as an initial symptom of acute myelogenous leukemia (AML). Diagnosis was confirmed by peripheral blood smear and bone marrow aspiration. Treatment and Outcome: The orbital tumor disappeared completely after local external beam irradiation with a total dose of 30 Gy and no further orbital recurrence occurred.
With chemotherapy following irradiation, transient hematological remission was achieved. Five months after diagnosis, the patient died of respiratory failure following atypical pneumonia as a consequence of her underlying disorder. Conclusion: Detection of orbital granulocytic sarcoma, even in the absence of typical leukemic symptoms, is of practical importance, because treatment with irradiation can lead to stabilization or improvement in the patient's vision. (orig.) 3. Detection of irradiated frozen foods International Nuclear Information System (INIS) Miyahara, Makoto; Toyoda, Masatake; Saito, Yukio 1998-01-01 We tried to detect whether foods were irradiated or not by the o-tyrosine method and the mtDNA method. The o-tyrosine method was applied to four kinds of meat (beef, pork, chicken and tuna). The results showed a linear relation between the amount of o-tyrosine and dose (0-10 kGy). However, only small amounts of o-tyrosine were produced in some cases, which made application of the method very difficult because of the small difference between irradiated and untreated foods. The possibility of the mtDNA method was also investigated. Work and time for separation of mitochondria and extraction of DNA were reduced by a protease-solid phase extraction method. By the PCR method, mtDNA could be accurately detected from a very small amount of DNA. The irradiation effect can be detected from 50 Gy. (S.Y.) 4. International status of food irradiation International Nuclear Information System (INIS) Diehl, J.F. 1983-01-01 Radiation processing of foods has been studied for over 30 years. To a considerable extent this research was carried out in the framework of various international projects. After optimistic beginnings in the 1950s and long delays, caused by uncertainty about the health safety of foods so treated, food irradiation has now reached the stage of practical application in several countries. In order to prepare the way for world-wide acceptance of the new process, the Codex Alimentarius Commission has accepted an 'International General Standard for Irradiated Foods' and an 'International Code of Practice for the Operation of Irradiation Facilities Used for the Treatment of Foods'. Psychological barriers to a process associated with the word 'radiation' are still formidable; it appears, however, that acceptance by authorities, food industry and consumers continues to grow. 5. Dose distribution of non-coplanar irradiation Energy Technology Data Exchange (ETDEWEB) Fukui, Toshiharu; Wada, Yoichi; Takenaka, Eiichi 1987-02-01 Non-coplanar irradiations were applied to the treatment of brain tumors. The dose around the target area due to non-coplanar irradiation was half or less of the dose delivered when coplanar irradiation was used. The integral volume dose due to this irradiation was not always less than that due to conventional opposing or rotational irradiation. This irradiation is better applied to the following: as a boost therapy, glioblastoma multiforme; as a radical therapy, recurrent brain tumors, well differentiated brain tumors such as craniopharyngioma, hypophyseal tumor, etc., and AV-malformation. 6. Irradiation induced effects in zirconium (A review) International Nuclear Information System (INIS) 1975-06-01 Irradiation creep in zirconium and its alloys is comprehensively discussed. The main theories are outlined, and the gaps between them and the observed creep behaviour are indicated.
Although irradiation induced point defects play an important role, effects due to irradiation induced dislocation loops seem insignificant. The experimental results suggest that microstructural variations due to prior cold-working or hydrogen injection perturb the irradiation growth and the irradiation creep of zircaloy. Further investigations into these areas are required. One disadvantage of creep experiments lies in their duration. The possibility of accelerated experiments using ion implantation or electron irradiation is examined in the final section, and its possible advantages and disadvantages are outlined. (author) 7. Internal irradiation for cystic craniopharyngioma International Nuclear Information System (INIS) Kobayashi, Tatsuya; Kageyama, Naoki 1979-01-01 8. Regulatory aspects of food irradiation International Nuclear Information System (INIS) Nowlan, N.V. 1985-01-01 The role of the Nuclear Energy Board in relation to radiation safety in Ireland is described. The Board has the duty to control by licence all activities involving ionizing radiation, as well as providing advice and information to the Government on all aspects of radiation safety. The licensing procedures used by the Board, including site approval, construction, commissioning, source loading and commercial operation, in the licensing of large irradiation facilities were described, and an outline of the proposed new legislation which may become necessary if and when the irradiation of food for commercial purposes begins in Ireland is given. 9. ESR identification of irradiated foodstuffs International Nuclear Information System (INIS) Raffi, J. 1993-01-01 The conditions required to use Electron Spin Resonance (ESR) in the identification of irradiated foods are first described. Then we present the results of an intercomparison sponsored by the Community Bureau of Reference involving 22 European laboratories. Qualitative identification of irradiated beef bones, dried grapes and papaya is very easy. Kinetic studies are necessary in the case of fish species. Further research is required in the case of pistachio nuts. Although all laboratories could distinguish between the two dose ranges used in the case of meat bones (i.e. 1-3 and 7-10 kGy), there is an overlap of the results from the different laboratories. 2 tabs., 3 figs 10. Analysis of irradiation disordering data Energy Technology Data Exchange (ETDEWEB) Schwartz, D L [Jet Propulsion Lab., Pasadena, CA (USA)]; Schwartz, D M 1978-08-01 The analysis of irradiation disordering data in ordered Ni3Mn is discussed. An analytical expression relating observed irradiation induced magnetic changes in this material to the number of alternating site <110> replacements is derived. This expression is then employed to analyze previous experimental results. This analysis gives results which appear to be consistent with a previous Monte Carlo data analysis and indicates that the expected number of alternating site <110> replacements is 66.4 per 450 eV recoil. 11. Significance of irradiation of blood International Nuclear Information System (INIS) Sekine, Hiroshi; Gotoh, Eisuke; Mochizuki, Sachio 1992-01-01 Many reports of fatal GVHD occurring in non-immunocompromised patients after blood transfusion have been published in Japan. One explanation is that transfused lymphocytes were stimulated and attacked the recipient's organs, which were recognized as HLA-incompatible. That is the so-called 'one-way matching'.
To reduce the risk of post-transfusion GVHD, one of the most convenient methods is to irradiate the donated blood at an appropriate dose for inactivation of lymphocytes. Because no one knows about the late effects of irradiated blood, it is necessary to carry out prospective safety control. (author) International Nuclear Information System (INIS) Katusin-Raxem, B.; Mihaljievic, B.; Razem, D. 2002-01-01 13. Safeguards approach for irradiated fuel International Nuclear Information System (INIS) Harms, N.L.; Roberts, F.P. 1987-03-01 IAEA verification of irradiated fuel has become more complicated because of the introduction of variations in what was once presumed to be a straightforward flow of fuel from reactors to reprocessing plants, with subsequent dissolution. These variations include fuel element disassembly and reassembly, rod consolidation, double-tiering of fuel assemblies in reactor pools, long term wet and dry storage, and use of fuel element containers. This paper reviews future patterns for the transfer and storage of irradiated LWR fuel and discusses appropriate safeguards approaches for at-reactor storage, reprocessing plant headend, independent wet storage, and independent dry storage facilities. International Nuclear Information System (INIS) 1986-10-01 The meeting carried out by the Group was attended by invited specialists on legislation, marketing, consumer attitudes and industry interested in the application of food irradiation. The major objectives of the meeting were to identify barriers and constraints to trade in irradiated food and to recommend actions to be carried out by the Group to promote trade in such foods. The report of the meeting and 9 selected background papers used at the meeting are presented. A separate abstract was prepared for each of these papers. 15. Irradiation plant for flowable material International Nuclear Information System (INIS) Bosshard, E. 1975-01-01 The irradiation plant can be used to treat various flowable materials including effluent or sewage sludge. The plant contains a concrete vessel in which a partition is mounted to form two coaxial irradiation chambers through which the flowable material can be circulated by means of an impeller. The partition can be formed to house tubes of radiation sources and to provide a venturi-like member about the impeller. The operation of the impeller is reversed periodically to assure movement of both heavy and light particles in the flow. (U.S.) 16. Carrier mobilities in irradiated silicon CERN Document Server Brodbeck, T J; Sloan, T; Fretwurst, E; Kuhnke, M; Lindström, G 2002-01-01 Using laser pulses with <1 ns duration and a 500 MHz digital oscilloscope, the current pulses were investigated for p-i-n Si diodes irradiated by neutrons up to 1 MeV equivalent fluences of 2.4 x 10^14 n/cm^2. Fitting the current pulse duration as a function of bias voltage allowed measurement of mobility and saturation velocity for both electrons and holes. No significant changes in these parameters were observed up to the maximum fluence. There are indications of a non-uniform space charge distribution in heavily irradiated diodes. 17. Irradiation of metastatic carcinoma parotid International Nuclear Information System (INIS) Jack, G.A. 1981-01-01 Acinic cell carcinomas of the parotid should be considered distinct malignancies despite descriptions of low-grade malignant potential and innocuous histologic patterns. Benign-appearing tumors frequently have a clinically malignant course.
Blood-borne metastases may occur early despite gross and microscopic innocence. Indolent growth may be a characteristic of local disease, which may then be approached with less than radical parotidectomy and sacrifice of the facial nerve. These tumors prove to be radiosensitive. More aggressive postoperative irradiation and palliative irradiation are recommended. Two cases of successful palliation of spinal metastases are presented as examples of the radiosensitivity of this tumor.
Energy Technology Data Exchange (ETDEWEB) Rodrigues Junior, Ary de Araujo (ed.) 2018-04-01 In this book you will learn how the gamma irradiators and accelerators for industry and research applications work and all the radioprotection safety items that should be followed when operating them. This book was written mainly for those who intend to become Radiation Safety Officers (RSO) responsible for the operation of gamma irradiators, but it is also useful to business people who plan to embark on this area or for those who are simply curious. This book is only an introduction to the subject and is far from being exhaustive. (author)
19. Recent developments in food irradiation International Nuclear Information System (INIS) Loaharanu, P. 1985-01-01 Nowadays there is growing interest by the food industry, government and consumers in the use of food irradiation to kill harmful insects, prevent diseases and keep food fresher longer. This interest has been stimulated by growing public concern over chemicals used in foods. While food irradiation technologies have been around for more than 50 years, only recently have they become cost effective and gained prominent attention as potentially safer ways of protecting food products and public health. This paper looks at recent developments in food irradiation processing and discusses the issues that lie ahead. (author)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5436112284660339, "perplexity": 10944.858881467242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00551.warc.gz"}
http://mathhelpforum.com/differential-geometry/143074-divergence.html
# Math Help - divergence

1. ## divergence

prove that the series 1+1/4+1/7+1/10... diverges.

2. Originally Posted by alexandrabel90 prove that the series 1+1/4+1/7+1/10... diverges. Have you written the formula that produces this series? $\frac{1}{1}+\frac{1}{4}+\frac{1}{7}+...=\sum_{n=1}^{\infty}\frac{1}{3n-2}$

3. Originally Posted by dwsmith Have you written the formula that produces this series? $\frac{1}{1}+\frac{1}{4}+\frac{1}{7}+...=\sum_{n=1}^{\infty}\frac{1}{3n-2}$ I'll just add that it should now be clear that an appropriate test can be used.
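To make the hint concrete, here is one way to finish the argument with a direct comparison test (a limit comparison with the harmonic series works equally well). For every $n \ge 1$ we have $3n-2 \le 3n$, so $\frac{1}{3n-2} \ge \frac{1}{3n}$. Hence the partial sums satisfy

$\sum_{n=1}^{N}\frac{1}{3n-2} \ge \sum_{n=1}^{N}\frac{1}{3n} = \frac{1}{3}\sum_{n=1}^{N}\frac{1}{n} \to \infty \quad (N \to \infty),$

because the harmonic series $\sum 1/n$ diverges. Therefore $\sum_{n=1}^{\infty}\frac{1}{3n-2}$ diverges.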
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9187747836112976, "perplexity": 2722.0494386540613}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
https://worldwidescience.org/topicpages/m/maximum+tsunami+elevations.html
#### Sample records for maximum tsunami elevations

1. Maximum run-up behavior of tsunamis under non-zero initial velocity condition Directory of Open Access Journals (Sweden) Baran AYDIN 2018-03-01 Full Text Available The tsunami run-up problem is solved non-linearly under the most general initial conditions, that is, for realistic initial waveforms such as N-waves, as well as standard initial waveforms such as solitary waves, in the presence of initial velocity. An initial-boundary value problem governed by the non-linear shallow-water wave equations is solved analytically utilizing the classical separation of variables technique, which proved to be not only a fast but also an accurate analytical approach for this type of problem. The results provide important qualitative information on maximum tsunami run-up. We observed that, although the calculated maximum run-ups increase significantly, going as high as double that of the zero-velocity case, initial waves having non-zero fluid velocity exhibit the same run-up behavior as waves without initial velocity, for all wave types considered in this study.
2. Tsunamis Science.gov (United States) ... busy after a disaster. Use text messages or social media to communicate with family and friends. Shareables Tsunami ... Power Plants Pandemic Power Outages Radiological Dispersion Device Severe ...
3. Community disruptions and business costs for distant tsunami evacuations using maximum versus scenario-based zones Science.gov (United States) Wood, Nathan J.; Wilson, Rick I.; Ratliff, Jamie L.; Peters, Jeff; MacMullan, Ed; Krebs, Tessa; Shoaf, Kimberley; Miller, Kevin 2017-01-01 Well-executed evacuations are key to minimizing loss of life from tsunamis, yet they also disrupt communities and business productivity in the process. Most coastal communities implement evacuations based on a previously delineated maximum-inundation zone that integrates zones from multiple tsunami sources. To support consistent evacuation planning that protects lives but attempts to minimize community disruptions, we explore the implications of scenario-based evacuation procedures and use the California (USA) coastline as our case study. We focus on the land in coastal communities that is in maximum-evacuation zones, but is not expected to be flooded by a tsunami generated by a Chilean earthquake scenario. Results suggest that a scenario-based evacuation could greatly reduce the number of residents and employees that would be advised to evacuate for 24–36 h (178,646 and 159,271 fewer individuals, respectively) and these reductions are concentrated primarily in three counties for this scenario. Private evacuation spending is estimated to be greater than public expenditures for operating shelters in the area of potential over-evacuations ($13 million compared to $1 million for a 1.5-day evacuation). Short-term disruption costs for businesses in the area of potential over-evacuation are approximately $122 million for a 1.5-day evacuation, with one-third of this cost associated with manufacturing, suggesting that some disruption costs may be recouped over time with increased short-term production. There are many businesses and organizations in this area that contain individuals with limited mobility or access and functional needs that may have substantial evacuation challenges.
This study demonstrates and discusses the difficulties of tsunami-evacuation decision-making for relatively small to moderate events faced by emergency managers, not only in California but in coastal communities throughout the world.
4. An evaluation of onshore digital elevation models for tsunami inundation modelling Science.gov (United States) Griffin, J.; Latief, H.; Kongko, W.; Harig, S.; Horspool, N.; Hanung, R.; Rojali, A.; Maher, N.; Fountain, L.; Fuchs, A.; Hossen, J.; Upi, S.; Dewanto, S. E.; Cummins, P. R. 2012-12-01 Tsunami inundation models provide fundamental information about coastal areas that may be inundated in the event of a tsunami along with additional parameters such as flow depth and velocity. This can inform disaster management activities including evacuation planning, impact and risk assessment and coastal engineering. A fundamental input to tsunami inundation models is a digital elevation model (DEM). Onshore DEMs vary widely in resolution, accuracy, availability and cost. A proper assessment of how the accuracy and resolution of DEMs translates into uncertainties in modelled inundation is needed to ensure results are appropriately interpreted and used. This assessment can in turn inform data acquisition strategies depending on the purpose of the inundation model. For example, lower accuracy elevation data may give inundation results that are sufficiently accurate to plan a community's evacuation route but not sufficient to inform engineering of vertical evacuation shelters. A sensitivity study is undertaken to assess the utility of different available onshore digital elevation models for tsunami inundation modelling. We compare airborne interferometric synthetic aperture radar (IFSAR), ASTER and SRTM against high-resolution historical tsunami run-up data. Large vertical errors (> 10 m) and poor resolution of the coastline in the ASTER and SRTM elevation models cause modelled inundation to be much smaller compared with models using better data and with observations. Therefore we recommend that ASTER and SRTM should not be used for modelling tsunami inundation in order to determine tsunami extent or any other measure of onshore tsunami hazard. We suggest that for certain disaster management applications where the important factor is the extent of inundation, such as evacuation planning, airborne IFSAR provides a good compromise between cost and accuracy; however the representation of flow parameters such as depth and velocity is not sufficient to inform detailed
5. Tsunamis Science.gov (United States) ... created by an underwater disturbance. Causes include earthquakes, landslides, volcanic eruptions, or meteorites--chunks of rock from space that strike the surface of Earth. A tsunami can move hundreds of miles per ...
6. Soil nematodes show a mid-elevation diversity maximum and elevational zonation on Mt. Norikura, Japan. Science.gov (United States) Dong, Ke; Moroenyane, Itumeleng; Tripathi, Binu; Kerfahi, Dorsaf; Takahashi, Koichi; Yamamoto, Naomichi; An, Choa; Cho, Hyunjun; Adams, Jonathan 2017-06-08 Little is known about how nematode ecology differs across elevational gradients. We investigated the soil nematode community along a ~2,200 m elevational range on Mt. Norikura, Japan, by sequencing the 18S rRNA gene. As with many other groups of organisms, nematode diversity showed a high correlation with elevation, and a maximum at mid-elevations.
While elevation itself, in the context of the mid domain effect, could predict the observed unimodal pattern of soil nematode communities along the elevational gradient, mean annual temperature and soil total nitrogen concentration were the best predictors of diversity. We also found that nematode community composition showed strong elevational zonation, indicating that a high degree of ecological specialization may exist in nematodes in relation to elevation-related environmental gradients. Certain nematode OTUs had ranges extending across all elevations, and these generalized OTUs made up a greater proportion of the community at high elevations - such that high elevation nematode OTUs had broader elevational ranges on average, providing an example consistent with Rapoport's elevational hypothesis. This study reveals the potential for using sequencing methods to investigate elevational gradients of small soil organisms, providing a method for rapid investigation of patterns without specialized knowledge in taxonomic identification.
7. Are inundation limit and maximum extent of sand useful for differentiating tsunamis and storms? An example from sediment transport simulations on the Sendai Plain, Japan Science.gov (United States) Watanabe, Masashi; Goto, Kazuhisa; Bricker, Jeremy D.; Imamura, Fumihiko 2018-02-01 We examined the quantitative difference in the distribution of tsunami and storm deposits based on numerical simulations of inundation and sediment transport due to tsunami and storm events on the Sendai Plain, Japan. The calculated distance from the shoreline inundated by the 2011 Tohoku-oki tsunami was smaller than that inundated by storm surges from hypothetical typhoon events. Previous studies have assumed that deposits observed farther inland than the possible inundation limit of storm waves and storm surge were tsunami deposits. However, confirming only the extent of inundation is insufficient to distinguish tsunami and storm deposits, because the inundation limit of storm surges may be farther inland than that of tsunamis in the case of gently sloping coastal topography such as on the Sendai Plain. In other locations, where coastal topography is steep, the maximum inland inundation extent of storm surges may be only several hundred meters, so marine-sourced deposits that are distributed several km inland can be identified as tsunami deposits by default. Over both gentle and steep slopes, another difference between tsunami and storm deposits is the total volume deposited, as flow speed over land during a tsunami is faster than during a storm surge. Therefore, the total deposit volume could also be a useful proxy to differentiate tsunami and storm deposits.
8. Impact of earthquake source complexity and land elevation data resolution on tsunami hazard assessment and fatality estimation Science.gov (United States) Muhammad, Ario; Goda, Katsuichiro 2018-03-01 This study investigates the impact of model complexity in source characterization and digital elevation model (DEM) resolution on the accuracy of tsunami hazard assessment and fatality estimation through a case study in Padang, Indonesia. Two types of earthquake source models, i.e. complex and uniform slip models, are adopted by considering three resolutions of DEMs, i.e. 150 m, 50 m, and 10 m.
For each of the three grid resolutions, 300 complex source models are generated using new statistical prediction models of earthquake source parameters developed from extensive finite-fault models of past subduction earthquakes, whilst 100 uniform slip models are constructed with variable fault geometry without slip heterogeneity. The results highlight that significant changes to tsunami hazard and fatality estimates are observed with regard to earthquake source complexity and grid resolution. Coarse resolution (i.e. 150 m) leads to inaccurate tsunami hazard prediction and fatality estimation, whilst 50-m and 10-m resolutions produce similar results. However, velocity and momentum flux are sensitive to the grid resolution and hence, at least 10-m grid resolution needs to be implemented when considering flow-based parameters for tsunami hazard and risk assessments. In addition, the results indicate that the tsunami hazard parameters and fatality number are more sensitive to the complexity of earthquake source characterization than the grid resolution. Thus, the uniform models are not recommended for probabilistic tsunami hazard and risk assessments. Finally, the findings confirm that uncertainties of tsunami hazard level and fatality in terms of depth, velocity and momentum flux can be captured and visualized through the complex source modeling approach. From tsunami risk management perspectives, this indeed creates big data, which are useful for making effective and robust decisions. 9. Coastal Digital Elevation Models (DEMs) for tsunami hazard assessment on the French coasts Science.gov (United States) Maspataud, Aurélie; Biscara, Laurie; Hébert, Hélène; Schmitt, Thierry; Créach, Ronan 2015-04-01 Building precise and up-to-date coastal DEMs is a prerequisite for accurate modeling and forecasting of hydrodynamic processes at local scale. Marine flooding, originating from tsunamis, storm surges or waves, is one of them. Some high resolution DEMs are being generated for multiple coast configurations (gulf, embayment, strait, estuary, harbor approaches, low-lying areas…) along French Atlantic and Channel coasts. This work is undertaken within the framework of the TANDEM project (Tsunamis in the Atlantic and the English ChaNnel: Definition of the Effects through numerical Modeling) (2014-2017). DEMs boundaries were defined considering the vicinity of French civil nuclear facilities, site effects considerations and potential tsunamigenic sources. Those were identified from available historical observations. Seamless integrated topographic and bathymetric coastal DEMs will be used by institutions taking part in the study to simulate expected wave height at regional and local scale on the French coasts, for a set of defined scenarii. The main tasks were (1) the development of a new capacity of production of DEM, (2) aiming at the release of high resolution and precision digital field models referred to vertical reference frameworks, that require (3) horizontal and vertical datum conversions (all source elevation data need to be transformed to a common datum), on the basis of (4) the building of (national and/or local) conversion grids of datum relationships based on known measurements. Challenges in coastal DEMs development deal with good practices throughout model development that can help minimizing uncertainties. 
This is particularly true as scattered elevation data with variable density, from multiple sources (national hydrographic services, state and local government agencies, research organizations and private engineering companies) and from many different types (paper fieldsheets to be digitized, single beam echo sounder, multibeam sonar, airborne laser
10. TSUNAMI HAZARD IN NORTHERN VENEZUELA Directory of Open Access Journals (Sweden) B. Theilen-Willige 2006-01-01 Full Text Available Based on LANDSAT ETM and Digital Elevation Model (DEM) data derived from the Shuttle Radar Topography Mission (SRTM, 2000), the coastal areas of Northern Venezuela were investigated in order to detect traces of earlier tsunami events. Digital image processing methods used to enhance LANDSAT ETM imagery and to produce morphometric maps (such as hillshade, slope, and minimum and maximum curvature maps) based on the SRTM DEM data contribute to the detection of morphologic traces that might be related to catastrophic tsunami events. These maps combined with various geodata such as seismotectonic data in a GIS environment allow the delineation of coastal regions with potential tsunami risk. The LANDSAT ETM imagery merged with digitally processed and enhanced SRTM data clearly indicates areas that might be prone to flooding in the case of catastrophic tsunami events.
11. A tsunami wave propagation analysis for the Ulchin Nuclear Power Plant considering the tsunami sources of western part of Japan International Nuclear Information System (INIS) Rhee, Hyun Me; Kim, Min Kyu; Sheen, Dong Hoon; Choi, In Kil 2013-01-01 The accident which was caused by a tsunami and the Great East-Japan earthquake in 2011 occurred at the Fukushima Nuclear Power Plant (NPP) site. It is obvious that the NPP accident could be incurred by the tsunami. Therefore a Probabilistic Tsunami Hazard Analysis (PTHA) for an NPP site should be required in Korea. The PTHA methodology is developed on the PSHA (Probabilistic Seismic Hazard Analysis) method which is performed by using various tsunami sources and their weights. In this study, the fault sources of the northwestern part of Japan were used as the tsunami sources. These fault sources were suggested by the Atomic Energy Society of Japan (AESJ). To perform the PTHA, the calculations of maximum and minimum wave elevations from the results of tsunami simulations are required. Thus, in this study, tsunami wave propagation analyses were performed for developing the future study of the PTHA.
12. Significant Tsunami Events Science.gov (United States) Dunbar, P. K.; Furtney, M.; McLean, S. J.; Sweeney, A. D. 2014-12-01 Tsunamis have inflicted death and destruction on the coastlines of the world throughout history. The occurrence of tsunamis and the resulting effects have been collected and studied as far back as the second millennium B.C. The knowledge gained from cataloging and examining these events has led to significant changes in our understanding of tsunamis, tsunami sources, and methods to mitigate the effects of tsunamis. The most significant, not surprisingly, are often the most devastating, such as the 2011 Tohoku, Japan earthquake and tsunami. The goal of this poster is to give a brief overview of the occurrence of tsunamis and then focus specifically on several significant tsunamis. There are various criteria to determine the most significant tsunamis: the number of deaths, amount of damage, maximum runup height, whether it had a major impact on tsunami science or policy, etc.
As a result, descriptions will include some of the most costly (2011 Tohoku, Japan), the most deadly (2004 Sumatra, 1883 Krakatau), and the highest runup ever observed (1958 Lituya Bay, Alaska). The discovery of the Cascadia subduction zone as the source of the 1700 Japanese "Orphan" tsunami and a future tsunami threat to the U.S. northwest coast, contributed to the decision to form the U.S. National Tsunami Hazard Mitigation Program. The great Lisbon earthquake of 1755 marked the beginning of the modern era of seismology. Knowledge gained from the 1964 Alaska earthquake and tsunami helped confirm the theory of plate tectonics. The 1946 Alaska, 1952 Kuril Islands, 1960 Chile, 1964 Alaska, and the 2004 Banda Aceh, tsunamis all resulted in warning centers or systems being established.The data descriptions on this poster were extracted from NOAA's National Geophysical Data Center (NGDC) global historical tsunami database. Additional information about these tsunamis, as well as water level data can be found by accessing the NGDC website www.ngdc.noaa.gov/hazard/ 13. Liquid films on shake flask walls explain increasing maximum oxygen transfer capacities with elevating viscosity. Science.gov (United States) Giese, Heiner; Azizan, Amizon; Kümmel, Anne; Liao, Anping; Peter, Cyril P; Fonseca, João A; Hermann, Robert; Duarte, Tiago M; Büchs, Jochen 2014-02-01 In biotechnological screening and production, oxygen supply is a crucial parameter. Even though oxygen transfer is well documented for viscous cultivations in stirred tanks, little is known about the gas/liquid oxygen transfer in shake flask cultures that become increasingly viscous during cultivation. Especially the oxygen transfer into the liquid film, adhering on the shake flask wall, has not yet been described for such cultivations. In this study, the oxygen transfer of chemical and microbial model experiments was measured and the suitability of the widely applied film theory of Higbie was studied. With numerical simulations of Fick's law of diffusion, it was demonstrated that Higbie's film theory does not apply for cultivations which occur at viscosities up to 10 mPa s. For the first time, it was experimentally shown that the maximum oxygen transfer capacity OTRmax increases in shake flasks when viscosity is increased from 1 to 10 mPa s, leading to an improved oxygen supply for microorganisms. Additionally, the OTRmax does not significantly undermatch the OTRmax at waterlike viscosities, even at elevated viscosities of up to 80 mPa s. In this range, a shake flask is a somehow self-regulating system with respect to oxygen supply. This is in contrary to stirred tanks, where the oxygen supply is steadily reduced to only 5% at 80 mPa s. Since, the liquid film formation at shake flask walls inherently promotes the oxygen supply at moderate and at elevated viscosities, these results have significant implications for scale-up. © 2013 Wiley Periodicals, Inc. 14. Tsunami deposits Energy Technology Data Exchange (ETDEWEB) NONE 2013-08-15 The NSC (the Nuclear Safety Commission of Japan) demand to survey on tsunami deposits by use of various technical methods (Dec. 2011), because tsunami deposits have useful information on tsunami activity, tsunami source etc. However, there are no guidelines on tsunami deposit survey in JAPAN. 
In order to prepare the guideline of tsunami deposits survey and evaluation and to develop the method of tsunami source estimation on the basis of tsunami deposits, JNES carried out the following issues; (1) organizing information of paleoseismological record and tsunami deposit by literature research, (2) field survey on tsunami deposit, and (3) designing the analysis code of sediment transport due to tsunami. As to (1), we organize the information gained about tsunami deposits in the database. As to (2), we consolidate methods for surveying and identifying tsunami deposits in the lake based on results of the field survey in Fukui Pref., carried out by JNES. In addition, as to (3), we design the experimental instrument for hydraulic experiment on sediment transport and sedimentation due to tsunamis. These results are reflected in the guideline on the tsunami deposits survey and evaluation. (author) 15. Tsunami deposits International Nuclear Information System (INIS) 2013-01-01 The NSC (the Nuclear Safety Commission of Japan) demand to survey on tsunami deposits by use of various technical methods (Dec. 2011), because tsunami deposits have useful information on tsunami activity, tsunami source etc. However, there are no guidelines on tsunami deposit survey in JAPAN. In order to prepare the guideline of tsunami deposits survey and evaluation and to develop the method of tsunami source estimation on the basis of tsunami deposits, JNES carried out the following issues; (1) organizing information of paleoseismological record and tsunami deposit by literature research, (2) field survey on tsunami deposit, and (3) designing the analysis code of sediment transport due to tsunami. As to (1), we organize the information gained about tsunami deposits in the database. As to (2), we consolidate methods for surveying and identifying tsunami deposits in the lake based on results of the field survey in Fukui Pref., carried out by JNES. In addition, as to (3), we design the experimental instrument for hydraulic experiment on sediment transport and sedimentation due to tsunamis. These results are reflected in the guideline on the tsunami deposits survey and evaluation. (author) 16. Tsunami hazard in the Caribbean: Regional exposure derived from credible worst case scenarios Science.gov (United States) Harbitz, C. B.; Glimsdal, S.; Bazin, S.; Zamora, N.; Løvholt, F.; Bungum, H.; Smebye, H.; Gauer, P.; Kjekstad, O. 2012-04-01 The present study documents a high tsunami hazard in the Caribbean region, with several thousands of lives lost in tsunamis and associated earthquakes since the XIXth century. Since then, the coastal population of the Caribbean and the Central West Atlantic region has grown significantly and is still growing. Understanding this hazard is therefore essential for the development of efficient mitigation measures. To this end, we report a regional tsunami exposure assessment based on potential and credible seismic and non-seismic tsunamigenic sources. Regional tsunami databases have been compiled and reviewed, and on this basis five main scenarios have been selected to estimate the exposure. The scenarios comprise two Mw8 earthquake tsunamis (north of Hispaniola and east of Lesser Antilles), two subaerial/submarine volcano flank collapse tsunamis (Montserrat and Saint Lucia), and one tsunami resulting from a landslide on the flanks of the Kick'em Jenny submarine volcano (north of Grenada). 
Offshore tsunami water surface elevations as well as maximum water level distributions along the shorelines are computed and discussed for each of the scenarios. The number of exposed people has been estimated in each case, together with a summary of the tsunami exposure for the earthquake and the landslide tsunami scenarios. For the earthquake scenarios, the highest tsunami exposure relative to the population is found for Guadeloupe (6.5%) and Antigua (7.5%), while Saint Lucia (4.5%) and Antigua (5%) have been found to have the highest tsunami exposure relative to the population for the landslide scenarios. Such high exposure levels clearly warrant more attention on dedicated mitigation measures in the Caribbean region.
17. 2004 Sumatra Tsunami Directory of Open Access Journals (Sweden) Vongvisessomjai, S. 2005-09-01 Full Text Available A catastrophic tsunami on December 26, 2004 caused devastation in the coastal region of six southern provinces of Thailand on the Andaman Sea coast. This paper summarizes the characteristics of tsunamis with the aim of informing and warning the public and reducing future casualties and damage. The first part is a review of the records of past catastrophic tsunamis, namely those in Chile in 1960, Alaska in 1964, and Flores, Java, Indonesia, in 1992, and the lessons drawn from these tsunamis. An analysis and the impact of the 2004 Sumatra tsunami is then presented and remedial measures recommended. Results of this study are as follows: Firstly, the 2004 Sumatra tsunami ranked fourth in terms of earthquake magnitude (9.0 M) after those in 1960 in Chile (9.5 M), 1899 in Alaska (9.2 M) and 1964 in Alaska (9.1 M), and ranked first in terms of damage and casualties. It was most destructive when breaking in shallow water nearshore. Secondly, the best alleviation measures are (1) to set up a reliable system for providing warning at the time of an earthquake in order to save lives and reduce damage and (2) to establish a hazard map and implement land-use zoning in the devastated areas, according to the following principles:
- Large hotels located at an elevation of not less than 10 m above mean sea level (MSL)
- Medium hotels located at an elevation of not less than 6 m above MSL
- Small hotels located at an elevation below 6 m MSL, but with the first floor elevated on poles to allow passage of a tsunami wave
- Set-back distances from the shoreline established for various developments
- Provision of shelters and evacuation directions
Finally, public education is an essential part of preparedness.
18. Scenario-based tsunami risk assessment using a static flooding approach and high-resolution digital elevation data: An example from Muscat in Oman Science.gov (United States) Schneider, Bastian; Hoffmann, Gösta; Reicherter, Klaus 2016-04-01 Knowledge of tsunami risk and vulnerability is essential to establish a well-adapted Multi Hazard Early Warning System, land-use planning and emergency management. As the tsunami risk for the coastline of Oman is still under discussion and remains enigmatic, various scenarios based on historical tsunamis were created. The suggested inundation and run-up heights were projected onto the modern infrastructural setting of the Muscat Capital Area. Furthermore, possible impacts of the worst-case tsunami event for Muscat are discussed. The approved Papathoma Tsunami Vulnerability Assessment Model was used to model the structural vulnerability of the infrastructure for a 2 m tsunami scenario, depicting the 1945 tsunami, and a 5 m tsunami in Muscat.
Considering structural vulnerability, the results suggest a minor tsunami risk for the 2 m tsunami scenario as the flooding is mainly confined to beaches and wadis. Especially traditional brick buildings, still predominant in numerous rural suburbs, and a prevalently coast-parallel road network lead to an increased tsunami risk. In contrast, the 5 m tsunami scenario reveals extensively inundated areas and with up to 48% of the buildings flooded, and therefore consequently a significantly higher tsunami risk. We expect up to 60000 damaged buildings and up to 380000 residents directly affected in the Muscat Capital Area, accompanied with a significant loss of life and damage to vital infrastructure. The rapid urbanization processes in the Muscat Capital Area, predominantly in areas along the coast, in combination with infrastructural, demographic and economic growth will additionally increase the tsunami risk and therefore emphasizes the importance of tsunami risk assessment in Oman. 19. Tsunami hazard assessment in the Hudson River Estuary based on dynamic tsunami-tide simulations Science.gov (United States) Shelby, Michael; Grilli, Stéphan T.; Grilli, Annette R. 2016-12-01 This work is part of a tsunami inundation mapping activity carried out along the US East Coast since 2010, under the auspice of the National Tsunami Hazard Mitigation program (NTHMP). The US East Coast features two main estuaries with significant tidal forcing, which are bordered by numerous critical facilities (power plants, major harbors,...) as well as densely built low-level areas: Chesapeake Bay and the Hudson River Estuary (HRE). HRE is the object of this work, with specific focus on assessing tsunami hazard in Manhattan, the Hudson and East River areas. In the NTHMP work, inundation maps are computed as envelopes of maximum surface elevation along the coast and inland, by simulating the impact of selected probable maximum tsunamis (PMT) in the Atlantic ocean margin and basin. At present, such simulations assume a static reference level near shore equal to the local mean high water (MHW) level. Here, instead we simulate maximum inundation in the HRE resulting from dynamic interactions between the incident PMTs and a tide, which is calibrated to achieve MHW at its maximum level. To identify conditions leading to maximum tsunami inundation, each PMT is simulated for four different phases of the tide and results are compared to those obtained for a static reference level. We first separately simulate the tide and the three PMTs that were found to be most significant for the HRE. These are caused by: (1) a flank collapse of the Cumbre Vieja Volcano (CVV) in the Canary Islands (with a 80 km3 volume representing the most likely extreme scenario); (2) an M9 coseismic source in the Puerto Rico Trench (PRT); and (3) a large submarine mass failure (SMF) in the Hudson River canyon of parameters similar to the 165 km3 historical Currituck slide, which is used as a local proxy for the maximum possible SMF. Simulations are performed with the nonlinear and dispersive long wave model FUNWAVE-TVD, in a series of nested grids of increasing resolution towards the coast, by one 20. Numerical Study on the 1682 Tainan Historic Tsunami Event Science.gov (United States) Tsai, Y.; Wu, T.; Lee, C.; KO, L.; Chuang, M. 2013-12-01 We intend to reconstruct the tsunami source of the 1682/1782 tsunami event in Tainan, Taiwan, based on the numerical method. 
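The Muscat assessment in record 18 above applies the Papathoma Tsunami Vulnerability Assessment (PTVA) idea: each building receives a weighted combination of structural and exposure attributes and is then ranked by relative vulnerability for a given scenario. The sketch below is only a schematic illustration of that general approach under invented assumptions; the attribute names, scores, weights and buildings are made up here and are not the published PTVA scheme or the Muscat data.

```python
# Schematic, simplified PTVA-style relative vulnerability index.
# All attribute names, scores and weights are illustrative placeholders.

# Each building is described by attributes scored on a -1..+1 scale,
# where higher scores mean higher structural vulnerability.
buildings = {
    "B1": {"material": 1.0, "rows_from_sea": 0.5, "ground_floor_open": -0.5, "depth_m": 1.8},
    "B2": {"material": -1.0, "rows_from_sea": -0.5, "ground_floor_open": 1.0, "depth_m": 0.6},
    "B3": {"material": 0.5, "rows_from_sea": 1.0, "ground_floor_open": 0.0, "depth_m": 3.1},
}

# Illustrative weights for the structural attributes (sum to 1).
weights = {"material": 0.5, "rows_from_sea": 0.3, "ground_floor_open": 0.2}

def relative_vulnerability(attrs, scenario_max_depth):
    """Weighted structural score scaled by the relative flow depth at the building."""
    structural = sum(weights[k] * attrs[k] for k in weights)
    exposure = min(attrs["depth_m"] / scenario_max_depth, 1.0)  # 0..1 water-depth factor
    return structural * exposure

scenario_max_depth = 5.0  # e.g. a 5 m scenario like the one discussed above
scores = {b: relative_vulnerability(a, scenario_max_depth) for b, a in buildings.items()}
for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: relative vulnerability index = {score:+.2f}")
```

The point of the example is only the structure of the calculation (weighted attributes times a scenario-dependent exposure factor); a real assessment would use the calibrated PTVA attributes and weights and building inventories from the study area.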
According to Soloviev and Go (1974), a strong earthquake shook Tainan and caused severe damage, followed by tsunami waves. Almost the whole island was flooded by tsunami for over 120 km. More than 40,000 inhabitants were killed. Forts Zealand and Pigchingi were washed away. The 1682/1782 event was regarded by Bryant (2001) as having the highest death toll in the Pacific Ocean. However, the year is ambiguous (1682 or 1782), and the death toll is doubtful. We tend to believe that this event happened in 1682 based on the evolution of the harbor name. If the 1682 tsunami event does exist, the hazard mitigation plan has to be modified, and restoring the 1682 event becomes important. In this study, we adopted the tsunami reverse tracking method (TRTM) to examine the possible tsunami sources. A series of numerical simulations were carried out by using COMCOT (Cornell Multi-grid Coupled Tsunami model), and a nested grid with 30 m resolution was applied to the study area. According to the result of TRTM, the 1682 tsunami was most likely sourced from the north segment of the Manila Trench. From the scenario study, we concluded that the 1682 event was triggered by an Mw >= 8.8 earthquake in the north segment of the Manila Trench; a 4 m wave height was obtained in Tainan, and its inundation range agrees with historical records. If this scenario occurred again, severe damage and a high death toll would be seen in many highly populated cities, such as Tainan City, Kaohsiung City and Kenting, where the No. 3 nuclear power plant is located. Detailed results will be presented in the full paper. Figure 1. Map of Tsunami Reverse Tracking Method (TRTM) in Tainan. Black arrow indicates possible tsunami direction. The color bar denotes the magnitude of the maximum moment flux. Figure 2. Scenario result of Mw 8.8 in northern segment of Manila Trench. (Left: Initial free surface elevation
1. Tsunami risk mapping simulation for Malaysia Science.gov (United States) Teh, S.Y.; Koh, H. L.; Moh, Y.T.; De Angelis, D. L.; Jiang, J. 2011-01-01 The 26 December 2004 Andaman mega tsunami killed about a quarter of a million people worldwide. Since then several significant tsunamis have recurred in this region, including the most recent 25 October 2010 Mentawai tsunami. These tsunamis grimly remind us of the devastating destruction that a tsunami might inflict on the affected coastal communities. There is evidence that tsunamis of similar or higher magnitudes might occur again in the near future in this region. Of particular concern to Malaysia are tsunamigenic earthquakes occurring along the northern part of the Sunda Trench. Further, the Manila Trench in the South China Sea has been identified as another source of potential tsunamigenic earthquakes that might trigger large tsunamis. To protect coastal communities that might be affected by future tsunamis, an effective early warning system must be properly installed and maintained to provide adequate time for residents to be evacuated from risk zones. Affected communities must be prepared and educated in advance regarding tsunami risk zones, evacuation routes as well as an effective evacuation procedure that must be taken during a tsunami occurrence. For these purposes, tsunami risk zones must be identified and classified according to the levels of risk simulated. This paper presents an analysis of tsunami simulations for the South China Sea and the Andaman Sea for the purpose of developing a tsunami risk zone classification map for Malaysia based upon simulated maximum wave heights. ©
2011 WIT Press. 2. Computed estimates of maximum temperature elevations in fetal tissues during transabdominal pulsed Doppler examinations. Science.gov (United States) Bly, S H; Vlahovich, S; Mabee, P R; Hussey, R G 1992-01-01 Measured characteristics of ultrasonic fields were obtained in submissions from manufacturers of diagnostic ultrasound equipment for devices operating in pulsed Doppler mode. Simple formulae were used with these data to generate upper limits to fetal temperature elevations, delta Tlim, during a transabdominal pulsed Doppler examination. A total of 236 items were analyzed, each item being a console/transducer/operating-mode/intended-use combination, for which the spatial-peak temporal-average intensity, ISPTA, was greater than 500 mW cm-2. The largest calculated delta Tlim values were approximately 1.5, 7.1 and 8.7 degrees C for first-, second- and third-trimester examinations, respectively. The vast majority of items yielded delta Tlim values which were less than 1 degree C in the first trimester. For second- and third-trimester examinations, where heating of fetal bone determines delta Tlim, most delta Tlim values were less than 4 degrees C. The clinical significance of the results is discussed. 3. Challenges in Defining Tsunami Wave Height Science.gov (United States) Stroker, K. J.; Dunbar, P. K.; Mungov, G.; Sweeney, A.; Arcos, N. P. 2017-12-01 The NOAA National Centers for Environmental Information (NCEI) and co-located World Data Service for Geophysics maintain the global tsunami archive consisting of the historical tsunami database, imagery, and raw and processed water level data. The historical tsunami database incorporates, where available, maximum wave heights for each coastal tide gauge and deep-ocean buoy that recorded a tsunami signal. These data are important because they are used for tsunami hazard assessment, model calibration, validation, and forecast and warning. There have been ongoing discussions in the tsunami community about the correct way to measure and report these wave heights. It is important to understand how these measurements might vary depending on how the data were processed and the definition of maximum wave height. On September 16, 2015, an 8.3 Mw earthquake located 48 km west of Illapel, Chile generated a tsunami that was observed all over the Pacific region. We processed the time-series water level data for 57 tide gauges that recorded this tsunami and compared the maximum wave heights determined from different definitions. We also compared the maximum wave heights from the NCEI-processed data with the heights reported by the NOAA Tsunami Warning Centers. We found that in the near field different methods of determining the maximum tsunami wave heights could result in large differences due to possible instrumental clipping. We also found that the maximum peak is usually larger than the maximum amplitude (½ peak-to-trough), but the differences for the majority of the stations were Warning Centers. Since there is currently only one field in the NCEI historical tsunami database to store the maximum tsunami wave height, NCEI will consider adding an additional field for the maximum peak measurement. 4. Tsunami Hockey Science.gov (United States) Weinstein, S.; Becker, N. C.; Wang, D.; Fryer, G. J. 2013-12-01 An important issue that vexes tsunami warning centers (TWCs) is when to cancel a tsunami warning once it is in effect. 
Emergency managers often face a variety of pressures to allow the public to resume their normal activities, but allowing coastal populations to return too quickly can put them at risk. A TWC must, therefore, exercise caution when cancelling a warning. Kim and Whitmore (2013) show that in many cases a TWC can use the decay of tsunami oscillations in a harbor to forecast when its amplitudes will fall to safe levels. This technique should prove reasonably robust for local tsunamis (those that are potentially dangerous within only 100 km of their source region) and for regional tsunamis (whose danger is limited to within 1000km of the source region) as well. For ocean-crossing destructive tsunamis such as the 11 March 2011 Tohoku tsunami, however, this technique may be inadequate. When a tsunami propagates across the ocean basin, it will encounter topographic obstacles such as seamount chains or coastlines, resulting in coherent reflections that can propagate great distances. When these reflections reach previously-impacted coastlines, they can recharge decaying tsunami oscillations and make them hazardous again. Warning center scientists should forecast sea-level records for 24 hours beyond the initial tsunami arrival in order to observe any potential reflections that may pose a hazard. Animations are a convenient way to visualize reflections and gain a broad geographic overview of their impacts. The Pacific Tsunami Warning Center has developed tools based on tsunami simulations using the RIFT tsunami forecast model. RIFT is a linear, parallelized numerical tsunami propagation model that runs very efficiently on a multi-CPU system (Wang et al, 2012). It can simulate 30-hours of tsunami wave propagation in the Pacific Ocean at 4 arc minute resolution in approximately 6 minutes of real time on a 12-CPU system. Constructing a 30-hour animation using 1 5. Tsunami hazard Energy Technology Data Exchange (ETDEWEB) NONE 2013-08-15 Tohoku Earthquake Tsunami on 11 March, 2011 has led the Fukushima Daiichi nuclear power plant to a serious accident, which highlighted a variety of technical issues such as a very low design tsunami height and insufficient preparations in case a tsunami exceeding the design tsunami height. Lessons such as to take measures to be able to maintain the important safety features of the facility for tsunamis exceeding design height and to implement risk management utilizing Probabilistic Safety Assessment are shown. In order to implement the safety assessment on nuclear power plants across Japan accordingly to the back-fit rule, Nuclear Regulatory Commission will promulgate/execute the New Safety Design Criteria in July 2013. JNES has positioned the 'enhancement of probabilistic tsunami hazard assessment' as highest priority issue and implemented in order to support technically the Nuclear Regulatory Authority in formulating the new Safety Design Criteria. Findings of the research had reflected in the 'Technical Review Guidelines for Assessing Design Tsunami Height based on tsunami hazards'. (author) 6. Tsunami hazard International Nuclear Information System (INIS) 2013-01-01 Tohoku Earthquake Tsunami on 11 March, 2011 has led the Fukushima Daiichi nuclear power plant to a serious accident, which highlighted a variety of technical issues such as a very low design tsunami height and insufficient preparations in case a tsunami exceeding the design tsunami height. 
Lessons such as to take measures to be able to maintain the important safety features of the facility for tsunamis exceeding design height and to implement risk management utilizing Probabilistic Safety Assessment are shown. In order to implement the safety assessment on nuclear power plants across Japan accordingly to the back-fit rule, Nuclear Regulatory Commission will promulgate/execute the New Safety Design Criteria in July 2013. JNES has positioned the 'enhancement of probabilistic tsunami hazard assessment' as highest priority issue and implemented in order to support technically the Nuclear Regulatory Authority in formulating the new Safety Design Criteria. Findings of the research had reflected in the 'Technical Review Guidelines for Assessing Design Tsunami Height based on tsunami hazards'. (author) 7. An Evaluation of Infrastructure for Tsunami Evacuation in Padang, West Sumatra, Indonesia (Invited) Science.gov (United States) Cedillos, V.; Canney, N.; Deierlein, G.; Diposaptono, S.; Geist, E. L.; Henderson, S.; Ismail, F.; Jachowski, N.; McAdoo, B. G.; Muhari, A.; Natawidjaja, D. H.; Sieh, K. E.; Toth, J.; Tucker, B. E.; Wood, K. 2009-12-01 Padang has one of the world’s highest tsunami risks due to its high hazard, vulnerable terrain and population density. The current strategy to prepare for tsunamis in Padang is focused on developing early warning systems, planning evacuation routes, conducting evacuation drills, and raising local awareness. Although these are all necessary, they are insufficient. Padang’s proximity to the Sunda Trench and flat terrain make reaching safe ground impossible for much of the population. The natural warning in Padang - a strong earthquake that lasts over a minute - will be the first indicator of a potential tsunami. People will have about 30 minutes after the earthquake to reach safe ground. It is estimated that roughly 50,000 people in Padang will be unable to evacuate in that time. Given these conditions, other means to prepare for the expected tsunami must be developed. With this motivation, GeoHazards International and Stanford University’s Chapter of Engineers for a Sustainable World partnered with Indonesian organizations - Andalas University and Tsunami Alert Community in Padang, Laboratory for Earth Hazards, and the Ministry of Marine Affairs and Fisheries - in an effort to evaluate the need for and feasibility of tsunami evacuation infrastructure in Padang. Tsunami evacuation infrastructure can include earthquake-resistant bridges and evacuation structures that rise above the maximum tsunami water level, and can withstand the expected earthquake and tsunami forces. The choices for evacuation structures vary widely - new and existing buildings, evacuation towers, soil berms, elevated highways and pedestrian overpasses. This interdisciplinary project conducted a course at Stanford University, undertook several field investigations, and concluded that: (1) tsunami evacuation structures and bridges are essential to protect the people in Padang, (2) there is a need for a more thorough engineering-based evaluation than conducted to-date of the suitability of 8. On the characteristics of landslide tsunamis. Science.gov (United States) Løvholt, F; Pedersen, G; Harbitz, C B; Glimsdal, S; Kim, J 2015-10-28 This review presents modelling techniques and processes that govern landslide tsunami generation, with emphasis on tsunamis induced by fully submerged landslides. 
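The Padang evaluation in record 7 above rests on simple timing arithmetic: with roughly 30 minutes between the shaking and the first wave, the distance a pedestrian can cover bounds who can reach high ground or an evacuation structure. The small calculation below only illustrates that reasoning; the walking speeds and the assumed delay before people start moving are illustrative values, not figures from the Padang study.

```python
# Back-of-the-envelope evacuation reach, with illustrative parameters only.
available_min = 30.0      # time between earthquake and tsunami arrival (as cited above)
start_delay_min = 10.0    # assumed delay before people actually start moving
speeds_m_per_s = {"adult": 1.3, "elderly/child": 0.9}  # assumed walking speeds

for group, speed in speeds_m_per_s.items():
    reach_m = speed * (available_min - start_delay_min) * 60.0
    print(f"{group}: can cover about {reach_m / 1000:.1f} km before arrival")
    # Anyone whose nearest safe ground or vertical-evacuation structure lies farther
    # than this distance cannot self-evacuate in time, which is the argument for
    # building evacuation structures inside the inundation zone.
```

Under these assumptions the reach is on the order of 1-1.6 km, which is why flat, densely populated coastal strips far from high ground motivate in-zone evacuation structures.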
The analysis focuses on a set of representative examples in simplified geometries demonstrating the main kinematic landslide parameters influencing initial tsunami amplitudes and wavelengths. Scaling relations from laboratory experiments for subaerial landslide tsunamis are also briefly reviewed. It is found that the landslide acceleration determines the initial tsunami elevation for translational landslides, while the landslide velocity is more important for impulsive events such as rapid slumps and subaerial landslides. Retrogressive effects stretch the tsunami, and in certain cases produce enlarged amplitudes due to positive interference. In an example involving a deformable landslide, it is found that the landslide deformation has only a weak influence on tsunamigenesis. However, more research is needed to determine how landslide flow processes that involve strong deformation and long run-out determine tsunami generation. © 2015 The Authors. 9. Time-dependent onshore tsunami response Science.gov (United States) Apotsos, Alex; Gelfenbaum, Guy R.; Jaffe, Bruce E. 2012-01-01 While bulk measures of the onshore impact of a tsunami, including the maximum run-up elevation and inundation distance, are important for hazard planning, the temporal evolution of the onshore flow dynamics likely controls the extent of the onshore destruction and the erosion and deposition of sediment that occurs. However, the time-varying dynamics of actual tsunamis are even more difficult to measure in situ than the bulk parameters. Here, a numerical model based on the non-linear shallow water equations is used to examine the effects variations in the wave characteristics, bed slope, and bottom roughness have on the temporal evolution of the onshore flow. Model results indicate that the onshore flow dynamics vary significantly over the parameter space examined. For example, the flow dynamics over steep, smooth morphologies tend to be temporally symmetric, with similar magnitude velocities generated during the run-up and run-down phases of inundation. Conversely, on shallow, rough onshore topographies the flow dynamics tend to be temporally skewed toward the run-down phase of inundation, with the magnitude of the flow velocities during run-up and run-down being significantly different. Furthermore, for near-breaking tsunami waves inundating over steep topography, the flow velocity tends to accelerate almost instantaneously to a maximum and then decrease monotonically. Conversely, when very long waves inundate over shallow topography, the flow accelerates more slowly and can remain steady for a period of time before beginning to decelerate. These results indicate that a single set of assumptions concerning the onshore flow dynamics cannot be applied to all tsunamis, and site specific analyses may be required. 10. Tsunamis - General Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Tsunami is a Japanese word meaning harbor wave. It is a water wave or a series of waves generated by an impulsive vertical displacement of the surface of the ocean... 11. Village Level Tsunami Threat Maps for Tamil Nadu, SE Coast of India: Numerical Modeling Technique Science.gov (United States) MP, J.; Kulangara Madham Subrahmanian, D.; V, R. M. 2014-12-01 The Indian Ocean tsunami (IOT) devastated several countries of North Indian Ocean. India is one of the worst affected countries after Indonesia and Sri Lanka. In India, Tamil Nadu suffered maximum with fatalities exceeding 8,000 people. 
Historical records show that tsunamis have invaded the shores of Tamil Nadu in the past, which has made people realize that the tsunami threat looms over Tamil Nadu and that it is necessary to evolve strategies for tsunami threat management. The IOT has brought to light that tsunami inundation and runup varied within short distances and that, for tsunami disaster management, large-scale maps identifying areas that are likely to be affected by future tsunamis are required. Therefore, a threat assessment has been carried out for six villages, including Mamallapuram (also called Mahabalipuram), which is famous for its rock-cut temples, in the northern part of Tamil Nadu state of India, and threat maps categorizing the coast into areas of different degrees of threat have been prepared. The threat was assessed by numerical modeling using the TUNAMI N2 code, considering different tsunamigenic sources along the Andaman-Sumatra trench. GEBCO and C-Map data were used for bathymetry, while land elevation data were generated by an RTK-GPS survey for a distance of 1 km from the shore and from SRTM for the inland areas. The model results show that in addition to the Sumatra source which generated the IOT in 2004, earthquakes originating in Car Nicobar and North Andaman can inflict more damage. The North Andaman source can generate a massive tsunami, and an earthquake of magnitude more than Mw 9 can affect not only Tamil Nadu but also the entire southeast coast of India. The runup water level is used to demarcate the tsunami threat zones in the villages using GIS.
12. Tsunami in the Arctic Science.gov (United States) Kulikov, Evgueni; Medvedev, Igor; Ivaschenko, Alexey 2017-04-01 rate of 10^-3 per year. Additional tsunami threat might arise from rare earthquake occurrences within the continental slope of the deep-sea basin of the Arctic Ocean and near the coast of the continent, where a high probability of triggering submarine landslides exists that can generate even more dangerous tsunamis than those of seismotectonic origin. The most reliable information about the manifestation of the tsunami in the Arctic is associated with the Storegga submarine landslide, located on the continental slope of the Norwegian Sea, which collapsed 8,200 years ago. Traces of sediment left behind by the tsunami waves on the coast show that the maximum vertical tsunami runup could reach 20 meters. Factors causing the potential tsunami threat of landslides in the Russian Arctic are sedimentation processes that can be associated with the formation of the alluvial fans of the great Siberian rivers Ob, Yenisei and Lena.
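The Arctic abstract above quotes an occurrence rate on the order of $10^{-3}$ per year. As a hedged aside, if such events are idealized as a Poisson process with rate $\lambda$, the chance of at least one event during an exposure window of $T$ years is

$P(\text{at least one event in } T \text{ years}) = 1 - e^{-\lambda T},$

so with $\lambda = 10^{-3}\,\mathrm{yr^{-1}}$ this gives roughly $1 - e^{-0.05} \approx 4.9\%$ over a 50-year window and $1 - e^{-0.1} \approx 9.5\%$ over 100 years. The Poisson assumption is introduced here only for illustration; the cited study does not necessarily model recurrence this way.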
In this study, we carry out a tsunami simulation of the 2011 Tohoku-Oki Earthquake around Fukushima Daiichi (I) Nuclear Power Plant and Fukushima Daini (II) Nuclear Power Plant in Fukushima Prefecture, Japan, using several tsunami source models, and evaluate the difference in the tsunami behavior in the tsunami inundation process. The results show that for an incoming tsunami inundating an inland region, there are considerable relative differences in the maximum tsunami height and wave pressure. This suggests that there could be false information used in promoting tsunami disaster prevention measures in the case of mega-thrust earthquakes, depending on the tsunami source model. (author) 14. The effect analysis of 1741 Oshima-Oshima tsunami in the West Coast of Japan to Korea Energy Technology Data Exchange (ETDEWEB) Kim, Minkyu; Rhee, Hyunme; Choi, Inkil [Korea Atomic Energy Research institute, Daejeon (Korea, Republic of) 2013-05-15 It is very difficult to determine and assessment for tsunami hazard. For determining a tsunami risk for NPP site, a development of tsunami hazard is one of the most important. Through the tsunami hazard analysis, a tsunami return period can be determined. For the performing a tsunami hazard analysis, empirical method and numerical method should be needed. Kim et al, already developed tsunami hazard for east coast of Korea for the calculation of tsunami risk of nuclear power plant. In the case of tsunami hazard analysis, a development of tsunami catalog should be performed. In the previous research of Kim et al, the maximum wave height was assumed by the author's decision based on historical record in the annals of Chosun dynasty for evaluating the tsunami catalog. Therefore, in this study, a literature survey was performed for a quantitative measure of historical tsunami record transform to qualitative tsunami wave height for the evaluation of tsunami catalog. In this study, the 1741 tsunami was determined by using a literature review for the evaluation of tsunami hazard. The 1741 tsunami reveals a same tsunami between the historical records in Korea and Japan. The tsunami source of 1741 tsunami was not an earthquake and volcanic. Using the numerical analysis, the wave height of 1741 tsunami can be determined qualitatively. 15. The effect analysis of 1741 Oshima-Oshima tsunami in the West Coast of Japan to Korea International Nuclear Information System (INIS) Kim, Minkyu; Rhee, Hyunme; Choi, Inkil 2013-01-01 It is very difficult to determine and assessment for tsunami hazard. For determining a tsunami risk for NPP site, a development of tsunami hazard is one of the most important. Through the tsunami hazard analysis, a tsunami return period can be determined. For the performing a tsunami hazard analysis, empirical method and numerical method should be needed. Kim et al, already developed tsunami hazard for east coast of Korea for the calculation of tsunami risk of nuclear power plant. In the case of tsunami hazard analysis, a development of tsunami catalog should be performed. In the previous research of Kim et al, the maximum wave height was assumed by the author's decision based on historical record in the annals of Chosun dynasty for evaluating the tsunami catalog. Therefore, in this study, a literature survey was performed for a quantitative measure of historical tsunami record transform to qualitative tsunami wave height for the evaluation of tsunami catalog. 
In this study, the 1741 tsunami was examined by using a literature review for the evaluation of tsunami hazard. The historical records in Korea and Japan reveal the 1741 tsunami to be the same event in both countries. The source of the 1741 tsunami was neither an earthquake nor a volcanic eruption. Using numerical analysis, the wave height of the 1741 tsunami can be determined qualitatively. 16. Tsunami Deposits on Simeulue Island, Indonesia--A tale of two tsunamis Science.gov (United States) Jaffe, B. E.; Higman, B. 2007-12-01 As tsunami deposits become more widely used for evaluating tsunami risk, it has become increasingly valuable to improve the ability to interpret deposits to determine tsunami characteristics such as size and flow speed. A team of U.S. and Indonesian scientists went to Simeulue Island 125 km east of Sumatra in April 2005 to learn more about the relation between tsunami deposition and flow. Busong, on the southeast coast of Simeulue Island, was inundated twice in a three-month period by tsunamis. The 26 December 2004 tsunami inundated 130 m inland to an elevation of approximately 4 m. The 28 March 2005 tsunami inundated less than 100 m to an elevation of approximately 2 m. Both tsunamis created deposits that were observed to be an amalgamated, 20-cm-thick, predominantly fine to medium sand overlying a sandy soil. The contact between the 2004 and 2005 tsunami deposits is at 13 cm above the top of the sandy soil and is clearly marked by vegetation that grew on the 2004 deposit in the 3 months between tsunamis. Grass roots are present in the upper half of the 2004 deposit and absent both below that level and in the 2005 deposit. We analyzed the fine-scale sedimentary structures and vertical variation in grain size of the deposits to search for diagnostic criteria for unequivocally identifying deposits formed by multiple tsunamis. At Busung, we expected there to be differences between each tsunami's deposits because the tsunami height, period, and direction of the 2004 and 2005 tsunamis were different. Both the 2004 and 2005 deposits were predominantly normally graded, although each had inversely graded and massive sections. Faint laminations, which became more defined in a peel of the deposit, were discontinuous and predominantly quasi-parallel. Knowing where the contact between the two tsunamis was, subtle sedimentary differences were identified that may be used to tell that the deposit is composed of two separate tsunami deposits. We will present quantitative analyses of the variations 17. Scenario-based numerical modelling and the palaeo-historic record of tsunamis in Wallis and Futuna, Southwest Pacific Science.gov (United States) Lamarche, G.; Popinet, S.; Pelletier, B.; Mountjoy, J.; Goff, J.; Delaux, S.; Bind, J. 2015-08-01 We investigated the tsunami hazard in the remote French territory of Wallis and Futuna, Southwest Pacific, using the Gerris flow solver to produce numerical models of tsunami generation, propagation and inundation. Wallis consists of the inhabited volcanic island of Uvéa that is surrounded by a lagoon delimited by a barrier reef. Futuna and the island of Alofi form the Horn Archipelago located ca. 240 km east of Wallis. They are surrounded by a narrow fringing reef. Futuna and Alofi emerge from the North Fiji Transform Fault that marks the seismically active Pacific-Australia plate boundary. We generated 15 tsunami scenarios. For each, we calculated maximum wave elevation (MWE), inundation distance and expected time of arrival (ETA).
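A scenario set like the 15 scenarios of entry 17 is typically condensed into per-site worst-case figures; the sketch below shows that aggregation (largest MWE, earliest ETA per site) on invented placeholder values rather than the modelled Wallis and Futuna results.

```python
# Sketch: collapse per-scenario results into a per-site worst case
# (largest maximum wave elevation, earliest expected time of arrival).
# Site names and values are placeholders, not the published model output.
scenario_results = [
    {"site": "Uvea lagoon", "mwe_m": 2.5, "eta_min": 120},
    {"site": "Uvea lagoon", "mwe_m": 3.0, "eta_min": 35},
    {"site": "Futuna reef", "mwe_m": 4.4, "eta_min": 18},
    {"site": "Futuna reef", "mwe_m": 2.1, "eta_min": 95},
]

worst = {}
for r in scenario_results:
    w = worst.setdefault(r["site"], {"mwe_m": 0.0, "eta_min": float("inf")})
    w["mwe_m"] = max(w["mwe_m"], r["mwe_m"])        # worst-case wave elevation
    w["eta_min"] = min(w["eta_min"], r["eta_min"])  # earliest arrival over scenarios

for site, w in worst.items():
    print(f"{site}: MWE up to {w['mwe_m']} m, ETA as early as {w['eta_min']} min")
```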
The tsunami sources were local, regional and distant earthquake faults located along the Pacific Rim. In Wallis, the outer reef may experience 6.8 m-high MWE. Uvéa is protected by the barrier reef and the lagoon, but inundation depths of 2-3 m occur in several coastal areas. In Futuna, flow depths exceeding 2 m are modelled in several populated areas, and have been confirmed by a post-September 2009 South Pacific tsunami survey. The channel between the islands of Futuna and Alofi amplified the 2009 tsunami, which resulted in inundation distance of almost 100 m and MWE of 4.4 m. This first ever tsunami hazard modelling study of Wallis and Futuna compares well with palaeotsunamis recognised on both islands and observation of the impact of the 2009 South Pacific tsunami. The study provides evidence for the mitigating effect of barrier and fringing reefs from tsunamis. 18. Tsunami.gov: NOAA's Tsunami Information Portal Science.gov (United States) Shiro, B.; Carrick, J.; Hellman, S. B.; Bernard, M.; Dildine, W. P. 2014-12-01 We present the new Tsunami.gov website, which delivers a single authoritative source of tsunami information for the public and emergency management communities. The site efficiently merges information from NOAA's Tsunami Warning Centers (TWC's) by way of a comprehensive XML feed called Tsunami Event XML (TEX). The resulting unified view allows users to quickly see the latest tsunami alert status in geographic context without having to understand complex TWC areas of responsibility. The new site provides for the creation of a wide range of products beyond the traditional ASCII-based tsunami messages. The publication of modern formats such as Common Alerting Protocol (CAP) can drive geographically aware emergency alert systems like FEMA's Integrated Public Alert and Warning System (IPAWS). Supported are other popular information delivery systems, including email, text messaging, and social media updates. The Tsunami.gov portal allows NOAA staff to easily edit content and provides the facility for users to customize their viewing experience. In addition to access by the public, emergency managers and government officials may be offered the capability to log into the portal for special access rights to decision-making and administrative resources relevant to their respective tsunami warning systems. The site follows modern HTML5 responsive design practices for optimized use on mobile as well as non-mobile platforms. It meets all federal security and accessibility standards. Moving forward, we hope to expand Tsunami.gov to encompass tsunami-related content currently offered on separate websites, including the NOAA Tsunami Website, National Tsunami Hazard Mitigation Program, NOAA Center for Tsunami Research, National Geophysical Data Center's Tsunami Database, and National Data Buoy Center's DART Program. This project is part of the larger Tsunami Information Technology Modernization Project, which is consolidating the software architectures of NOAA's existing TWC's into 19. Introduction to "Global Tsunami Science: Past and Future, Volume III" Science.gov (United States) Rabinovich, Alexander B.; Fritz, Hermann M.; Tanioka, Yuichiro; Geist, Eric L. 2018-04-01 Twenty papers on the study of tsunamis are included in Volume III of the PAGEOPH topical issue "Global Tsunami Science: Past and Future". Volume I of this topical issue was published as PAGEOPH, vol. 173, No. 12, 2016 and Volume II as PAGEOPH, vol. 174, No. 8, 2017. 
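Entry 18 above describes publishing alerts in modern formats such as the Common Alerting Protocol (CAP); the sketch below serialises a toy alert with CAP-style element names using only the Python standard library. It is not the Tsunami Event XML (TEX) schema or NOAA's production code, and every field value is an invented placeholder.

```python
# Hedged sketch of a CAP-style alert payload (entry 18 above). Element names
# echo the public CAP vocabulary, but this is not the TEX schema and the
# values are invented placeholders.
import xml.etree.ElementTree as ET

alert = ET.Element("alert")
ET.SubElement(alert, "sender").text = "example-warning-center"   # placeholder
ET.SubElement(alert, "msgType").text = "Alert"
info = ET.SubElement(alert, "info")
ET.SubElement(info, "event").text = "Tsunami Warning"
ET.SubElement(info, "severity").text = "Extreme"
area = ET.SubElement(info, "area")
ET.SubElement(area, "areaDesc").text = "Hypothetical coastal zone"

print(ET.tostring(alert, encoding="unicode"))
```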
Two papers in Volume III focus on specific details of the 2009 Samoa and the 1923 northern Kamchatka tsunamis; they are followed by three papers related to tsunami hazard assessment for three different regions of the world oceans: South Africa, Pacific coast of Mexico and the northwestern part of the Indian Ocean. The next six papers are on various aspects of tsunami hydrodynamics and numerical modelling, including tsunami edge waves, resonant behaviour of compressible water layer during tsunamigenic earthquakes, dispersive properties of seismic and volcanically generated tsunami waves, tsunami runup on a vertical wall and influence of earthquake rupture velocity on maximum tsunami runup. Four papers discuss problems of tsunami warning and real-time forecasting for Central America, the Mediterranean coast of France, the coast of Peru, and some general problems regarding the optimum use of the DART buoy network for effective real-time tsunami warning in the Pacific Ocean. Two papers describe historical and paleotsunami studies in the Russian Far East. The final set of three papers importantly investigates tsunamis generated by non-seismic sources: asteroid airburst and meteorological disturbances. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation. 20. Emergency preparedness in the case of Makran tsunami: a case study on tsunami risk visualization for the western parts of Gujarat, India Directory of Open Access Journals (Sweden) V. M. Patel 2016-03-01 Full Text Available The west coast of India is affected by tsunamigenic earthquakes along the Makran subduction zone. On 28 November 1945 at 21:56 coordinated universal time (UTC), a massive Makran earthquake (M8.0) generated a destructive tsunami that propagated across the Northern Arabian Sea and the Indian Ocean. This tsunamigenic earthquake was responsible for the loss of life and great destruction along the coasts of India, Pakistan, Iran and Oman. Modelling of tsunami stages has been carried out for the coasts of Pakistan, Iran, India and Oman using the NAMI-DANCE computer code. The fault parameters of the earthquake for the generation of the tsunami are epicentre (25.15° N, 63.48° E), fault area (200 km length and 100 km width), angle of strike, dip and rake (246°, 7° and 90°), focal depth (15 km), and slip magnitude (7 m). The bathymetry data are taken from the General Bathymetric Chart of the Oceans (GEBCO) and land topography data were collected using the Shuttle Radar Topography Mission (SRTM). The present simulation is carried out for a duration of 360 min. It is observed that the maximum calculated tsunami run-ups were about 0.7–1.1 m along the coast of Oman, 0.5 m near Muscat, 0.1 m near Sur, 0.7–1.35 m along the western coast of India, 0.5–2.3 m along the southern coast of Iran and 1.2–5.8 m along the southern coast of Pakistan. After the tsunamigenic earthquake, the tsunami wave reached the Gulf of Kachchh in about 240 min, Okha in about 185 min, Dwarka in about 150 min, Porbandar in about 155 min, Mumbai in about 300 min and Goa in about 210 min. The calculated 2-hr tsunami travel time to the Indian coast is in good agreement with the available reports and published data. If the tsunami strikes during high tide, we should expect more serious hazards which would impact local coastal communities. The results obtained in this study are converted to be compatible with the geographic information system based applications for display and spatial analysis of 1.
Tsunami hazard and risk assessment in El Salvador Science.gov (United States) González, M.; González-Riancho, P.; Gutiérrez, O. Q.; García-Aguilar, O.; Aniel-Quiroga, I.; Aguirre, I.; Alvarez, J. A.; Gavidia, F.; Jaimes, I.; Larreynaga, J. A. 2012-04-01 Tsunamis are relatively infrequent phenomena representing a greater threat than earthquakes, hurricanes and tornadoes, causing the loss of thousands of human lives and extensive damage to coastal infrastructure around the world. Several works have attempted to study these phenomena in order to understand their origin, causes, evolution, consequences, and magnitude of their damages, to finally propose mechanisms to protect coastal societies. Advances in the understanding and prediction of tsunami impacts allow the development of adaptation and mitigation strategies to reduce risk in coastal areas. This work, Tsunami Hazard and Risk Assessment in El Salvador, funded by AECID during the period 2009-12, examines the state of the art and presents a comprehensive methodology for assessing the risk of tsunamis at any coastal area worldwide, applying it to the coast of El Salvador. The conceptual framework is based on the definition of Risk as the probability of harmful consequences or expected losses resulting from a given hazard to a given element at danger or peril, over a specified time period (European Commission, Schneiderbauer et al., 2004). The HAZARD assessment (Phase I of the project) is based on propagation models for earthquake-generated tsunamis, developed through the characterization of tsunamigenic sources (seismotectonic faults) and other dynamics under study (tsunami waves, sea level, etc.). The study area is located in a high seismic activity area and has been hit by 11 tsunamis between 1859 and 1997, nine of them recorded in the twentieth century and all generated by earthquakes. Simulations of historical and potential tsunamis with greater or lesser impact on the country's coast have been performed, including distant, intermediate and near sources. Deterministic analyses of the threats under study (coastal flooding) have been carried out, resulting in different hazard maps (maximum wave height elevation, maximum water depth, minimum tsunami 2. What Causes Tsunamis? Science.gov (United States) Mogil, H. Michael 2005-01-01 On December 26, 2004, a disastrous tsunami struck many parts of South Asia. The scope of this disaster has resulted in an outpouring of aid throughout the world and brought attention to the science of tsunamis. "Tsunami" means "harbor wave" in Japanese, and the Japanese have a long history of tsunamis. The word… 3. Development of Real-time Tsunami Inundation Forecast Using Ocean Bottom Tsunami Networks along the Japan Trench Science.gov (United States) Aoi, S.; Yamamoto, N.; Suzuki, W.; Hirata, K.; Nakamura, H.; Kunugi, T.; Kubo, T.; Maeda, T. 2015-12-01 In the 2011 Tohoku earthquake, in which a huge tsunami claimed a great number of lives, the initial tsunami forecast, based on hypocenter information estimated using seismic data on land, greatly underestimated the tsunami. From this lesson, NIED is now constructing S-net (Seafloor Observation Network for Earthquakes and Tsunamis along the Japan Trench), which consists of 150 ocean bottom observatories with seismometers and pressure gauges (tsunamimeters) linked by fiber optic cables.
To take full advantage of S-net, we develop a new methodology of real-time tsunami inundation forecasting using ocean bottom observation data and construct a prototype system that implements the developed forecasting method for the Pacific coast of Chiba prefecture (Sotobo area). We employ a database-based approach because inundation is a strongly non-linear phenomenon and its calculation costs are rather heavy. We prepare a tsunami scenario bank in advance by constructing the possible tsunami sources and calculating the tsunami waveforms at S-net stations, coastal tsunami heights and tsunami inundation on land. To calculate the inundation for the target Sotobo area, we construct a precise 10-m-mesh elevation model that includes coastal structures. Based on the sensitivity analyses, we construct a tsunami scenario bank that efficiently covers the possible tsunami scenarios affecting the Sotobo area. A real-time forecast is carried out by selecting from the tsunami scenario bank several possible scenarios which can well explain the real-time tsunami data observed at S-net. An advantage of our method is that tsunami inundations are estimated directly from the actual tsunami data without any source information, which may have large estimation errors. In addition to the forecast system, we develop Web services, APIs, and smartphone applications and brush them up through social experiments to provide the real-time tsunami observation and forecast information in an easy-to-understand way, to urge people to evacuate. 4. A probabilistic tsunami hazard assessment for Indonesia Science.gov (United States) Horspool, N.; Pranantyo, I.; Griffin, J.; Latief, H.; Natawidjaja, D. H.; Kongko, W.; Cipta, A.; Bustaman, B.; Anugrah, S. D.; Thio, H. K. 2014-11-01 Probabilistic hazard assessments are a fundamental tool for assessing the threats posed by hazards to communities and are important for underpinning evidence-based decision-making regarding risk mitigation activities. Indonesia has been the focus of intense tsunami risk mitigation efforts following the 2004 Indian Ocean tsunami, but this has been largely concentrated on the Sunda Arc with little attention to other tsunami prone areas of the country such as eastern Indonesia. We present the first nationally consistent probabilistic tsunami hazard assessment (PTHA) for Indonesia. This assessment produces time-independent forecasts of tsunami hazards at the coast using data from tsunamis generated by local, regional and distant earthquake sources. The methodology is based on the established Monte Carlo approach to probabilistic seismic hazard assessment (PSHA) and has been adapted to tsunami. We account for sources of epistemic and aleatory uncertainty in the analysis through the use of logic trees and sampling probability density functions. For short return periods (100 years) the highest tsunami hazard is along the west coast of Sumatra, the south coast of Java and the north coast of Papua. For longer return periods (500-2500 years), the tsunami hazard is highest along the Sunda Arc, reflecting the larger maximum magnitudes. The annual probability of experiencing a tsunami with a height of > 0.5 m at the coast is greater than 10% for Sumatra, Java, the Sunda islands (Bali, Lombok, Flores, Sumba) and north Papua. The annual probability of experiencing a tsunami with a height of > 3.0 m, which would cause significant inundation and fatalities, is 1-10% in Sumatra, Java, Bali, Lombok and north Papua, and 0.1-1% for north Sulawesi, Seram and Flores.
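The return-period statements in entry 4 come from a hazard curve, i.e. coastal wave height versus annual exceedance rate; the sketch below reads heights off an invented curve by log-interpolation, purely to illustrate the relationship between return period and exceedance rate, not the Indonesian results.

```python
# Sketch: interpolate an (invented) PTHA hazard curve at given return periods.
# The curve below is illustrative only, not the Indonesian assessment.
import numpy as np

height_m = np.array([0.1, 0.5, 1.0, 3.0, 5.0, 10.0])
annual_exceedance = np.array([0.2, 0.05, 0.02, 0.004, 0.001, 0.0001])  # per year

def height_at_return_period(T_years):
    """Interpolate height against log(annual rate); np.interp needs ascending x."""
    return float(np.interp(np.log(1.0 / T_years),
                           np.log(annual_exceedance[::-1]), height_m[::-1]))

for T in (100, 500, 2500):
    print(f"{T:5d}-yr return period -> ~{height_at_return_period(T):.1f} m at the coast")
```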
The results of this national-scale hazard assessment provide evidence for disaster managers to prioritise regions for risk mitigation activities and/or more detailed hazard or risk assessment. 5. TSUNAMI LOADING ON BUILDINGS WITH OPENINGS Directory of Open Access Journals (Sweden) P. Lukkunaprasit 2009-01-01 Full Text Available Reinforced concrete (RC buildings with openings in the masonry infill panels have shown superior performance to those without openings in the devastating 2004 Indian Ocean Tsunami. Understanding the effect of openings and the resulting tsunami force is essential for an economical and safe design of vertical evacuation shelters against tsunamis. One-to-one hundred scale building models with square shape in plan were tested in a 40 m long hydraulic flume with 1 m x 1 m cross section. A mild slope of 0.5 degree representing the beach condition at Phuket, Thailand was simulated in the hydraulic laboratory. The model dimensions were 150 mm x 150 mm x 150 mm. Two opening configurations of the front and back walls were investigated, viz., 25% and 50% openings. Pressure sensors were placed on the faces of the model to measure the pressure distribution. A high frequency load cell was mounted at the base of the model to record the tsunami forces. A bi-linear pressure profile is proposed for determining the maximum tsunami force acting on solid square buildings. The influence of openings on the peak pressures on the front face of the model is found to be practically insignificant. For 25% and 50% opening models, the tsunami forces reduce by about 15% and 30% from the model without openings, respectively. The reduction in the tsunami force clearly demonstrates the benefit of openings in reducing the effect of tsunami on such buildings. 6. High Resolution Tsunami Modeling and Assessment of Harbor Resilience; Case Study in Istanbul Science.gov (United States) Cevdet Yalciner, Ahmet; Aytore, Betul; Gokhan Guler, Hasan; Kanoglu, Utku; Duzgun, Sebnem; Zaytsev, Andrey; Arikawa, Taro; Tomita, Takashi; Ozer Sozdinler, Ceren; Necmioglu, Ocal; Meral Ozel, Nurcan 2014-05-01 Ports and harbors are the major vulnerable coastal structures under tsunami attack. Resilient harbors against tsunami impacts are essential for proper, efficient and successful rescue operations and reduction of the loss of life and property by tsunami disasters. There are several critical coastal structures as such in the Marmara Sea. Haydarpasa and Yenikapi ports are located in the Marmara Sea coast of Istanbul. These two ports are selected as the sites of numerical experiments to test their resilience under tsunami impact. Cargo, container and ro-ro handlings, and short/long distance passenger transfers are the common services in both ports. Haydarpasa port has two breakwaters with the length of three kilometers in total. Yenikapi port has one kilometer long breakwater. The accurate resilience analysis needs high resolution tsunami modeling and careful assessment of the site. Therefore, building data with accurate coordinates of their foot prints and elevations are obtained. The high resolution bathymetry and topography database with less than 5m grid size is developed for modeling. The metadata of the several types of structures and infrastructure of the ports and environs are processed. Different resistances for the structures/buildings/infrastructures are controlled by assigning different friction coefficients in a friction matrix. 
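The "friction matrix" mentioned at the end of entry 6 amounts to a grid of roughness values keyed to land cover; the sketch below builds such a grid with assumed Manning's n values and class codes (the actual coefficients used for Haydarpasa and Yenikapi are not given here).

```python
# Sketch: build a roughness ("friction") matrix by mapping land-cover classes
# to Manning's n. Class codes and n values are assumptions for illustration.
import numpy as np

MANNING_N = {0: 0.025,   # open water
             1: 0.030,   # open land / quays
             2: 0.060,   # dense buildings
             3: 0.040}   # port infrastructure

land_cover = np.array([[0, 0, 1, 2],
                       [0, 1, 2, 2],
                       [1, 3, 3, 2]])          # toy 3 x 4 land-cover grid

friction = np.vectorize(MANNING_N.get)(land_cover).astype(float)
print(friction)
```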
Two different tsunami conditions - high expected and moderate expected - are selected for numerical modeling. The hybrid tsunami simulation and visualization codes NAMI DANCE, STOC-CADMAS System are utilized to solve all necessary tsunami parameters and obtain the spatial and temporal distributions of flow depth, current velocity, inundation distance and maximum water level in the study domain. Finally, the computed critical values of tsunami parameters are evaluated and structural performance of the port components are discussed in regard to a better resilience. ACKNOWLEDGEMENTS: Support by EU 603839 ASTARTE Project, UDAP-Ç-12 7. Tohoku-Oki Earthquake Tsunami Runup and Inundation Data for Sites Around the Island of Hawaiʻi Science.gov (United States) Trusdell, Frank A.; Chadderton, Amy; Hinchliffe, Graham; Hara, Andrew; Patenge, Brent; Weber, Tom 2012-01-01 At 0546 U.t.c. March 11, 2011, a Mw 9.0 ("great") earthquake occurred near the northeast coast of Honshu Island, Japan, generating a large tsunami that devastated the east coast of Japan and impacted many far-flung coastal sites around the Pacific Basin. After the earthquake, the Pacific Tsunami Warning Center issued a tsunami alert for the State of Hawaii, followed by a tsunami-warning notice from the local State Civil Defense on March 10, 2011 (Japan is 19 hours ahead of Hawaii). After the waves passed the islands, U.S. Geological Survey (USGS) scientists from the Hawaiian Volcano Observatory (HVO) measured inundation (maximum inland distance of flooding), runup (elevation at maximum extent of inundation) and took photographs in coastal areas around the Island of Hawaiʻi. Although the damage in West Hawaiʻi is well documented, HVO's mapping revealed that East Hawaiʻi coastlines were also impacted by the tsunami. The intent of this report is to provide runup and inundation data for sites around the Island of Hawaiʻi. 8. Tsunami geology in paleoseismology Science.gov (United States) Yuichi Nishimura,; Jaffe, Bruce E. 2015-01-01 The 2004 Indian Ocean and 2011 Tohoku-oki disasters dramatically demonstrated the destructiveness and deadliness of tsunamis. For the assessment of future risk posed by tsunamis it is necessary to understand past tsunami events. Recent work on tsunami deposits has provided new information on paleotsunami events, including their recurrence interval and the size of the tsunamis (e.g. [187–189]). Tsunamis are observed not only on the margin of oceans but also in lakes. The majority of tsunamis are generated by earthquakes, but other events that displace water such as landslides and volcanic eruptions can also generate tsunamis. These non-earthquake tsunamis occur less frequently than earthquake tsunamis; it is, therefore, very important to find and study geologic evidence for past eruption and submarine landslide triggered tsunami events, as their rare occurrence may lead to risks being underestimated. Geologic investigations of tsunamis have historically relied on earthquake geology. Geophysicists estimate the parameters of vertical coseismic displacement that tsunami modelers use as a tsunami's initial condition. The modelers then let the simulated tsunami run ashore. This approach suffers from the relationship between the earthquake and seafloor displacement, the pertinent parameter in tsunami generation, being equivocal. In recent years, geologic investigations of tsunamis have added sedimentology and micropaleontology, which focus on identifying and interpreting depositional and erosional features of tsunamis. 
For example, coastal sediment may contain deposits that provide important information on past tsunami events [190, 191]. In some cases, a tsunami is recorded by a single sand layer. Elsewhere, tsunami deposits can consist of complex layers of mud, sand, and boulders, containing abundant stratigraphic evidence for sediment reworking and redeposition. These onshore sediments are geologic evidence for tsunamis and are called ‘tsunami deposits’ (Figs. 26 9. Modeling for the SAFRR Tsunami Scenario-generation, propagation, inundation, and currents in ports and harbors: Chapter D in The SAFRR (Science Application for Risk Reduction) Tsunami Scenario Science.gov (United States) , 2013-01-01 This U.S. Geological Survey (USGS) Open-File report presents a compilation of tsunami modeling studies for the Science Application for Risk Reduction (SAFRR) tsunami scenario. These modeling studies are based on an earthquake source specified by the SAFRR tsunami source working group (Kirby and others, 2013). The modeling studies in this report are organized into three groups. The first group relates to tsunami generation. The effects that source discretization and horizontal displacement have on tsunami initial conditions are examined in section 1 (Whitmore and others). In section 2 (Ryan and others), dynamic earthquake rupture models are explored in modeling tsunami generation. These models calculate slip distribution and vertical displacement of the seafloor as a result of realistic fault friction, physical properties of rocks surrounding the fault, and dynamic stresses resolved on the fault. The second group of papers relates to tsunami propagation and inundation modeling. Section 3 (Thio) presents a modeling study for the entire California coast that includes runup and inundation modeling where there is significant exposure and estimates of maximum velocity and momentum flux at the shoreline. In section 4 (Borrero and others), modeling of tsunami propagation and high-resolution inundation of critical locations in southern California is performed using the National Oceanic and Atmospheric Administration’s (NOAA) Method of Splitting Tsunami (MOST) model and NOAA’s Community Model Interface for Tsunamis (ComMIT) modeling tool. Adjustments to the inundation line owing to fine-scale structures such as levees are described in section 5 (Wilson). The third group of papers relates to modeling of hydrodynamics in ports and harbors. Section 6 (Nicolsky and Suleimani) presents results of the model used at the Alaska Earthquake Information Center for the Ports of Los Angeles and Long Beach, as well as synthetic time series of the modeled tsunami for other selected 10. Predicting natural catastrophes tsunamis CERN Multimedia CERN. Geneva 2005-01-01 1. Tsunamis - Introduction - Definition of phenomenon - basic properties of the waves Propagation and dispersion Interaction with coasts - Geological and societal effects Origin of tsunamis - natural sources Scientific activities in connection with tsunamis. Ideas about simulations 2. Tsunami generation - The earthquake source - conventional theory The earthquake source - normal mode theory The landslide source Near-field observation - The Plafker index Far-field observation - Directivity 3. Tsunami warning - General ideas - History of efforts Mantle magnitudes and TREMOR algorithms The challenge of "tsunami earthquakes" Energy-moment ratios and slow earthquakes Implementation and the components of warning centers 4. 
Tsunami surveys - Principles and methodologies Fifteen years of field surveys and related milestones. Reconstructing historical tsunamis: eyewitnesses and geological evidence 5. Lessons from the 2004 Indonesian tsunami - Lessons in seismology Lessons in Geology The new technologies Lessons in civ... 11. Transient Tsunamis in Lakes Science.gov (United States) Couston, L.; Mei, C.; Alam, M. 2013-12-01 A large number of lakes are surrounded by steep and unstable mountains with slopes prone to failure. As a result, landslides are likely to occur and impact water sitting in closed reservoirs. These rare geological phenomena pose serious threats to dam reservoirs and nearshore facilities because they can generate unexpectedly large tsunami waves. In fact, the tallest wave experienced by contemporary humans occurred because of a landslide in the narrow bay of Lituya in 1958, and five years later, a deadly landslide tsunami overtopped Lake Vajont's dam, flooding and damaging villages along the lakefront and in the Piave valley. If unstable slopes and potential slides are detected ahead of time, inundation maps can be drawn to help people know the risks, and mitigate the destructive power of the ensuing waves. These maps give the maximum wave runup height along the lake's vertical and sloping boundaries, and can be obtained by numerical simulations. Keeping track of the moving shorelines along beaches is challenging in classical Eulerian formulations because the horizontal extent of the fluid domain can change over time. As a result, assuming a solid slide and nonbreaking waves, here we develop a nonlinear shallow-water model equation in the Lagrangian framework to address the problem of transient landslide-tsunamis. In this manner, the shorelines' three-dimensional motion is part of the solution. The model equation is hyperbolic and can be solved numerically by finite differences. Here, a 4th order Runge-Kutta method and a compact finite-difference scheme are implemented to integrate in time and spatially discretize the forced shallow-water equation in Lagrangian coordinates. The formulation is applied to different lake and slide geometries to better understand the effects of the lake's finite lengths and slide's forcing mechanism on the generated wavefield. Specifically, for a slide moving down a plane beach, we show that edge-waves trapped by the shoreline and free 12. Tsunami Casualty Model Science.gov (United States) Yeh, H. 2007-12-01 More than 4500 deaths by tsunamis were recorded in the decade of 1990. For example, the 1992 Flores Tsunami in Indonesia took away at least 1712 lives, and more than 2182 people were victimized by the 1998 Papua New Guinea Tsunami. Such staggering death toll has been totally overshadowed by the 2004 Indian Ocean Tsunami that claimed more than 220,000 lives. Unlike hurricanes that are often evaluated by economic losses, death count is the primary measure for tsunami hazard. It is partly because tsunamis kill more people owing to its short lead- time for warning. Although exact death tallies are not available for most of the tsunami events, there exist gender and age discriminations in tsunami casualties. Significant gender difference in the victims of the 2004 Indian Ocean Tsunami was attributed to women's social norms and role behavior, as well as cultural bias toward women's inability to swim. Here we develop a rational casualty model based on humans' limit to withstand the tsunami flows. 
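Entry 11 above integrates a forced shallow-water equation with finite differences; the sketch below shows the flavour of such time stepping on the much simpler 1-D linear shallow-water system in a closed, flat-bottomed basin. It uses an Eulerian staggered grid, not the Lagrangian nonlinear scheme of the paper, and all parameters are illustrative.

```python
# Minimal 1-D linear shallow-water sketch (explicit finite differences).
# Flat bathymetry, closed ends (u = 0 at the walls), illustrative parameters.
import numpy as np

g, h0 = 9.81, 100.0                  # gravity, still-water depth (m)
nx, dx = 200, 500.0                  # grid: 200 cells of 500 m
dt = 0.5 * dx / np.sqrt(g * h0)      # CFL-limited time step

x = np.arange(nx) * dx
eta = np.exp(-((x - 20e3) / 5e3) ** 2)   # initial 1 m surface hump
u = np.zeros(nx + 1)                      # face-centred velocities

for _ in range(300):
    # momentum: du/dt = -g * d(eta)/dx on interior faces
    u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
    # continuity: d(eta)/dt = -h0 * du/dx
    eta -= h0 * dt / dx * (u[1:] - u[:-1])

print(f"max surface elevation after 300 steps: {eta.max():.2f} m")
```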
The application to simple tsunami runup cases demonstrates that biological and physiological disadvantages also make a significant difference in casualty rate. It further demonstrates that the gender and age discriminations in casualties become most pronounced when tsunami is marginally strong and the difference tends to diminish as tsunami strength increases. 13. Sedimentology of onshore tsunami deposits of the Indian Ocean tsunami, 2004 in the mangrove forest of the Curieuse Marine National Park, Seychelles Science.gov (United States) Nentwig, V.; Bahlburg, H.; Monthy, D. 2012-12-01 The Seychelles were severely affected by the December 26, 2004 tsunami in the Indian Ocean. Since the tsunami history of small islands often remains unclear due to a young historiography we conducted a study of onshore tsunami deposits on the Seychelles in order to understand the scale of impact of the 2004 Indian Ocean tsunami and potential predecessors. As part of this project we found and studied onshore tsunami deposits in the mangrove forest at Old Turtle Pond bay on the east coast of Curieuse Island. The 2004 Indian Ocean tsunami caused a change of habitat due to sedimentation of an extended sand sheet in the mangrove forest. We present results of the first detailed sedimentological study of onshore tsunami deposits of the 2004 Indian Ocean tsunami conducted on the Seychelles. The Curieuse mangrove forest at Old Turtle Pond bay is part of the Curieuse Marine National Park. It is thus protected from anthropogenic interference. Towards the sea it was shielded until the tsunami by a 500 m long and 1.5 m high causeway which was set up in 1909 as a sediment trap. The causeway was destroyed by the 2004 Indian Ocean Tsunami. The silt to fine sand sized and organic rich mangrove soil was subsequently covered by carbonate fine to medium sand (1.5 to 2.1 Φ) containing coarser carbonate shell debris which had been trapped outside the mangrove bay before the tsunami. The tsunami deposited a sand sheet which is organized into different lobes. They extend landwards to different inundation distances as a function of morphology. Maximum inundation distance is 200 m. The sediments often cover the pneumatophores of the mangroves. No landward fining trend of the sand sheet has been observed. On the different sand lobes carbonate-cemented sandstone debris ranging in size from 0.5 up to 12 cm occurs. Also numerous mostly fragmented shells of bivalves and molluscs were distributed on top of the sand lobes. Intact bivalve shells were mostly positioned with the convex side upwards 14. Sedimentology of Coastal Deposits in the Seychelles Islands—Evidence of the Indian Ocean Tsunami 2004 Science.gov (United States) Nentwig, Vanessa; Bahlburg, Heinrich; Monthy, Devis 2015-03-01 The Seychelles, an archipelago in the Indian Ocean at a distance of 4,500-5,000 km from the west coast of Sumatra, were severely affected by the December 26, 2004 tsunami with wave heights up to 4 m. Since the tsunami history of small islands often remains unclear due to a young historical record, it is important to study the geological traces of high energy events preserved along their coasts. We conducted a survey of the impact of the 2004 Indian Ocean tsunami on the inner Seychelles islands. In detail we studied onshore tsunami deposits in the mangrove forest at Old Turtle Pond in the Curieuse Marine National Park on the east coast of Curieuse Island. It is thus protected from anthropogenic interference. 
Towards the sea it was shielded until the tsunami in 2004 by a 500 m long and 1.5 m high causeway which was set up in 1909 as a sediment trap and assuring a low energetic hydrodynamic environment for the protection of the mangroves. The causeway was destroyed by the 2004 Indian Ocean Tsunami. The tsunami caused a change of habitat by the sedimentation of sand lobes in the mangrove forest. The dark organic rich mangrove soil (1.9 Φ) was covered by bimodal fine to medium carbonate sand (1.7-2.2 Φ) containing coarser carbonate shell fragments and debris. Intertidal sediments and the mangrove soil acted as sources of the lobe deposits. The sand sheet deposited by the tsunami is organized into different lobes. They extend landwards to different inundation distances as a function of the morphology of the onshore area. The maximum extent of 180 m from the shoreline indicates the minimum inundation distance to the tsunami. The top parts of the sand lobes cover the pneumatophores of the mangroves. There is no landward fining trend along the sand lobes and normal grading of the deposits is rare, occurring only in 1 of 7 sites. The sand lobe deposits also lack sedimentary structures. On the surface of the sand lobes numerous mostly fragmented shells of bivalves and 15. RIP Input Tables From WAPDEG for LA Design Selection: Repository Horizon Elevation - 2-Level AML 50% and Near Maximum International Nuclear Information System (INIS) B.E. Bullard 1999-01-01 The purpose of this calculation is to document the WAPDEG version 3.09 (CRWMS M and O 1998b). Software Routine Report for WAPDEG (Version 3.09) simulations used to analyze waste package degradation and failure under the repository exposure conditions characterized by a two-tier thermal loading repository design. Also documented is the post-processing of these results into tables of waste-package-degradation-time histories suitable for use as input into the Integrated Probabilistic Simulator for Environmental Systems (RIP) version 5.19.01 (Golder Associates 1998) computer program. Specifically, the WAPDEG simulations discussed in this calculation correspond to waste package emplacement conditions (repository environment and design) as defined in the Total System Performance Assessment-Viability Assessment (CRWMS M and O 1998a). Total System Performance Assessment-Viability Assessment (TSPA-VA) Analyses Technical Basis Document--Chapter 5, Waste Package Degradation Modeling And Abstraction, pp. 5-27 to 5-29, with the exception that a two-tier thermal loading design feature as specified in the License Application Design Selection (LADS) study was analyzed. The particular design feature evaluated in this report is a modification of the repository horizon elevation and layout within the Topopah Springs Member of Yucca Mountain. Specifically, the modification consists of adding a second level, 50-m above the base case repository layout. Two options were considered, representing two variations in thermal loading. In Design Feature 25e (designated DF25e), each level has an Areal Mass Loading (AML) of 42.5 MTU/acre (i.e., half the VA base case). In Design Feature 25f (designated DF25), each level has an AML of 64MTU/acre. As a result of the change in waste package placement relative to the TSPA-VA base-case design, different temperature and relative humidity time histories at the waste package surface are calculated (input to the WAPDEG simulations), and consequently 16. 
Tsunami hazard at the Western Mediterranean Spanish coast from seismic sources Science.gov (United States) Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; González, M.; Otero, L. 2011-01-01 Spain represents an important part of the tourism sector in the Western Mediterranean, which has been affected in the past by tsunamis. Although the tsunami risk at the Spanish coasts is not the highest of the Mediterranean, the necessity of tsunami risk mitigation measures should not be neglected. In the Mediterranean area, Spain is exposed to two different tectonic environments with contrasting characteristics. On one hand, the Alboran Basin characterised by transcurrent and transpressive tectonics and, on the other hand, the North Algerian fold and thrust belt, characterised by compressive tectonics. A set of 22 seismic tsunamigenic sources has been used to estimate the tsunami threat over the Spanish Mediterranean coast of the Iberian peninsula and the Balearic Islands. Maximum wave elevation maps and tsunami travel times have been computed by means of numerical modelling and we have obtained estimations of threat levels for each source over the Spanish coast. The sources on the Western edge of North Algeria are the most dangerous, due to their threat to the South-Eastern coast of the Iberian Peninsula and to the Western Balearic Islands. In general, the Northern Algerian sources pose a greater risk to the Spanish coast than the Alboran Sea sources, which only threaten the peninsular coast. In the Iberian Peninsula, the Spanish provinces of Almeria and Murcia are the most exposed, while all the Balearic Islands can be affected by the North Algerian sources with probable severe damage, specially the islands of Ibiza and Minorca. The results obtained in this work are useful to plan future regional and local warning systems, as well as to set the priority areas to conduct research on detailed tsunami risk. 17. Tsunami hazard at the Western Mediterranean Spanish coast from seismic sources Directory of Open Access Journals (Sweden) J. A. Álvarez-Gómez 2011-01-01 Full Text Available Spain represents an important part of the tourism sector in the Western Mediterranean, which has been affected in the past by tsunamis. Although the tsunami risk at the Spanish coasts is not the highest of the Mediterranean, the necessity of tsunami risk mitigation measures should not be neglected. In the Mediterranean area, Spain is exposed to two different tectonic environments with contrasting characteristics. On one hand, the Alboran Basin characterised by transcurrent and transpressive tectonics and, on the other hand, the North Algerian fold and thrust belt, characterised by compressive tectonics. A set of 22 seismic tsunamigenic sources has been used to estimate the tsunami threat over the Spanish Mediterranean coast of the Iberian peninsula and the Balearic Islands. Maximum wave elevation maps and tsunami travel times have been computed by means of numerical modelling and we have obtained estimations of threat levels for each source over the Spanish coast. The sources on the Western edge of North Algeria are the most dangerous, due to their threat to the South-Eastern coast of the Iberian Peninsula and to the Western Balearic Islands. In general, the Northern Algerian sources pose a greater risk to the Spanish coast than the Alboran Sea sources, which only threaten the peninsular coast. 
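The tsunami travel times in the study above come from full numerical propagation; a useful back-of-the-envelope check uses the long-wave celerity c = sqrt(g * h), as in the sketch below, where the path segments and depths are invented rather than taken from the Algerian-source simulations.

```python
# Sketch: first-order tsunami travel-time estimate from long-wave celerity
# c = sqrt(g * h) over an assumed path; segment lengths and depths are invented.
import math

g = 9.81
segments_km_depth_m = [(50.0, 800.0), (120.0, 2500.0), (100.0, 2700.0), (30.0, 200.0)]

travel_s = sum(L_km * 1000.0 / math.sqrt(g * h_m) for L_km, h_m in segments_km_depth_m)
print(f"estimated travel time ~ {travel_s / 60.0:.0f} minutes")
```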
In the Iberian Peninsula, the Spanish provinces of Almeria and Murcia are the most exposed, while all the Balearic Islands can be affected by the North Algerian sources with probable severe damage, specially the islands of Ibiza and Minorca. The results obtained in this work are useful to plan future regional and local warning systems, as well as to set the priority areas to conduct research on detailed tsunami risk. 18. Characteristics of the 2011 Tohoku Tsunami and introduction of two level tsunamis for tsunami disaster mitigation. Science.gov (United States) Sato, Shinji 2015-01-01 Characteristics of the 2011 Tohoku Tsunami have been revealed by collaborative tsunami surveys extensively performed under the coordination of the Joint Tsunami Survey Group. The complex behaviors of the mega-tsunami were characterized by the unprecedented scale and the low occurrence frequency. The limitation and the performance of tsunami countermeasures were described on the basis of tsunami surveys, laboratory experiments and numerical analyses. These findings contributed to the introduction of two-level tsunami hazards to establish a new strategy for tsunami disaster mitigation, combining structure-based flood protection designed by the Level-1 tsunami and non-structure-based damage reduction planned by the Level-2 tsunami. 19. Sheet-gravel evidence for a late Holocene tsunami run-up on beach dunes, Great Barrier Island, New Zealand Science.gov (United States) Nichol, Scott L.; Lian, Olav B.; Carter, Charles H. 2003-01-01 A semi-continuous sheet of granule to cobble-size clasts forms a distinctive deposit on sand dunes located on a coastal barrier in Whangapoua Bay, Great Barrier Island, New Zealand. The gravel sheet extends from the toe of the foredune to 14.3 m above mean sea level and 200 m landward from the beach. Clasts are rounded to sub-rounded and comprise lithologies consistent with local bedrock. Terrestrial sources for the gravel are considered highly unlikely due to the isolation of the dunes from hillslopes and streams. The only source for the clasts is the nearshore to inner shelf of Whangapoua Bay, where gravel sediments have been previously documented. The mechanism for transport of the gravel is unlikely to be storm surge due to the elevation of the deposit; maximum-recorded storm surge on this coast is 0.8 m above mean high water spring tide. Aeolian processes are also discounted due to the size of clasts and the elevation at which they occur. Tsunami is therefore considered the most probable mechanism for gravel transport. Minimum run-up height of the tsunami was 14.3 m, based on maximum elevation of gravel deposits. Optical ages on dune sands beneath and covering the gravel allow age bracketing to 0-4.7 ka. Within this time frame, numerous documented regional seismic and volcanic events could have generated the tsunami, notably submarine volcanism along the southern Kermadec arc to the east-southeast of Great Barrier Island where large magnitude events are documented for the late Holocene. Radiocarbon ages on shell from Maori middens that appear to have been reworked by tsunami run-up constrain the age of this event to post ca. 1400 AD. Regardless of the precise age of this event, the well-preserved nature of the Whangapoua gravel deposit provides for an improved understanding of the high degree of spatial variability in tsunami run-up. 20. The tsunami phenomenon Science.gov (United States) Röbke, B. R.; Vött, A. 
2017-12-01 With human activity increasingly concentrating on coasts, tsunamis (from Japanese tsu = harbour, nami = wave) are a major natural hazard to today's society. Stimulated by disastrous tsunami impacts in recent years, for instance in south-east Asia (2004) or in Japan (2011), tsunami science has significantly flourished, which has brought great advances in hazard assessment and mitigation plans. Based on tsunami research of the last decades, this paper provides a thorough treatise on the tsunami phenomenon from a geoscientific point of view. Starting with the wave features, tsunamis are introduced as long shallow water waves or wave trains crossing entire oceans without major energy loss. At the coast, tsunamis typically show wave shoaling, funnelling and resonance effects as well as a significant run-up and backflow. Tsunami waves are caused by a sudden displacement of the water column due to a number of various trigger mechanisms. Such are earthquakes as the main trigger, submarine and subaerial mass wastings, volcanic activity, atmospheric disturbances (meteotsunamis) and cosmic impacts, as is demonstrated by giving corresponding examples from the past. Tsunamis are known to have a significant sedimentary and geomorphological off- and onshore response. So-called tsunamites form allochthonous high-energy deposits that are left at the coast during tsunami landfall. Tsunami deposits show typical sedimentary features, as basal erosional unconformities, fining-upward and -landward, a high content of marine fossils, rip-up clasts from underlying units and mud caps, all reflecting the hydrodynamic processes during inundation. The on- and offshore behaviour of tsunamis and related sedimentary processes can be simulated using hydro- and morphodynamic numerical models. The paper provides an overview of the basic tsunami modelling techniques, including discretisation, guidelines for appropriate temporal and spatial resolution as well as the nesting method. Furthermore, the 1. Assessing historical rate changes in global tsunami occurrence Science.gov (United States) Geist, E.L.; Parsons, T. 2011-01-01 The global catalogue of tsunami events is examined to determine if transient variations in tsunami rates are consistent with a Poisson process commonly assumed for tsunami hazard assessments. The primary data analyzed are tsunamis with maximum sizes >1m. The record of these tsunamis appears to be complete since approximately 1890. A secondary data set of tsunamis >0.1m is also analyzed that appears to be complete since approximately 1960. Various kernel density estimates used to determine the rate distribution with time indicate a prominent rate change in global tsunamis during the mid-1990s. Less prominent rate changes occur in the early- and mid-20th century. To determine whether these rate fluctuations are anomalous, the distribution of annual event numbers for the tsunami catalogue is compared to Poisson and negative binomial distributions, the latter of which includes the effects of temporal clustering. Compared to a Poisson distribution, the negative binomial distribution model provides a consistent fit to tsunami event numbers for the >1m data set, but the Poisson null hypothesis cannot be falsified for the shorter duration >0.1m data set. Temporal clustering of tsunami sources is also indicated by the distribution of interevent times for both data sets. 
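The Poisson versus negative-binomial comparison in entry 1 hinges on overdispersion of annual event counts; the sketch below performs that check on synthetic counts and, if clustering is indicated, derives method-of-moments negative-binomial parameters. It is not a re-analysis of the global catalogue.

```python
# Sketch: for a Poisson process the variance of annual counts equals the mean;
# clustering shows up as variance > mean. Counts below are synthetic.
import numpy as np

annual_counts = np.array([0, 1, 0, 2, 0, 0, 1, 4, 0, 1, 0, 0, 3, 1, 0, 2, 5, 0, 1, 0])

mean, var = annual_counts.mean(), annual_counts.var(ddof=1)
dispersion = var / mean
print(f"mean={mean:.2f}, variance={var:.2f}, dispersion index={dispersion:.2f}")

if dispersion > 1.0:
    # method-of-moments negative-binomial parameters (r, p)
    r = mean ** 2 / (var - mean)
    p = r / (r + mean)
    print(f"overdispersed: NB method-of-moments r={r:.2f}, p={p:.2f}")
else:
    print("no evidence against the Poisson assumption from this sample")
```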
Tsunami event clusters consist only of two to four events, in contrast to protracted sequences of earthquakes that make up foreshock-main shock-aftershock sequences. From past studies of seismicity, it is likely that there is a physical triggering mechanism responsible for events within the tsunami source 'mini-clusters'. In conclusion, prominent transient rate increases in the occurrence of global tsunamis appear to be caused by temporal grouping of geographically distinct mini-clusters, in addition to the random preferential location of global M >7 earthquakes along offshore fault zones. 2. Real-time tsunami inundation forecasting and damage mapping towards enhancing tsunami disaster resiliency Science.gov (United States) Koshimura, S.; Hino, R.; Ohta, Y.; Kobayashi, H.; Musa, A.; Murashima, Y. 2014-12-01 With use of modern computing power and advanced sensor networks, a project is underway to establish a new system of real-time tsunami inundation forecasting, damage estimation and mapping to enhance society's resilience in the aftermath of major tsunami disaster. The system consists of fusion of real-time crustal deformation monitoring/fault model estimation by Ohta et al. (2012), high-performance real-time tsunami propagation/inundation modeling with NEC's vector supercomputer SX-ACE, damage/loss estimation models (Koshimura et al., 2013), and geo-informatics. After a major (near field) earthquake is triggered, the first response of the system is to identify the tsunami source model by applying RAPiD Algorithm (Ohta et al., 2012) to observed RTK-GPS time series at GEONET sites in Japan. As performed in the data obtained during the 2011 Tohoku event, we assume less than 10 minutes as the acquisition time of the source model. Given the tsunami source, the system moves on to running tsunami propagation and inundation model which was optimized on the vector supercomputer SX-ACE to acquire the estimation of time series of tsunami at offshore/coastal tide gauges to determine tsunami travel and arrival time, extent of inundation zone, maximum flow depth distribution. The implemented tsunami numerical model is based on the non-linear shallow-water equations discretized by finite difference method. The merged bathymetry and topography grids are prepared with 10 m resolution to better estimate the tsunami inland penetration. Given the maximum flow depth distribution, the system performs GIS analysis to determine the numbers of exposed population and structures using census data, then estimates the numbers of potential death and damaged structures by applying tsunami fragility curve (Koshimura et al., 2013). Since the tsunami source model is determined, the model is supposed to complete the estimation within 10 minutes. The results are disseminated as mapping products to 3. Seismically generated tsunamis. Science.gov (United States) Arcas, Diego; Segur, Harvey 2012-04-13 People around the world know more about tsunamis than they did 10 years ago, primarily because of two events: a tsunami on 26 December 2004 that killed more than 200,000 people around the shores of the Indian Ocean; and an earthquake and tsunami off the coast of Japan on 11 March 2011 that killed nearly 15,000 more and triggered a nuclear accident, with consequences that are still unfolding. 
This paper has three objectives: (i) to summarize our current knowledge of the dynamics of tsunamis; (ii) to describe how that knowledge is now being used to forecast tsunamis; and (iii) to suggest some policy changes that might protect people better from the dangers of future tsunamis. 4. Observation-based Quantitative Uncertainty Estimation for Realtime Tsunami Inundation Forecast using ABIC and Ensemble Simulation Science.gov (United States) Takagawa, T. 2016-12-01 An ensemble forecasting scheme for tsunami inundation is presented. The scheme consists of three elemental methods. The first is a hierarchical Bayesian inversion using Akaike's Bayesian Information Criterion (ABIC). The second is Montecarlo sampling from a probability density function of multidimensional normal distribution. The third is ensamble analysis of tsunami inundation simulations with multiple tsunami sources. Simulation based validation of the model was conducted. A tsunami scenario of M9.1 Nankai earthquake was chosen as a target of validation. Tsunami inundation around Nagoya Port was estimated by using synthetic tsunami waveforms at offshore GPS buoys. The error of estimation of tsunami inundation area was about 10% even if we used only ten minutes observation data. The estimation accuracy of waveforms on/off land and spatial distribution of maximum tsunami inundation depth is demonstrated. 5. Introduction to “Global tsunami science: Past and future, Volume III” Science.gov (United States) Rabinovich, Alexander B.; Fritz, Hermann M.; Tanioka, Yuichiro; Geist, Eric L. 2018-01-01 Twenty papers on the study of tsunamis are included in Volume III of the PAGEOPH topical issue “Global Tsunami Science: Past and Future”. Volume I of this topical issue was published as PAGEOPH, vol. 173, No. 12, 2016 and Volume II as PAGEOPH, vol. 174, No. 8, 2017. Two papers in Volume III focus on specific details of the 2009 Samoa and the 1923 northern Kamchatka tsunamis; they are followed by three papers related to tsunami hazard assessment for three different regions of the world oceans: South Africa, Pacific coast of Mexico and the northwestern part of the Indian Ocean. The next six papers are on various aspects of tsunami hydrodynamics and numerical modelling, including tsunami edge waves, resonant behaviour of compressible water layer during tsunamigenic earthquakes, dispersive properties of seismic and volcanically generated tsunami waves, tsunami runup on a vertical wall and influence of earthquake rupture velocity on maximum tsunami runup. Four papers discuss problems of tsunami warning and real-time forecasting for Central America, the Mediterranean coast of France, the coast of Peru, and some general problems regarding the optimum use of the DART buoy network for effective real-time tsunami warning in the Pacific Ocean. Two papers describe historical and paleotsunami studies in the Russian Far East. The final set of three papers importantly investigates tsunamis generated by non-seismic sources: asteroid airburst and meteorological disturbances. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation. 6. Probabilistic Tsunami Hazard Analysis Science.gov (United States) Thio, H. K.; Ichinose, G. A.; Somerville, P. G.; Polet, J. 
2006-12-01 The recent tsunami disaster caused by the 2004 Sumatra-Andaman earthquake has focused our attention on the hazard posed by large earthquakes that occur under water, in particular subduction zone earthquakes, and the tsunamis that they generate. Even though these kinds of events are rare, the very large loss of life and material destruction caused by this earthquake warrant a significant effort towards the mitigation of the tsunami hazard. For ground motion hazard, Probabilistic Seismic Hazard Analysis (PSHA) has become a standard practice in the evaluation and mitigation of seismic hazard to populations in particular with respect to structures, infrastructure and lifelines. Its ability to condense the complexities and variability of seismic activity into a manageable set of parameters greatly facilitates not only the design of effective seismic resistant buildings but also the planning of infrastructure projects. Probabilistic Tsunami Hazard Analysis (PTHA) achieves the same goal for hazards posed by tsunami. There are great advantages of implementing such a method to evaluate the total risk (seismic and tsunami) to coastal communities. The method that we have developed is based on the traditional PSHA and therefore completely consistent with standard seismic practice. Because of the strong dependence of tsunami wave heights on bathymetry, we use a full tsunami waveform computation in lieu of attenuation relations that are common in PSHA. By pre-computing and storing the tsunami waveforms at points along the coast generated for sets of subfaults that comprise larger earthquake faults, we can efficiently synthesize tsunami waveforms for any slip distribution on those faults by summing the individual subfault tsunami waveforms (weighted by their slip). This efficiency makes it feasible to use Green's function summation in lieu of attenuation relations to provide very accurate estimates of tsunami height for probabilistic calculations, where one typically computes 7. Worst-Case Scenario Tsunami Hazard Assessment in Two Historically and Economically Important Districts in Eastern Sicily (Italy) Science.gov (United States) Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.; Paparo, M. A. 2015-12-01 The portion of the eastern Sicily coastline (southern Italy), ranging from the southern part of the Catania Gulf (to the north) down to the southern-eastern end of the island, represents a very important geographical domain from the industrial, commercial, military, historical and cultural points of view. Here the two major cities of Augusta and Siracusa are found. In particular, the Augusta bay hosts one of the largest petrochemical poles in the Mediterranean, and Siracusa has been listed among the UNESCO World Heritage Sites since 2005. This area was hit by at least seven tsunamis in the approximate time interval from 1600 BC to present, the most famous being the 365, 1169, 1693 and 1908 tsunamis. The choice of this area as one of the sites for the testing of innovative methods for tsunami hazard, vulnerability and risk assessment and reduction is then fully justified. This is being developed in the frame of the EU Project called ASTARTE - Assessment, STrategy And Risk Reduction for Tsunamis in Europe (Grant 603839, 7th FP, ENV.2013.6.4-3).
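The Green's function summation described in entry 6 amounts to a slip-weighted linear combination of pre-computed subfault waveforms; the sketch below shows that combination with random placeholder arrays standing in for actual pre-computed Green's functions.

```python
# Sketch of Green's function summation (entry 6 above): coastal tsunami
# waveforms for an arbitrary slip distribution are the slip-weighted sum of
# pre-computed unit-slip subfault waveforms. Arrays are random placeholders.
import numpy as np

rng = np.random.default_rng(42)
n_subfaults, n_sites, n_samples = 20, 3, 600

# Pre-computed unit-slip waveforms: (subfault, coastal site, time sample)
unit_waveforms = rng.normal(scale=0.05, size=(n_subfaults, n_sites, n_samples))
slip_m = rng.uniform(0.0, 8.0, size=n_subfaults)          # one scenario's slip

scenario_waveforms = np.tensordot(slip_m, unit_waveforms, axes=1)  # (site, sample)
print("peak coastal amplitudes (m):", np.abs(scenario_waveforms).max(axis=1).round(2))
```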
We assess the tsunami hazard for the Augusta-Siracusa area through the worst-case credible scenario technique, which can be schematically divided into the following steps: 1) Selection of five main source areas, both in the near- and in the far-field (Hyblaean-Malta escarpment, Messina Straits, Ionian subduction zone, Calabria offshore, western Hellenic Trench); 2) Choice of potential and credible tsunamigenic faults in each area: 38 faults were selected, with properly assigned magnitude, geometry and focal mechanism; 3) Computation of the maximum tsunami wave elevations along the eastern Sicily coast on a coarse grid (by means of the in-house code UBO-TSUFD) and extraction of the 9 scenarios that produce the largest effects in the target areas of Augusta and Siracusa; 4) For each of the 9 scenarios we run numerical UBO-TSUFD simulations over a set of five nested grids, with grid cells size decreasing from 3 km in the open Ionian 8. Variation of Maximum Tree Height and Annual Shoot Growth of Smith Fir at Various Elevations in the Sygera Mountains, Southeastern Tibetan Plateau Science.gov (United States) Wang, Yafeng; Čufar, Katarina; Eckstein, Dieter; Liang, Eryuan 2012-01-01 Little is known about tree height and height growth (as annual shoot elongation of the apical part of vertical stems) of coniferous trees growing at various altitudes on the Tibetan Plateau, which provides a high-elevation natural platform for assessing tree growth performance in relation to future climate change. We here investigated the variation of maximum tree height and annual height increment of Smith fir (Abies georgei var. smithii) in seven forest plots (30 m×40 m) along two altitudinal transects between 3,800 m and 4,200/4,390 m above sea level (a.s.l.) in the Sygera Mountains, southeastern Tibetan Plateau. Four plots were located on north-facing slopes and three plots on southeast-facing slopes. At each site, annual shoot growth was obtained by measuring the distance between successive terminal bud scars along the main stem of 25 trees that were between 2 and 4 m high. Maximum/mean tree height and mean annual height increment of Smith fir decreased with increasing altitude up to the tree line, indicative of a stress gradient (the dominant temperature gradient) along the altitudinal transect. Above-average mean minimum summer (particularly July) temperatures affected height increment positively, whereas precipitation had no significant effect on shoot growth. The time series of annual height increments of Smith fir can be used for the reconstruction of past climate on the southeastern Tibetan Plateau. In addition, it can be expected that the rising summer temperatures observed in the recent past and anticipated for the future will enhance Smith fir's growth throughout its altitudinal distribution range. PMID:22396738 9. Tsunami on Sanriku Coast in 1586: Orphan or Ghost Tsunami ? Science.gov (United States) Satake, K. 2017-12-01 The Peruvian earthquake on July 9, 1586 was the oldest earthquake that damaged Lima. The tsunami height was assigned as 24 m in Callao and 1-2 m in Miyagi prefecture in Japan by Soloviev and Go (1975). Dorbath et al. (1990) studied historical earthquakes in Peru and estimated that the 1586 earthquake was similar to the 1974 event (Mw 8.1) with source length of 175 km. They referred two different tsunami heights, 3. 7m and 24 m, in Callao, and judged that the latter was exaggerated. Okal et al. (2006) could not make a source model to explain both tsunami heights in Callao and Japan. 
More recently, Butler et al. (2017) estimated the age of coral boulders in Hawaii as AD 1572 +/- 21, speculated the tsunami source in Aleutians, and attributed it to the source of the 1586 tsunami in Japan. Historical tsunamis, both near-field and far-field, have been documented along the Sanriku coast since 1586 (e.g., Watanabe, 1998). However, there is no written document for the 1586 tsunami (Tsuji et al., 2013). Ninomiya (1960) compiled the historical tsunami records on the Sanriku coast soon after the 1960 Chilean tsunami, and correlated the legend of tsunami in Tokura with the 1586 Peruvian earthquake, although he noted that the dates were different. About the legend, he referred to Kunitomi(1933) who compiled historical tsunami data after the 1933 Showa Sanriku tsunami. Kunitomi referred to "Tsunami history of Miyagi prefecture" published after the 1896 Meiji Sanriku tsunami. "Tsunami history" described the earthquake and tsunami damage of Tensho earthquake on January 18 (Gregorian),1586 in central Japan, and correlated the tsunami legend in Tokura on June 30, 1586 (G). Following the 2011 Tohoku tsunami, tsunami legend in Tokura was studied again (Ebina, 2015). A local person published a story he heard from his grandfather that many small valleys were named following the 1611 tsunami, which inundated further inland than the 2011 tsunami. Ebina (2015), based on historical documents 10. Characteristics of Recent Tsunamis Science.gov (United States) Sweeney, A. D.; Eble, M. C.; Mungov, G. 2017-12-01 How long do tsunamis impact a coast? How often is the largest tsunami wave the first to arrive? How do measurements in the far field differ from those made close to the source? Extending the study of Eblé et al. (2015) who showed the prevalence of a leading negative phase, we assimilate and summarize characteristics of known tsunami events recorded on bottom pressure and coastal water level stations throughout the world oceans to answer these and other questions. An extensive repository of data from the National Centers for Environmental Information (NCEI) archive for tsunami-ready U.S. tide gauge stations, housing more than 200 sites going back 10 years are utilized as are some of the more 3000 marigrams (analog or paper tide gauge records) for tsunami events. The focus of our study is on five tsunamis generated by earthquakes: 2010 Chile (Maule), 2011 East Japan (Tohoku), 2012 Haida Gwaii, 2014 Chile (Iquique), and 2015 Central Chile and one meteorologically generated tsunami on June 2013 along the U.S. East Coast and Caribbean. Reference: Eblé, M., Mungov, G. & Rabinovich, A. On the Leading Negative Phase of Major 2010-2014 Tsunamis. Pure Appl. Geophys. (2015) 172: 3493. https://doi.org/10.1007/s00024-015-1127-5 11. Probabilistic tsunami hazard assessment from incomplete and uncertain historical catalogues with application to tsunamigenic regions in the Pacific Ocean. NARCIS (Netherlands) Smit, A.; Kijko, Andrzej; Stein, A. The paper presents a new method for empirical assessment of tsunami recurrence parameters, namely the mean tsunami activity rate λT , the Soloviev–Imamura frequency–magnitude power law bT -value, and the coastline-characteristic, maximum possible tsunami intensity imax . The three 12. Airburst-Generated Tsunamis Science.gov (United States) Berger, Marsha; Goodman, Jonathan 2018-04-01 This paper examines the questions of whether smaller asteroids that burst in the air over water can generate tsunamis that could pose a threat to distant locations. 
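The catalogue-based hazard entry above estimates a mean tsunami activity rate and a Soloviev-Imamura frequency-magnitude b-value from historical data. That paper's estimators explicitly handle incomplete and uncertain catalogues; the sketch below is only the naive complete-catalogue analogue, with invented intensities and an assumed completeness threshold.

```python
import numpy as np

# Hypothetical complete catalogue: tsunami intensities observed over a 200-year window.
intensities = np.array([1.5, 2.0, 2.5, 1.0, 3.0, 1.5, 2.0, 4.0])
years_covered = 200.0
i_min = 1.0  # assumed completeness threshold

# Mean activity rate above i_min and an Aki/Utsu-style maximum-likelihood b-value.
rate = intensities.size / years_covered
b_value = np.log10(np.e) / (intensities.mean() - i_min)
print(f"activity rate ~ {rate:.3f} events/yr, b-value ~ {b_value:.2f}")
```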
Such airburst-generated tsunamis are qualitatively different from the more frequently studied earthquake-generated tsunamis, and differ as well from tsunamis generated by asteroids that strike the ocean. Numerical simulations are presented using the shallow water equations in several settings, demonstrating very little tsunami threat from this scenario. A model problem with an explicit solution that demonstrates and explains the same phenomena found in the computations is analyzed. We discuss the question of whether compressibility and dispersion are important effects that should be included, and show results from a more sophisticated model problem using the linearized Euler equations that begins to address this. 13. A probabilistic tsunami hazard assessment for Indonesia Science.gov (United States) Horspool, N.; Pranantyo, I.; Griffin, J.; Latief, H.; Natawidjaja, D. H.; Kongko, W.; Cipta, A.; Bustaman, B.; Anugrah, S. D.; Thio, H. K. 2014-05-01 Probabilistic hazard assessments are a fundamental tool for assessing the threats posed by hazards to communities and are important for underpinning evidence-based decision making on risk mitigation activities. Indonesia has been the focus of intense tsunami risk mitigation efforts following the 2004 Indian Ocean Tsunami, but this has been largely concentrated on the Sunda Arc, with little attention to other tsunami-prone areas of the country such as eastern Indonesia. We present the first nationally consistent Probabilistic Tsunami Hazard Assessment (PTHA) for Indonesia. This assessment produces time-independent forecasts of tsunami hazard at the coast from tsunamis generated by local, regional and distant earthquake sources. The methodology is based on the established Monte Carlo approach to probabilistic seismic hazard assessment (PSHA) and has been adapted to tsunamis. We account for sources of epistemic and aleatory uncertainty in the analysis through the use of logic trees and through sampling probability density functions. For short return periods (100 years) the highest tsunami hazard is on the west coast of Sumatra, the south coast of Java and the north coast of Papua. For longer return periods (500-2500 years), the tsunami hazard is highest along the Sunda Arc, reflecting larger maximum magnitudes along the Sunda Arc. The annual probability of experiencing a tsunami with a height at the coast of > 0.5 m is greater than 10% for Sumatra, Java, the Sunda Islands (Bali, Lombok, Flores, Sumba) and north Papua. The annual probability of experiencing a tsunami with a height of >3.0 m, which would cause significant inundation and fatalities, is 1-10% in Sumatra, Java, Bali, Lombok and north Papua, and 0.1-1% for north Sulawesi, Seram and Flores. The results of this national-scale hazard assessment provide evidence for disaster managers to prioritise regions for risk mitigation activities and/or more detailed hazard or risk assessment. 14. Tsunami Simulators in Physical Modelling - Concept to Practical Solutions Science.gov (United States) Chandler, Ian; Allsop, William; Robinson, David; Rossetto, Tiziana; McGovern, David; Todd, David 2017-04-01 Whilst many researchers have conducted simple 'tsunami impact' studies, few engineering tools are available to assess the onshore impacts of tsunami, with no agreed methods available to predict loadings on coastal defences, buildings or related infrastructure.
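The Indonesian PTHA above rests on Monte Carlo sampling of earthquake sources. The toy sketch below shows the core idea of converting a long synthetic catalogue into annual exceedance probabilities; the rate, magnitude bounds and the coastal "amplitude" relation are invented placeholders, not values from that study.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, rate = 100_000, 0.05          # synthetic catalogue length (yr), mean events per year
n_events = rng.poisson(rate * n_years)

# Truncated Gutenberg-Richter magnitudes (hypothetical b-value and magnitude bounds).
b, m_min, m_max = 1.0, 7.0, 9.0
u = rng.random(n_events)
mags = m_min - np.log10(1.0 - u * (1.0 - 10.0 ** (-b * (m_max - m_min)))) / b

# Placeholder coastal amplitude model: grows with magnitude, with lognormal scatter.
amp = 10.0 ** (1.2 * (mags - 8.0)) * np.exp(rng.normal(0.0, 0.5, n_events))

for h in (0.5, 3.0):
    p_annual = np.count_nonzero(amp > h) / n_years
    print(f"annual probability of coastal amplitude > {h} m: {p_annual:.4f}")
```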
Most previous impact studies have relied upon unrealistic waveforms (solitary or dam-break waves and bores) rather than full-duration tsunami waves, or have used simplified models of nearshore and over-land flows. Over the last 10+ years, pneumatic Tsunami Simulators for the hydraulic laboratory have been developed into an exciting and versatile technology, allowing the forces of real-world tsunami to be reproduced and measured in a laboratory environment for the first time. These devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example coastal defences and infrastructure. They have also reproduced full-duration tsunamis including Mercator 2004 and Tohoku 2011, both at 1:50 scale. Engineering scale models of these tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences, pressures / forces on buildings, and scour at idealised buildings. This presentation will describe how these Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facilities within which they operate, and will present research results from three generations of Tsunami Simulators. Highlights of direct importance to natural hazard modellers and coastal engineers include measurements of wave run-up levels, forces on single and multiple buildings and comparison with previous theoretical predictions. Multiple buildings have two malign effects. The density of buildings to flow area (blockage ratio) increases water depths and flow velocities in the 'streets'. But the increased building densities themselves also increase the cost of flow per unit area (both personal and monetary). The most recent study with the Tsunami 15. The 2017 México Tsunami Record, Numerical Modeling and Threat Assessment in Costa Rica Science.gov (United States) Chacón-Barrantes, Silvia 2018-03-01 An M w 8.2 earthquake and tsunami occurred offshore the Pacific coast of México on 2017-09-08, at 04:49 UTC. Costa Rican tide gauges have registered a total of 21 local, regional and far-field tsunamis. The Quepos gauge registered 12 tsunamis between 1960 and 2014 before it was relocated inside a harbor by late 2014, where it registered two more tsunamis. This paper analyzes the 2017 México tsunami as recorded by the Quepos gauge. It took 2 h for the tsunami to arrive to Quepos, with a first peak height of 9.35 cm and a maximum amplitude of 18.8 cm occurring about 6 h later. As a decision support tool, this tsunami was modeled for Quepos in real time using ComMIT (Community Model Interface for Tsunami) with the finer grid having a resolution of 1 arcsec ( 30 m). However, the model did not replicate the tsunami record well, probably due to the lack of a finer and more accurate bathymetry. In 2014, the National Tsunami Monitoring System of Costa Rica (SINAMOT) was created, acting as a national tsunami warning center. The occurrence of the 2017 México tsunami raised concerns about warning dissemination mechanisms for most coastal communities in Costa Rica, due to its short travel time. 16. Numerical Tsunami Hazard Assessment of the Only Active Lesser Antilles Arc Submarine Volcano: Kick 'em Jenny. Science.gov (United States) Dondin, F. J. Y.; Dorville, J. F. M.; Robertson, R. E. A. 2015-12-01 The Lesser Antilles Volcanic Arc has potentially been hit by prehistorical regional tsunamis generated by voluminous volcanic landslides (volume > 1 km3) among the 53 events recognized so far. 
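The Tsunami Simulator entry above reproduces real tsunamis at 1:50 geometric scale under Froude similarity. A small worked example of how prototype heights and periods translate to the laboratory, using hypothetical prototype values:

```python
import math

scale = 50.0                       # 1:50 geometric scale, as used for the laboratory reproductions
proto_height_m = 5.0               # hypothetical prototype (real-world) wave height
proto_period_s = 20.0 * 60.0       # hypothetical prototype wave period (20 min)

model_height_m = proto_height_m / scale             # lengths scale by 1:50
model_period_s = proto_period_s / math.sqrt(scale)  # times and velocities scale by 1:sqrt(50)
print(f"model wave: {model_height_m:.2f} m high, {model_period_s:.0f} s period")
```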
No field evidence of these tsunamis has been found in the vicinity of the sources. Such a scenario taking place nowadays would trigger hazardous tsunami waves bearing potentially catastrophic consequences for the closest islands and regional offshore oil platforms. Here we applied a complete hazard assessment method to the only active submarine volcano of the arc, Kick 'em Jenny (KeJ). KeJ is the southernmost edifice with recognized associated volcanic landslide deposits. Of the three identified landslide episodes, one is associated with a collapse volume of ca. 4.4 km3. Numerical simulations considering a single-pulse collapse revealed that this episode would have produced a regional tsunami. The current volume of the edifice is estimated at ca. 1.5 km3. A previous study assessed the regional tsunami hazard in terms of shoreline surface elevation (run-up) for a potential flank collapse scenario at KeJ. However, this assessment was based on an inferred volume of collapse material. We aim, first, to quantify potential initial volumes of collapse material using relative slope instability analysis (RSIA) and, second, to assess first-order run-ups and maximum inland inundation distances for Barbados and Trinidad and Tobago, i.e. two important economic centers of the Lesser Antilles. In this framework we present, for seven geomechanical models tested in the RSIA step, maps of the critical failure surface associated with the factor of stability (Fs) for twelve sectors of 30° each; we then introduce maps of expected potential run-ups (run-up × the probability of failure at a sector) at the shoreline. The RSIA evaluates critical potential failure surfaces associated with Fs < 1 as compared to areas of deficit/surplus of mass/volume identified on the volcanic edifice using (VolcanoFit 2 17. Mental Health in Sumatra After the Tsunami Science.gov (United States) Frankenberg, Elizabeth; Friedman, Jed; Gillespie, Thomas; Ingwersen, Nicholas; Pynoos, Robert; Rifai, Iip Umar; Sikoki, Bondan; Steinberg, Alan; Sumantri, Cecep; Suriastini, Wayan; Thomas, Duncan 2008-01-01 Objectives. We assessed the levels and correlates of posttraumatic stress reactivity (PTSR) of more than 20,000 adult tsunami survivors by analyzing survey data from coastal Aceh and North Sumatra, Indonesia. Methods. A population-representative sample of individuals interviewed before the tsunami was traced in 2005 to 2006. We constructed 2 scales measuring PTSR by using 7 symptom items from the Post Traumatic Stress Disorder (PTSD) Checklist–Civilian Version. One scale measured PTSR at the time of interview, and the other measured PTSR at the point of maximum intensity since the disaster. Results. PTSR scores were highest for respondents from heavily damaged areas. In all areas, scores declined over time. Gender and age were significant predictors of PTSR; markers of socioeconomic status before the tsunami were not. Exposure to traumatic events, loss of kin, and property damage were significantly associated with higher PTSR scores. Conclusions. The tsunami produced posttraumatic stress reactions across a wide region of Aceh and North Sumatra. Public health will be enhanced by the provision of counseling services that reach not only people directly affected by the tsunami but also those living beyond the area of immediate impact. PMID:18633091 18. Global Tsunami Database: Adding Geologic Deposits, Proxies, and Tools Science.gov (United States) Brocko, V. R.; Varner, J.
2007-12-01 A result of collaboration between NOAA's National Geophysical Data Center (NGDC) and the Cooperative Institute for Research in the Environmental Sciences (CIRES), the Global Tsunami Database includes instrumental records, human observations, and now, information inferred from the geologic record. Deep Ocean Assessment and Reporting of Tsunamis (DART) data, historical reports, and information gleaned from published tsunami deposit research build a multi-faceted view of tsunami hazards and their history around the world. Tsunami history provides clues to what might happen in the future, including frequency of occurrence and maximum wave heights. However, instrumental and written records commonly span too little time to reveal the full range of a region's tsunami hazard. The sedimentary deposits of tsunamis, identified with the aid of modern analogs, increasingly complement instrumental and human observations. By adding the component of tsunamis inferred from the geologic record, the Global Tsunami Database extends the record of tsunamis backward in time. Deposit locations, their estimated age and descriptions of the deposits themselves fill in the tsunami record. Tsunamis inferred from proxies, such as evidence for coseismic subsidence, are included to estimate recurrence intervals, but are flagged to highlight the absence of a physical deposit. Authors may submit their own descriptions and upload digital versions of publications. Users may sort by any populated field, including event, location, region, age of deposit, author, publication type (extract information from peer reviewed publications only, if you wish), grain size, composition, presence/absence of plant material. Users may find tsunami deposit references for a given location, event or author; search for particular properties of tsunami deposits; and even identify potential collaborators. Users may also download public-domain documents. Data and information may be viewed using tools designed to extract and 19. Long-term statistics of extreme tsunami height at Crescent City Science.gov (United States) Dong, Sheng; Zhai, Jinjin; Tao, Shanshan 2017-06-01 Historically, Crescent City is one of the most vulnerable communities impacted by tsunamis along the west coast of the United States, largely attributed to its offshore geography. Trans-ocean tsunamis usually produce large wave runup at Crescent Harbor resulting in catastrophic damages, property loss and human death. How to determine the return values of tsunami height using relatively short-term observation data is of great significance to assess the tsunami hazards and improve engineering design along the coast of Crescent City. In the present study, the extreme tsunami heights observed along the coast of Crescent City from 1938 to 2015 are fitted using six different probabilistic distributions, namely, the Gumbel distribution, the Weibull distribution, the maximum entropy distribution, the lognormal distribution, the generalized extreme value distribution and the generalized Pareto distribution. The maximum likelihood method is applied to estimate the parameters of all above distributions. Both Kolmogorov-Smirnov test and root mean square error method are utilized for goodness-of-fit test and the better fitting distribution is selected. 
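The Crescent City entry above fits several candidate distributions to observed extreme tsunami heights by maximum likelihood and ranks them with a Kolmogorov-Smirnov test. A compact sketch of that workflow with scipy, using synthetic stand-in data rather than the 1938-2015 observations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
heights = stats.gumbel_r.rvs(loc=1.5, scale=0.3, size=78, random_state=rng)  # stand-in annual maxima (m)

candidates = {
    "gumbel": stats.gumbel_r,
    "weibull": stats.weibull_min,
    "lognormal": stats.lognorm,
    "gev": stats.genextreme,
}
fits = {}
for name, dist in candidates.items():
    params = dist.fit(heights)                                # maximum likelihood estimates
    fits[name] = stats.kstest(heights, dist.cdf, args=params).statistic

best = min(fits, key=fits.get)                                # smallest K-S statistic wins
print("best-fitting distribution:", best)
```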
Assuming that the occurrence frequency of tsunami in each year follows the Poisson distribution, the Poisson compound extreme value distribution can be used to fit the annual maximum tsunami amplitude, and then the point and interval estimations of return tsunami heights are calculated for structural design. The results show that the Poisson compound extreme value distribution fits tsunami heights very well and is suitable to determine the return tsunami heights for coastal disaster prevention. 20. Community exposure to tsunami hazards in California Science.gov (United States) Wood, Nathan J.; Ratliff, Jamie; Peters, Jeff 2013-01-01 Evidence of past events and modeling of potential events suggest that tsunamis are significant threats to low-lying communities on the California coast. To reduce potential impacts of future tsunamis, officials need to understand how communities are vulnerable to tsunamis and where targeted outreach, preparedness, and mitigation efforts may be warranted. Although a maximum tsunami-inundation zone based on multiple sources has been developed for the California coast, the populations and businesses in this zone have not been documented in a comprehensive way. To support tsunami preparedness and risk-reduction planning in California, this study documents the variations among coastal communities in the amounts, types, and percentages of developed land, human populations, and businesses in the maximum tsunami-inundation zone. The tsunami-inundation zone includes land in 94 incorporated cities, 83 unincorporated communities, and 20 counties on the California coast. According to 2010 U.S. Census Bureau data, this tsunami-inundation zone contains 267,347 residents (1 percent of the 20-county resident population), of which 13 percent identify themselves as Hispanic or Latino, 14 percent identify themselves as Asian, 16 percent are more than 65 years in age, 12 percent live in unincorporated areas, and 51 percent of the households are renter occupied. Demographic attributes related to age, race, ethnicity, and household status of residents in tsunami-prone areas demonstrate substantial range among communities that exceed these regional averages. The tsunami-inundation zone in several communities also has high numbers of residents in institutionalized and noninstitutionalized group quarters (for example, correctional facilities and military housing, respectively). Communities with relatively high values in the various demographic categories are identified throughout the report. The tsunami-inundation zone contains significant nonresidential populations based on 2011 economic 1. Probabilistic Tsunami Hazard Analysis of the Pacific Coast of Mexico: Case Study Based on the 1995 Colima Earthquake Tsunami Directory of Open Access Journals (Sweden) Nobuhito Mori 2017-06-01 Full Text Available This study develops a novel computational framework to carry out probabilistic tsunami hazard assessment for the Pacific coast of Mexico. The new approach enables the consideration of stochastic tsunami source scenarios having variable fault geometry and heterogeneous slip that are constrained by an extensive database of rupture models for historical earthquakes around the world. The assessment focuses upon the 1995 Jalisco–Colima Earthquake Tsunami from a retrospective viewpoint. Numerous source scenarios of large subduction earthquakes are generated to assess the sensitivity and variability of tsunami inundation characteristics of the target region. 
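The Crescent City entry above combines a Poisson model for the number of tsunamis per year with a per-event height distribution (the Poisson compound extreme value approach) to obtain return heights. A schematic of that combination with invented parameter values, solving for the height whose annual exceedance probability is 1/100:

```python
import numpy as np
from scipy import stats, optimize

lam = 0.4                                          # hypothetical mean number of tsunamis per year
event_height = stats.gumbel_r(loc=0.4, scale=0.5)  # hypothetical per-event height distribution (m)

def annual_exceedance(h):
    # Probability that at least one event in a year exceeds h, given Poisson occurrences.
    return 1.0 - np.exp(-lam * event_height.sf(h))

h_100 = optimize.brentq(lambda h: annual_exceedance(h) - 1.0 / 100.0, 0.0, 20.0)
print(f"100-year return height ~ {h_100:.2f} m")
```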
Analyses of nine slip models along the Mexican Pacific coast are performed, and statistical characteristics of the slips (e.g., coherent structures of slip spectra) are estimated. The source variability allows exploring a wide range of tsunami scenarios for a moment magnitude (Mw) 8 subduction earthquake in the Mexican Pacific region to conduct thorough sensitivity analyses and to quantify the tsunami height variability. The numerical results indicate a strong sensitivity of maximum tsunami height to major slip locations in the source and indicate major uncertainty at the first peak of tsunami waves. 2. Observations and Modeling of the August 27, 2012 Earthquake and Tsunami affecting El Salvador and Nicaragua Science.gov (United States) Borrero, Jose C.; Kalligeris, Nikos; Lynett, Patrick J.; Fritz, Hermann M.; Newman, Andrew V.; Convers, Jaime A. 2014-12-01 On 27 August 2012 (04:37 UTC, 26 August 10:37 p.m. local time) a magnitude Mw = 7.3 earthquake occurred off the coast of El Salvador and generated a surprisingly large local tsunami. Following the event, local and international tsunami teams surveyed the tsunami effects in El Salvador and northern Nicaragua. The tsunami reached a maximum height of ~6 m with inundation of up to 340 m inland along a 25 km section of coastline in eastern El Salvador. Less severe inundation was reported in northern Nicaragua. In the far-field, the tsunami was recorded by a DART buoy and tide gauges in several locations of the eastern Pacific Ocean but did not cause any damage. The field measurements and recordings are compared to numerical modeling results using initial conditions of tsunami generation based on finite-fault earthquake and tsunami inversions and a uniform slip model. 3. Safety evaluation of nuclear power plant against the virtual tsunami International Nuclear Information System (INIS) Chin, S. B.; Imamura, Fumihiko 2004-01-01 The main scope of this study is the numerical analysis of a virtual tsunami event near the Ulchin Nuclear Power Plants. In the numerical analysis, the maximum run-up height and draw-down are estimated at the Ulchin Nuclear Power Plants. The computer program developed in this study describes the propagation and associated run-up process of tsunamis by solving linear and nonlinear shallow-water equations with finite difference methods. It can be used to check the safety of a nuclear power plant against tsunami attacks. The program can also be used to calculate the wave run-up height and provide proper design criteria for coastal facilities and structures. A maximum inundation zone along the coastline can be developed by using the moving boundary condition. As a result, it is predicted that the Ulchin Nuclear Power Plants might be safe against the virtual tsunami event. Although the Ulchin Nuclear Power Plants are safe against the virtual tsunami event, the occurrence of a huge tsunami in the seismic gap should be investigated in detail. Furthermore, the possibility of nearshore tsunamis around the Korean Peninsula should also be studied and monitored continuously. 4. The Three Tsunamis Science.gov (United States) Antcliff, Richard R. 2007-01-01 We often talk about how different our world is from our parents' world. We then extrapolate this thinking to our children and try to imagine the world they will face. This is hard enough. However, change is changing! The rate at which change is occurring is accelerating. These new ideas, technologies and ecologies appear to be coming at us like tsunamis.
Our approach to responding to these oncoming tsunamis will frame the future our children will live in. There are many of these tsunamis; I am just going to focus on three really big ones heading our way. 5. St. Croix, U.S. Virgin Islands Coastal Digital Elevation Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The 1/3 arc-second St. Croix, U.S. Virgin Islands Coastal Digital Elevation Model will be used to support NOAA's tsunami forecast system and for tsunami inundation... 6. Test of TEDA, Tsunami Early Detection Algorithm Science.gov (United States) Bressan, Lidia; Tinti, Stefano 2010-05-01 Tsunami detection in real-time, both offshore and at the coastline, plays a key role in Tsunami Warning Systems since it provides so far the only reliable and timely proof of tsunami generation, and is used to confirm or cancel tsunami warnings previously issued on the basis of seismic data alone. Moreover, in case of submarine or coastal landslide generated tsunamis, which are not announced by clear seismic signals and are typically local, real-time detection at the coastline might be the fastest way to release a warning, even if the useful time for emergency operations might be limited. TEDA is an algorithm for real-time detection of tsunami signal on sea-level records, developed by the Tsunami Research Team of the University of Bologna. The development and testing of the algorithm has been accomplished within the framework of the Italian national project DPC-INGV S3 and the European project TRANSFER. The algorithm is to be implemented at station level, and it is based therefore only on sea-level data of a single station, either a coastal tide-gauge or an offshore buoy. TEDA's principle is to discriminate the first tsunami wave from the previous background signal, which implies the assumption that the tsunami waves introduce a difference in the previous sea-level signal. Therefore, in TEDA the instantaneous (most recent) and the previous background sea-level elevation gradients are characterized and compared by proper functions (IS and BS) that are updated at every new data acquisition. Detection is triggered when the instantaneous signal function passes a set threshold and at the same time it is significantly bigger compared to the previous background signal. The functions IS and BS depend on temporal parameters that allow the algorithm to be adapted different situations: in general, coastal tide-gauges have a typical background spectrum depending on the location where the instrument is installed, due to local topography and bathymetry, while offshore buoys are 7. Tsunami Forecasting in the Atlantic Basin Science.gov (United States) Knight, W. R.; Whitmore, P.; Sterling, K.; Hale, D. A.; Bahng, B. 2012-12-01 -computation - starting with those sources that carry the highest risk. Model computation zones are confined to regions at risk to save computation time. For example, Atlantic sources have been shown to not propagate into the Gulf of Mexico. Therefore, fine grid computations are not performed in the Gulf for Atlantic sources. Outputs from the Atlantic model include forecast marigrams at selected sites, maximum amplitudes, drawdowns, and currents for all coastal points. The maximum amplitude maps will be supplemented with contoured energy flux maps which show more clearly the effects of bathymetric features on tsunami wave propagation. During an event, forecast marigrams will be compared to observations to adjust the model results. 
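The TEDA entry above detects a tsunami on a single sea-level record by comparing an "instantaneous" gradient-based function (IS) against a "background" function (BS), triggering when the recent signal both exceeds a threshold and stands out from the background. The sketch below is only a loose analogue of that idea on a synthetic tide-gauge record; the window lengths, thresholds and function definitions are invented, not those of TEDA.

```python
import numpy as np

def detect_onset(sea_level, dt=15.0, short_win=8, long_win=240,
                 abs_threshold=0.05, ratio_threshold=3.0):
    """Return the first sample index at which the recent sea-level change is large
    both in absolute terms and relative to the preceding background, else None."""
    grad = np.gradient(sea_level, dt)                                   # m/s
    for i in range(short_win + long_win, sea_level.size):
        inst = np.abs(grad[i - short_win:i]).mean()                     # recent signal
        back = np.abs(grad[i - short_win - long_win:i - short_win]).mean() + 1e-9
        if inst * short_win * dt > abs_threshold and inst / back > ratio_threshold:
            return i
    return None

# Synthetic 6-hour record sampled every 15 s: tide + noise + a 0.4 m onset after 4 h.
t = np.arange(0.0, 6 * 3600.0, 15.0)
rng = np.random.default_rng(3)
record = 0.5 * np.sin(2 * np.pi * t / 44714.0) + 0.005 * rng.standard_normal(t.size)
mask = t > 4 * 3600.0
record[mask] += 0.4 * (1.0 - np.exp(-(t[mask] - 4 * 3600.0) / 300.0))

idx = detect_onset(record)
print("detection time (h):", None if idx is None else round(t[idx] / 3600.0, 2))
```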
The modified forecasts will then be used to set alert levels between coastal breakpoints, and provided to emergency management. 8. Development of tsunami hazard analysis Energy Technology Data Exchange (ETDEWEB) NONE 2012-08-15 The NSC (the Nuclear Safety Commission of Japan) demand to survey on tsunami deposits by use of various technical methods (Dec. 2011), because tsunami deposits have useful information on tsunami activity, tsunami source etc. However, there are no guidebooks on tsunami deposit survey in JAPAN. In order to prepare the guidebook of tsunami deposits survey and to develop the method of tsunami source estimation on the basis of tsunami deposits, JNES carried out the following issues; (1) organizing information of paleoseismological record and tsunami deposit by literature research, and (2) field survey on tsunami deposit to prepare the guidebook. As to (1), we especially gear to tsunami deposits distributed in the Pacific coast of Tohoku region, and organize the information gained about tsunami deposits in the database. In addition, as to (2), we consolidate methods for surveying and identifying tsunami deposits in the lake based on results of the field survey in Fukui Pref., carried out by JNES. These results are reflected in the guidebook on the tsunami deposits in the lake as needed. (author) 10. Statistical emulation of landslide-induced tsunamis at the Rockall Bank, NE Atlantic. Science.gov (United States) Salmanidou, D M; Guillas, S; Georgiopoulou, A; Dias, F 2017-04-01 Statistical methods constitute a useful approach to understand and quantify the uncertainty that governs complex tsunami mechanisms. Numerical experiments may often have a high computational cost. This forms a limiting factor for performing uncertainty and sensitivity analyses, where numerous simulations are required. Statistical emulators, as surrogates of these simulators, can provide predictions of the physical process in a much faster and computationally inexpensive way. They can form a prominent solution to explore thousands of scenarios that would be otherwise numerically expensive and difficult to achieve. In this work, we build a statistical emulator of the deterministic codes used to simulate submarine sliding and tsunami generation at the Rockall Bank, NE Atlantic Ocean, in two stages.
First we calibrate, against observations of the landslide deposits, the parameters used in the landslide simulations. This calibration is performed under a Bayesian framework using Gaussian Process (GP) emulators to approximate the landslide model, and the discrepancy function between model and observations. Distributions of the calibrated input parameters are obtained as a result of the calibration. In a second step, a GP emulator is built to mimic the coupled landslide-tsunami numerical process. The emulator propagates the uncertainties in the distributions of the calibrated input parameters inferred from the first step to the outputs. As a result, a quantification of the uncertainty of the maximum free surface elevation at specified locations is obtained. 11. Tsunamis and marine life Digital Repository Service at National Institute of Oceanography (India) Rao, D.V.S.; Ingole, B.S.; Tang, D.; Satyanarayan, B.; Zhao, H. The 26 December 2004 tsunami in the Indian Ocean exerted far reaching temporal and spatial impacts on marine biota. Our synthesis was based on satellite data acquired by the Laboratory for Tropical Marine Environmental Dynamics (LED) of the South... 12. Tsunami Risk Assessment Modelling in Chabahar Port, Iran Science.gov (United States) Delavar, M. R.; Mohammadi, H.; Sharifi, M. A.; Pirooz, M. D. 2017-09-01 The well-known historical tsunami in the Makran Subduction Zone (MSZ) region was generated by the earthquake of November 28, 1945 in Makran Coast in the North of Oman Sea. This destructive tsunami killed over 4,000 people in Southern Pakistan and India, caused great loss of life and devastation along the coasts of Western India, Iran and Oman. According to the report of "Remembering the 1945 Makran Tsunami", compiled by the Intergovernmental Oceanographic Commission (UNESCO/IOC), the maximum inundation of Chabahar port was 367 m toward the dry land, which had a height of 3.6 meters from the sea level. In addition, the maximum amount of inundation at Pasni (Pakistan) reached to 3 km from the coastline. For the two beaches of Gujarat (India) and Oman the maximum run-up height was 3 m from the sea level. In this paper, we first use Makran 1945 seismic parameters to simulate the tsunami in generation, propagation and inundation phases. The effect of tsunami on Chabahar port is simulated using the ComMIT model which is based on the Method of Splitting Tsunami (MOST). In this process the results are compared with the documented eyewitnesses and some reports from researchers for calibration and validation of the result. Next we have used the model to perform risk assessment for Chabahar port in the south of Iran with the worst case scenario of the tsunami. The simulated results showed that the tsunami waves will reach Chabahar coastline 11 minutes after generation and 9 minutes later, over 9.4 Km2 of the dry land will be flooded with maximum wave amplitude reaching up to 30 meters.
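The Rockall Bank statistical-emulation entry above builds Gaussian Process emulators of expensive landslide-tsunami simulations and propagates calibrated input uncertainty through them. A minimal sketch of that second stage with scikit-learn; the "simulator" outputs, input ranges and calibrated distributions below are hypothetical placeholders, not results from the study.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(5)

# Hypothetical design of simulator runs: inputs are (slide volume km^3, release depth m),
# output is the simulated maximum free-surface elevation (m) at one location.
X = rng.uniform([0.5, 200.0], [5.0, 1200.0], size=(40, 2))
y = 0.8 * X[:, 0] - 0.001 * X[:, 1] + 0.1 * rng.standard_normal(40)  # stand-in for the expensive code

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[1.0, 300.0]),
                              alpha=1e-2, normalize_y=True)
gp.fit(X, y)

# Propagate (hypothetical) calibrated-input uncertainty through the emulator by Monte Carlo.
inputs = rng.normal([2.5, 700.0], [0.5, 100.0], size=(1000, 2))
mean, std = gp.predict(inputs, return_std=True)
spread = np.sqrt(mean.var() + (std ** 2).mean())   # crude total spread: input plus emulator uncertainty
print(f"max elevation ~ {mean.mean():.2f} m +/- {spread:.2f} m")
```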
14. Development of a Tsunami Scenario Database for Marmara Sea Science.gov (United States) Ozer Sozdinler, Ceren; Necmioglu, Ocal; Meral Ozel, Nurcan 2016-04-01 Due to the very short travel times in Marmara Sea, a Tsunami Early Warning System (TEWS) has to be strongly coupled with the earthquake early warning system and should be supported with a pre-computed tsunami scenario database to be queried in near real-time based on the initial earthquake parameters. To address this problem, 30 different composite earthquake scenarios with maximum credible Mw values based on 32 fault segments have been identified to produce a detailed scenario database for all possible earthquakes in the Marmara Sea with a tsunamigenic potential. The bathy/topo data of Marmara Sea was prepared using GEBCO and ASTER data, bathymetric measurements along Bosphorus, Istanbul and Dardanelle, Canakkale and the coastline digitized from satellite images. The coarser domain in 90m-grid size was divided into 11 sub-regions having 30m-grid size in order to increase the data resolution and precision of the calculation results. The analyses were performed in nested domains with numerical model NAMIDANCE using non-linear shallow water equations. In order to cover all the residential areas, industrial facilities and touristic locations, more than 1000 numerical gauge points were selected along the coasts of Marmara Sea, which are located at water depth of 5 to 10m in finer domain. The distributions of tsunami hydrodynamic parameters were investigated together with the change of water surface elevations, current velocities, momentum fluxes and other important parameters at the gauge points. This work is funded by the project MARsite - New Directions in Seismic Hazard assessment through Focused Earth Observation in the Marmara Supersite (FP7-ENV.2012 6.4-2, Grant 308417 - see NH2.3/GMPV7.4/SM7.7) and supported by SATREPS-MarDim Project (Earthquake and Tsunami Disaster Mitigation in the Marmara Region and Disaster Education in Turkey) and JICA (Japan International Cooperation Agency). The authors would like to acknowledge Ms. Basak Firat for her assistance in 15. Floods and tsunamis.
Science.gov (United States) Llewellyn, Mark 2006-06-01 Floods and tsunamis cause few severe injuries, but those injuries can overwhelm local areas, depending on the magnitude of the disaster. Most injuries are extremity fractures, lacerations, and sprains. Because of the mechanism of soft tissue and bone injuries, infection is a significant risk. Aspiration pneumonias are also associated with tsunamis. Appropriate precautionary interventions prevent communicable disease outbreaks. Psychosocial health issues must be considered. 16. Near Field Modeling for the Maule Tsunami from DART, GPS and Finite Fault Solutions (Invited) Science.gov (United States) Arcas, D.; Chamberlin, C.; Lagos, M.; Ramirez-Herrera, M.; Tang, L.; Wei, Y. 2010-12-01 The earthquake and tsunami of February 27, 2010 in central Chile have rekindled an interest in developing techniques to predict the impact of near-field tsunamis along the Chilean coastline. Following the earthquake, several initiatives were proposed to increase the density of seismic, pressure and motion sensors along the South American trench, in order to provide field data that could be used to estimate tsunami impact on the coast. However, the precise use of those data in the elaboration of a quantitative assessment of coastal tsunami damage has not been clarified. The present work makes use of seismic, Deep-ocean Assessment and Reporting of Tsunamis (DART®) systems, and GPS measurements obtained during the Maule earthquake to initiate a number of tsunami inundation models along the rupture area by expressing different versions of the seismic crustal deformation in terms of NOAA’s tsunami unit source functions. Translation of all available real-time data into a feasible tsunami source is essential in near-field tsunami impact prediction, in which an impact assessment must be generated under very stringent time constraints. Inundation results from each different source are then contrasted with field and tide gauge data by comparing arrival time, maximum wave height, maximum inundation and tsunami decay rate, using field data collected by the authors. 17. Source of high tsunamis along the southernmost Ryukyu trench inferred from tsunami stratigraphy Science.gov (United States) Ando, Masataka; Kitamura, Akihisa; Tu, Yoko; Ohashi, Yoko; Imai, Takafumi; Nakamura, Mamoru; Ikuta, Ryoya; Miyairi, Yosuke; Yokoyama, Yusuke; Shishikura, Masanobu 2018-01-01 Four paleotsunami deposits are exposed in a trench on the coastal lowland north of the southern Ryukyu subduction zone trench. Radiocarbon ages on coral and bivalve shells show that the four deposits record tsunamis dating from the last 2000 yrs., including a historical tsunami with a maximum run-up of 30 m in 1771, for an average recurrence interval of approximately 600 yrs. Ground fissures in a soil beneath the 1771 tsunami deposit may have been generated by stronger shaking than recorded by historical documents. The repeated occurrence of the paleotsunami deposits supports a tectonic source model on the plate boundary rather than a nontectonic source model, such as submarine landslides. Assuming a thrust model at the subduction zone, the seismic coupling ratio may be as low as 20%. 18.
TSUNAMI HAZARD ASSESSMENT IN THE NORTHERN AEGEAN SEA Directory of Open Access Journals (Sweden) Barbara Theilen-Willige 2008-01-01 Full Text Available Emergency planning for the assessment of tsunami hazard inundation and of secondary effects of erosion and landslides requires mapping that can help identify coastal areas that are potentially vulnerable. The present study reviews tsunami susceptibility mapping for coastal areas of Turkey and Greece in the Aegean Sea. Potentially tsunami-vulnerable locations were identified from LANDSAT ETM imagery, Shuttle Radar Topography Mission (SRTM, 2000) data and QuickBird imagery, and from a GIS-integrated spatial database. LANDSAT ETM and Digital Elevation Model (DEM) data derived by the SRTM mission were investigated to help detect traces of past flooding events. LANDSAT ETM imagery, merged with digitally processed and enhanced SRTM data, clearly indicates the areas that may be prone to flooding if catastrophic tsunami events or storm surges occur. 19. 2004 INDIAN OCEAN TSUNAMI ON THE MALDIVES ISLANDS: INITIAL OBSERVATIONS Directory of Open Access Journals (Sweden) Barbara H. Keating 2005-01-01 Full Text Available Post-tsunami field surveys of the Maldives Islands were carried out to document the effects of the tsunami inundation. The study area was situated in the islands of South Male Atoll, which were some of the most heavily damaged islands of the Maldive Islands. The tsunami damaged the natural environment, vegetation, man-made structures, and residents. The maximum tsunami wave height was 3-4 m. This level of inundation exceeded the height of most residents. The wave height was greatest on the eastern rim of the South Male Atoll (closest to the tsunami source), and these islands were completely flooded. The islands within the interior of the atoll saw the lowest wave heights, and these were only marginally flooded. Surveys of flood lines left on the exterior and interior of structures were measured but proved to be substantially lower than the levels reported by survivors. It appears that the highest inundation was not preserved as flood lines. We suggest that the turbulence associated with the tsunami inundation erased the highest lines or that they did not form due to an absence of debris and organic compounds that acted as adhesion during the initial flooding. Significant erosion was documented. Deposition took place in the form of sand sheets, while only desultory deposition of coral clasts in marginal areas was found. Seasonal erosion and storms are likely to remove most or all of the traces of the tsunami within these islands. 20. Probabilistic tsunami hazard assessment at Seaside, Oregon, for near- and far-field seismic sources Science.gov (United States) Gonzalez, F.I.; Geist, E.L.; Jaffe, B.; Kanoglu, U.; Mofjeld, H.; Synolakis, C.E.; Titov, V.V.; Areas, D.; Bellomo, D.; Carlton, D.; Horning, T.; Johnson, J.; Newman, J.; Parsons, T.; Peters, R.; Peterson, C.; Priest, G.; Venturato, A.; Weber, J.; Wong, F.; Yalciner, A. 2009-01-01 The first probabilistic tsunami flooding maps have been developed. The methodology, called probabilistic tsunami hazard assessment (PTHA), integrates tsunami inundation modeling with methods of probabilistic seismic hazard assessment (PSHA). Application of the methodology to Seaside, Oregon, has yielded estimates of the spatial distribution of 100- and 500-year maximum tsunami amplitudes, i.e., amplitudes with 1% and 0.2% annual probability of exceedance.
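The Seaside PTHA above reports 100- and 500-year maximum amplitudes, i.e. amplitudes with 1% and 0.2% annual probability of exceedance. A short worked example of how those annual probabilities translate into return periods and into the chance of at least one exceedance during a hypothetical 50-year design life:

```python
for annual_p in (0.01, 0.002):                    # 1% and 0.2% annual exceedance probabilities
    return_period = 1.0 / annual_p                # the 100-year and 500-year amplitudes
    p_50yr = 1.0 - (1.0 - annual_p) ** 50         # chance of at least one exceedance in 50 years
    print(f"T = {return_period:.0f} yr -> P(at least one exceedance in 50 yr) = {p_50yr:.2f}")
```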
The 100-year tsunami is generated most frequently by far-field sources in the Alaska-Aleutian Subduction Zone and is characterized by maximum amplitudes that do not exceed 4 m, with an inland extent of less than 500 m. In contrast, the 500-year tsunami is dominated by local sources in the Cascadia Subduction Zone and is characterized by maximum amplitudes in excess of 10 m and an inland extent of more than 1 km. The primary sources of uncertainty in these results include those associated with interevent time estimates, modeling of background sea level, and accounting for temporal changes in bathymetry and topography. Nonetheless, PTHA represents an important contribution to tsunami hazard assessment techniques; viewed in the broader context of risk analysis, PTHA provides a method for quantifying estimates of the likelihood and severity of the tsunami hazard, which can then be combined with vulnerability and exposure to yield estimates of tsunami risk. Copyright 2009 by the American Geophysical Union. 1. Survey Report on the Tsunami of the Michoacan, Mexico Earthquake of September 19, 1985 OpenAIRE Abe, Katsuyuki; Hakuno, Motohiko; Takeuchi, Mikio; Katada, Toshiyuki 1987-01-01 The tsunami was caused by the Michoacan, Mexico earthquake (M. 8.1) of September 19, 1985. According to the site survey, sea water ran up to an elevation of 2 meters or more above sea level in the coastal areas of Mexico from Petatlan to Playa Azul. The tsunami was as high as 4 meters at Barra del Potosi and Playa Linda, where minor tsunami damage occurred; some thatched huts on the beaches were destroyed and pieces of furniture were swept out to sea. The tsunami magnitude Mt is estimated to... 2. Tsunami Arrival Detection with High Frequency (HF) Radar Directory of Open Access Journals (Sweden) Donald Barrick 2012-05-01 Full Text Available Quantitative real-time observations of a tsunami have been limited to deep-water, pressure-sensor observations of changes in the sea surface elevation and observations of sea level fluctuations at the coast, which are essentially point measurements. Constrained by these data, models have been used for predictions and warning of the arrival of a tsunami, but to date no system exists for local detection of an actual incoming wave with a significant warning capability. Networks of coastal high frequency (HF) radars are now routinely observing surface currents in many countries. We report here on an empirical method for the detection of the initial arrival of a tsunami, and demonstrate its use with results from data measured by fourteen HF radar sites in Japan and the USA following the magnitude 9.0 earthquake off Sendai, Japan, on 11 March 2011. The distance offshore at which the tsunami can be detected, and hence the warning time provided, depends on the bathymetry: the wider the shallow continental shelf, the greater this time. We compare arrival times at the radars with those measured by neighboring tide gauges. Arrival times measured by the radars preceded those at neighboring tide gauges by an average of 19 min (Japan) and 15 min (USA). The initial water-height increase due to the tsunami as measured by the tide gauges was moderate, ranging from 0.3 to 2 m. Thus it appears possible to detect even moderate tsunamis using this method. Larger tsunamis could obviously be detected further from the coast. We find that tsunami arrival within the radar coverage area can be announced 8 min (i.e., twice the radar spectral time resolution) after its first appearance.
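The HF-radar entry above notes that the warning time gained from offshore detection grows with the width of the shallow shelf, because the tsunami crosses shallow water at roughly the long-wave speed sqrt(g*h). A back-of-the-envelope sketch with hypothetical shelf dimensions:

```python
import math

def long_wave_speed(depth_m, g=9.81):
    return math.sqrt(g * depth_m)      # shallow-water (long-wave) celerity, m/s

# Hypothetical shelf: 50 km wide with a 100 m average depth, wave detected at the shelf edge.
shelf_width_m, mean_depth_m = 50e3, 100.0
warning_time_min = shelf_width_m / long_wave_speed(mean_depth_m) / 60.0
print(f"~{warning_time_min:.0f} min between shelf-edge detection and coastal arrival")
```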
This can provide advance warning of the tsunami approach to the coastline locations. 3. Tsunami Generation and Propagation by 3D deformable Landslides and Application to Scenarios Science.gov (United States) McFall, Brian C.; Fritz, Hermann M. 2014-05-01 Tsunamis generated by landslides and volcano flank collapse account for some of the most catastrophic natural disasters recorded and can be particularly devastative in the near field region due to locally high wave amplitudes and runup. The events of 1958 Lituya Bay, 1963 Vajont reservoir, 1980 Spirit Lake, 2002 Stromboli and 2010 Haiti demonstrate the danger of tsunamis generated by landslides or volcano flank collapses. Unfortunately critical field data from these events is lacking. Source and runup scenarios based on real world events are physically modeled using generalized Froude similarity in the three dimensional NEES tsunami wave basin at Oregon State University. A novel pneumatic landslide tsunami generator (LTG) was deployed to simulate landslides with varying geometry and kinematics. The bathymetric and topographic scenarios tested with the LTG are the basin-wide propagation and runup, fjord, curved headland fjord and a conical island setting representing a landslide off an island or a volcano flank collapse. The LTG consists of a sliding box filled with 1,350 kg of landslide material which is accelerated by means of four pneumatic pistons down a 2H:1V slope. The landslide is launched from the sliding box and continues to accelerate by gravitational forces up to velocities of 5 m/s. The landslide Froude number at impact with the water is in the range 1 elevations are recorded by an array of resistance wave gauges. The landslide deformation is measured from above and underwater camera recordings. The landslide deposit is measured on the basin floor with a multiple transducer acoustic array (MTA). Landslide surface reconstruction and kinematics are determined with a stereo particle image velocimetry (PIV) system. Wave runup is recorded with resistance wave gauges along the slope and verified 4. Tsunamis triggered by the 12 January 2010 Earthquake in Haiti Science.gov (United States) Fritz, H. M.; Hillaire, J. V.; Molière, E.; Mohammed, F.; Wei, Y. 2010-12-01 On 12 January 2010 a magnitude Mw 7.0 earthquake occurred 25 km west-southwest of Haiti’s Capital of Port-au-Prince, which resulted in more than 230,000 fatalities. In addition tsunami waves triggered by the earthquake caused at least 3 fatalities at Petit Paradis. Unfortunately, the people of Haiti had neither ancestral knowledge nor educational awareness of tsunami hazards despite the 1946 Dominican Republic tsunami at Hispaniola’s northeast coast. In sharp contrast Sri Lankan UN-soldiers on duty at Jacmel self-evacuated given the memory of the 2004 Indian Ocean tsunami. The International Tsunami Survey Team (ITST) documented flow depths, runup heights, inundation distances, sediment deposition, damage patterns at various scales, and performance of the man-made infrastructure and impact on the natural environment. The 31 January to 7 February 2010 ITST covered the greater Bay of Port-au-Prince and more than 100 km of Hispaniola’s south coast between Pedernales, Dominican Republic and Jacmel, Haiti. The Hispaniola survey data includes more than 20 runup and flow depth measurements. 
The tsunami impacts peaked with maximum flow depths exceeding 3 m both at Petit Paradis inside the Bay of Grand Goâve located 45 km west-southwest of Port-au-Prince and at Jacmel on Haiti’s south coast. A significant variation in tsunami impact was observed on Hispaniola, and tsunami runup of more than 1 m was still observed at Pedernales in the Dominican Republic. Jacmel, which is near the center of the south coast, represents an unfortunate example of a village and harbor that was located for protection from storm waves but is vulnerable to tsunami waves, with runup doubling from the entrance to the head of the bay. Inundation and damage were limited to less than 100 m inland at both Jacmel and Petit Paradis. Differences in wave period were documented between the tsunami waves at Petit Paradis and Jacmel. The Petit Paradis tsunami is attributed to a coastal submarine landslide. 5. Leading Wave Amplitude of a Tsunami Science.gov (United States) Kanoglu, U. 2015-12-01 Okal and Synolakis (EGU General Assembly 2015, Geophysical Research Abstracts-Vol. 17-7622) recently discussed why the maximum amplitude of a tsunami might not occur for the first wave. Okal and Synolakis list observations from the 2011 Japan tsunami, which reached Papeete, Tahiti, with the fourth wave being the largest, arriving 72 min after the first wave; the 1960 Chilean tsunami reached Hilo, Hawaii, with the maximum wave, 5 m high, arriving 1 hour after a first wave of only 1.2 m. Large later waves are a problem not only for local authorities, both in terms of warning the public and of rescue efforts, but also because they mislead the public into thinking that it is safe to return to the shoreline or to an evacuated site after the arrival of the first wave. Okal and Synolakis considered Hammack's (1972, Ph.D. Dissertation, Calif. Inst. Tech., 261 pp., Pasadena) linear dispersive analytical solution with tsunami generation through the uplift of a circular plug on the ocean floor. They performed a parametric study for the radius of the plug and the depth of the ocean, since these are the independent scaling lengths in the problem. They identified the transition distance beyond which the second wave becomes larger, in terms of the parameters of the problem. Here, we extend their analysis to an initial wave field with a finite crest length and, in addition, to the most common tsunami initial waveform, the N-wave, as presented by Tadepalli and Synolakis (1994, Proc. R. Soc. A: Math. Phys. Eng. Sci., 445, 99-112). We compare our results with the non-dispersive linear shallow water wave results presented by Kanoglu et al. (2013, Proc. R. Soc. A: Math. Phys. Eng. Sci., 469, 20130015), investigating the focusing feature. We discuss the results both in terms of leading wave amplitude and tsunami focusing. Acknowledgment: The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement no 603839 (Project ASTARTE - Assessment, Strategy and Risk 6. Unusually large tsunamis frequent a currently creeping part of the Aleutian megathrust Science.gov (United States) Witter, Robert C.; Carver, G.A.; Briggs, Richard; Gelfenbaum, Guy R.; Koehler, R.D.; La Selle, SeanPaul M.; Bender, Adrian M.; Engelhart, S.E.; Hemphill-Haley, E.; Hill, Troy D. 2016-01-01 Current models used to assess earthquake and tsunami hazards are inadequate where creep dominates a subduction megathrust.
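The "Leading Wave Amplitude of a Tsunami" entry above rests on linear dispersive theory, in which phase speed depends on wavelength, so shorter components lag behind and the largest wave need not arrive first. A small sketch of the linear dispersion relation c = sqrt((g/k) tanh(kh)) for a hypothetical 4 km deep ocean:

```python
import numpy as np

def phase_speed(wavelength_m, depth_m, g=9.81):
    k = 2.0 * np.pi / wavelength_m
    return np.sqrt((g / k) * np.tanh(k * depth_m))   # linear dispersion relation

depth = 4000.0                                       # hypothetical open-ocean depth (m)
for wavelength in (400e3, 100e3, 20e3):              # long to comparatively short components
    c = phase_speed(wavelength, depth)
    print(f"L = {wavelength / 1e3:5.0f} km -> c = {c:5.1f} m/s "
          f"(shallow-water limit {np.sqrt(9.81 * depth):.1f} m/s)")
```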
Here we report geological evidence for large tsunamis, occurring on average every 300–340 years, near the source areas of the 1946 and 1957 Aleutian tsunamis. These areas bookend a postulated seismic gap over 200 km long where modern geodetic measurements indicate that the megathrust is currently creeping. At Sedanka Island, evidence for large tsunamis includes six sand sheets that blanket a lowland facing the Pacific Ocean, rise to 15 m above mean sea level, contain marine diatoms, cap terraces, adjoin evidence for scour, and date from the past 1700 years. The youngest sheet, and modern drift logs found as far as 800 m inland and >18 m elevation, likely record the 1957 tsunami. Modern creep on the megathrust coexists with previously unrecognized tsunami sources along this part of the Aleutian Subduction Zone. 7. Probability-Based Design Criteria of the ASCE 7 Tsunami Loads and Effects Provisions (Invited) Science.gov (United States) Chock, G. 2013-12-01 Mitigation of tsunami risk requires a combination of emergency preparedness for evacuation in addition to providing structural resilience of critical facilities, infrastructure, and key resources necessary for immediate response and economic and social recovery. Critical facilities would include emergency response, medical, tsunami refuges and shelters, ports and harbors, lifelines, transportation, telecommunications, power, financial institutions, and major industrial/commercial facilities. The Tsunami Loads and Effects Subcommittee of the ASCE/SEI 7 Standards Committee is developing a proposed new Chapter 6 - Tsunami Loads and Effects for the 2016 edition of the ASCE 7 Standard. ASCE 7 provides the minimum design loads and requirements for structures subject to building codes such as the International Building Code utilized in the USA. In this paper we will provide a review emphasizing the intent of these new code provisions and explain the design methodology. The ASCE 7 provisions for Tsunami Loads and Effects enables a set of analysis and design methodologies that are consistent with performance-based engineering based on probabilistic criteria. . The ASCE 7 Tsunami Loads and Effects chapter will be initially applicable only to the states of Alaska, Washington, Oregon, California, and Hawaii. Ground shaking effects and subsidence from a preceding local offshore Maximum Considered Earthquake will also be considered prior to tsunami arrival for Alaska and states in the Pacific Northwest regions governed by nearby offshore subduction earthquakes. For national tsunami design provisions to achieve a consistent reliability standard of structural performance for community resilience, a new generation of tsunami inundation hazard maps for design is required. The lesson of recent tsunami is that historical records alone do not provide a sufficient measure of the potential heights of future tsunamis. Engineering design must consider the occurrence of events greater than 8. Paleo-tsunami history along the northern Japan Trench: evidence from Noda Village, northern Sanriku coast, Japan Science.gov (United States) Inoue, Taiga; Goto, Kazuhisa; Nishimura, Yuichi; Watanabe, Masashi; Iijima, Yasutaka; Sugawara, Daisuke 2017-12-01 Throughout history, large tsunamis have frequently affected the Sanriku area of the Pacific coast of the Tohoku region, Japan, which faces the Japan Trench. 
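Item 6 above reports large tsunamis recurring on average every 300 to 340 years at Sedanka Island. Assuming, as a simplification not stated in the abstract, that occurrence is Poissonian, the probability of at least one such event within a given exposure window follows directly; a sketch with the reported recurrence range and illustrative exposure windows is shown below.

```python
import math

def prob_at_least_one(mean_recurrence_yr: float, window_yr: float) -> float:
    """P(at least one event in the window) for a Poisson process
    with the given mean recurrence interval."""
    rate = 1.0 / mean_recurrence_yr
    return 1.0 - math.exp(-rate * window_yr)

for recurrence in (300.0, 340.0):    # yr, range reported in the abstract above
    for window in (50.0, 100.0):     # yr, illustrative exposure horizons
        p = prob_at_least_one(recurrence, window)
        print(f"recurrence {recurrence:.0f} yr, window {window:.0f} yr -> P = {p:.2f}")
```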
Although a few studies have examined paleo-tsunami deposits along the Sanriku coast, additional studies of paleo-earthquakes and tsunamis are needed to improve our knowledge of the timing, recurrence interval, and size of historical and pre-historic tsunamis. At Noda Village, in Iwate Prefecture on the northern Sanriku coast, we found at least four distinct gravelly sand layers based on correlation and chronological data. Sedimentary features such as grain size and thickness suggest that extreme waves from the sea formed these layers. Numerical modeling of storm waves further confirmed that even extremely large storm waves cannot account for the distribution of the gravelly sand layers, suggesting that these deposits are highly likely to have formed by tsunami waves. The numerical method of storm waves can be useful to identify sand layers as tsunami deposits if the deposits are observed far inland or at high elevations. The depositional age of the youngest tsunami deposit is consistent with the AD 869 Jogan earthquake tsunami, a possible predecessor of the AD 2011 Tohoku-oki tsunami. If this is the case, then the study site currently defines the possible northern extent of the AD 869 Jogan tsunami deposit, which is an important step in improving the tsunami source model of the AD 869 Jogan tsunami. Our results suggest that four large tsunamis struck the Noda site between 1100 and 2700 cal BP. The local tsunami sizes are comparable to the AD 2011 and AD 1896 Meiji Sanriku tsunamis, considering the landward extent of each tsunami deposit. 9. Tsunami hazard assessment in El Salvador, Central America, from seismic sources through flooding numerical models. Science.gov (United States) Álvarez-Gómez, J. A.; Aniel-Quiroga, Í.; Gutiérrez-Gutiérrez, O. Q.; Larreynaga, J.; González, M.; Castro, M.; Gavidia, F.; Aguirre-Ayerbe, I.; González-Riancho, P.; Carreño, E. 2013-11-01 El Salvador is the smallest and most densely populated country in Central America; its coast has an approximate length of 320 km, 29 municipalities and more than 700 000 inhabitants. In El Salvador there were 15 recorded tsunamis between 1859 and 2012, 3 of them causing damages and resulting in hundreds of victims. Hazard assessment is commonly based on propagation numerical models for earthquake-generated tsunamis and can be approached through both probabilistic and deterministic methods. A deterministic approximation has been applied in this study as it provides essential information for coastal planning and management. The objective of the research was twofold: on the one hand the characterization of the threat over the entire coast of El Salvador, and on the other the computation of flooding maps for the three main localities of the Salvadorian coast. For the latter we developed high-resolution flooding models. For the former, due to the extension of the coastal area, we computed maximum elevation maps, and from the elevation in the near shore we computed an estimation of the run-up and the flooded area using empirical relations. We have considered local sources located in the Middle America Trench, characterized seismotectonically, and distant sources in the rest of Pacific Basin, using historical and recent earthquakes and tsunamis. We used a hybrid finite differences-finite volumes numerical model in this work, based on the linear and non-linear shallow water equations, to simulate a total of 24 earthquake-generated tsunami scenarios. 
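The El Salvador assessment (item 9 above) is built on shallow-water-equation tsunami models. The fragment below is a minimal one-dimensional linear shallow-water solver on a staggered grid, far simpler than the hybrid finite-difference/finite-volume code used in that study, and is intended only to show the structure of such a propagation step; the flat bathymetry, domain size, and initial Gaussian hump are assumptions.

```python
import numpy as np

# Minimal 1D linear shallow-water propagation (staggered grid), illustrative only.
g, h = 9.81, 1000.0             # gravity, uniform depth (m) -- assumed, not from the study
L, nx = 400e3, 400              # domain length (m), number of cells
dx = L / nx
dt = 0.5 * dx / np.sqrt(g * h)  # CFL-limited time step

x = np.linspace(0.0, L, nx)
eta = np.exp(-((x - L / 2) / 20e3) ** 2)  # Gaussian initial sea-surface displacement (m)
u = np.zeros(nx + 1)                      # velocities on staggered cell faces; walls at ends

for _ in range(600):
    # momentum equation: du/dt = -g * d(eta)/dx  (interior faces only)
    u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx
    # continuity equation: d(eta)/dt = -h * du/dx
    eta -= h * dt * (u[1:] - u[:-1]) / dx

print(f"max surface elevation after {600*dt:.0f} s: {eta.max():.3f} m")
```

Operational models add nonlinearity, variable bathymetry, bottom friction, and wetting/drying for inundation, but the update cycle keeps this same structure.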
Our results show that at the western Salvadorian coast, run-up values higher than 5 m are common, while in the eastern area, approximately from La Libertad to the Gulf of Fonseca, the run-up values are lower. The areas more exposed to flooding are the lowlands in the Lempa River delta and the Barra de Santiago Western Plains. The results of the empirical approximation used for the whole country are similar to the results 10. Tsunami evacuation plans for future megathrust earthquakes in Padang, Indonesia, considering stochastic earthquake scenarios Directory of Open Access Journals (Sweden) A. Muhammad 2017-12-01 Full Text Available This study develops tsunami evacuation plans in Padang, Indonesia, using a stochastic tsunami simulation method. The stochastic results are based on multiple earthquake scenarios for different magnitudes (Mw 8.5, 8.75, and 9.0) that reflect asperity characteristics of the 1797 historical event in the same region. The generation of the earthquake scenarios involves probabilistic models of earthquake source parameters and stochastic synthesis of earthquake slip distributions. In total, 300 source models are generated to produce comprehensive tsunami evacuation plans in Padang. The tsunami hazard assessment results show that Padang may face significant tsunamis causing the maximum tsunami inundation height and depth of 15 and 10 m, respectively. A comprehensive tsunami evacuation plan – including horizontal evacuation area maps, assessment of temporary shelters considering the impact due to ground shaking and tsunami, and integrated horizontal–vertical evacuation time maps – has been developed based on the stochastic tsunami simulation results. The developed evacuation plans highlight that comprehensive mitigation policies can be produced from the stochastic tsunami simulation for future tsunamigenic events. 11. Tsunami engineering study in India Digital Repository Service at National Institute of Oceanography (India) Mandal, S. The Pacific Marine Environmental Laboratory at NOAA, USA has the tsunami research program (http://www.pmel.noaa.gov/tsunami/). The tsunami research group is part of the Civil Engineering Department at the University of Southern California, where undergraduate... the engineering point of view. The Tsunami Engineering Laboratory at the Graduate School of Engineering, Tohoku University (http://www.tsunami.civil.tohoku.ac.jp/hokusai2/main/eng/index.html) offers research programmes on tsunami. The University... 12. Tides and tsunamis Science.gov (United States) Zetler, B. D.
The existing hydrostatic formula, in general, tended to underestimate tsunami wave pressure under the condition of inundation flow with large Froude number. Estimation method of tsunami pressure acting on a land structure was proposed using inundation depth and horizontal velocity at the front of the structure, which were calculated employing a 2D depth-integrated flow model based on the unstructured grid system. The comparison between the numerical and experimental results revealed that the proposed method could reasonably reproduce the vertical distribution of the maximum tsunami pressure as well as the time variation of the tsunami pressure exerting on the structure. (author) 14. Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering Science.gov (United States) Geist, Eric L. 2012-01-01 Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources. 15. Reconstruction of far-field tsunami amplitude distributions from earthquake sources Science.gov (United States) Geist, Eric L.; Parsons, Thomas E. 2016-01-01 The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. 
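Items 14 and 15 above describe fitting a tapered (modified) Pareto distribution to recorded tsunami amplitudes by maximum likelihood. The sketch below illustrates that kind of fit using one common parameterization of the tapered Pareto survival function, S(x) = (a/x)^beta * exp((a - x)/theta) for x >= a, and a generic optimizer; the parameterization, threshold, and synthetic data are assumptions for illustration, not the exact forms or values used in those studies.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
a = 0.05  # lower threshold amplitude (m), assumed

def neg_log_lik(params, x):
    """Negative log-likelihood of a tapered Pareto with survival
    S(x) = (a/x)**beta * exp((a - x)/theta), for x >= a."""
    beta, theta = params
    if beta <= 0 or theta <= 0:
        return np.inf
    logpdf = (np.log(beta / x + 1.0 / theta)
              + beta * (np.log(a) - np.log(x))
              + (a - x) / theta)
    return -np.sum(logpdf)

# Synthetic "observed" amplitudes for illustration: a plain Pareto sample, softly truncated.
x_obs = a * (1.0 + rng.pareto(1.2, size=500))
x_obs = x_obs[x_obs < 5.0]

fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(x_obs,), method="Nelder-Mead")
beta_hat, theta_hat = fit.x
print(f"power-law exponent beta = {beta_hat:.2f}, taper (corner) parameter = {theta_hat:.2f} m")
```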
A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes. 16. On the moroccan tsunami catalogue Directory of Open Access Journals (Sweden) F. Kaabouben 2009-07-01 Full Text Available A primary tool for regional tsunami hazard assessment is a reliable historical and instrumental catalogue of events. Morocco by its geographical situation, with two marine sides, stretching along the Atlantic coast to the west and along the Mediterranean coast to the north, is the country of Western Africa most exposed to the risk of tsunamis. Previous information on tsunami events affecting Morocco are included in the Iberian and/or the Mediterranean lists of tsunami events, as it is the case of the European GITEC Tsunami Catalogue, but there is a need to organize this information in a dataset and to assess the likelihood of claimed historical tsunamis in Morocco. Due to the fact that Moroccan sources are scarce, this compilation rely on historical documentation from neighbouring countries (Portugal and Spain and so the compatibility between the new tsunami catalogue presented here and those that correspond to the same source areas is also discussed. 17. Physical Modeling of Landslide Generated Tsunamis and the 50th Anniversary of the Vajont Dam Disaster Science.gov (United States) McFall, Brian C.; Mohammed, Fahad; Fritz, Hermann M. 2013-04-01 The Vajont river is an affluent of the Piave River located in the Dolomite Alps of the Veneto Region, about 100km north of Venice. A 265.5 m high double curved arch dam was built across a V-shaped gorge creating a reservoir with a maximum storage capacity of 0.169 km3. A maximum water depth of 250 m was reached by early September 1963 during the third filling attempt of the reservoir, but as creeping on the southern flank increased the third reservoir draw down was initiated. By October 9, 1963 the water depth was lowered to 240m as the southern flank of Vajont reservoir catastrophically collapsed on a length of more than 2km. Collapse occurred during reservoir drawdown in a final attempt to reduce flank creeping and the reservoir was only about two-thirds full. The partially submerged rockslide with a volume of 0.24 km3 penetrated into the reservoir at velocities up to 30 m/s. 
The wave runup in direct prolongation of slide axis reached the lowest houses of Casso 270m above reservoir level before impact corresponding to 245m above dam crest (Müller, 1964). The rockslide deposit came within 50m of the left abutment and towers up to 140m above the dam crest. The lateral spreading of the surge overtopped the dam crest by more than 100m. The thin arch dam withstood the overtopping and sustained no damage to the structural shell and the abutments. The flood wave dropped more than 500m down the Vajont gorge and into the Piave Valley causing utter destruction to the villages of Longarone, Pirago, Villanova, Rivalta and Fae. More than 2000 persons perished. The Vajont disaster highlights an extreme landslide tsunami event in the narrowly confined water body of a reservoir. Landslide tsunami hazards exist even in areas not exposed to tectonic tsunamis. Source and runup scenarios based on real world events are physically modeled in the three dimensional NEES tsunami wave basin (TWB) at Oregon State University (OSU). A novel pneumatic landslide tsunami generator (LTG) was 18. Tsunami Forecasting: The 10 August 2009 Andaman tsunami Demonstrates Progress Science.gov (United States) Titov, Vasily; Moore, Christopher; Uslu, Burak; Kanoglu, Utku 2010-05-01 The 10 August 2009 Andaman non-destructive tsunami in the Indian Ocean demonstrated advances in creating a tsunami-resilient global society. Following the Indian Ocean tsunami on 26 December 2004, scientists at the National Oceanic and Atmospheric Administration Center for Tsunami Research (NCTR) at the Pacific Marine Environmental Laboratory (PMEL) developed an interface for its validated and verified tsunami numerical model Method of Splitting Tsunamis (MOST). MOST has been benchmarked substantially through analytical solutions, experimental results and field measurements (Synolakis et al., 2008). MOST and its interface the Community Model Interface for Tsunami (ComMIT) are distributed through extensive capacity-building sessions for the Indian Ocean nations using UNESCO/Intergovernmental Oceanographic Commission (IOC), AusAID, and USAID funding. Over one hundred-sixty scientists have been trained in tsunami inundation mapping, leading to the first generation of inundation models for many Indian Ocean shorelines. During the 10 August 2009 Andaman tsunami event, NCTR scientists exercised the forecast system in research mode using the first generation inundation models developed during ComMIT trainings. Assimilating key data from a Kingdom of Thailand tsunameter, coastal tsunami amplitudes were predicted in Indonesia, Thailand, and India coastlines, before the first tsunami arrival, using models developed by ComMIT trainees. Since its first test in 2003, one more time, NCTR's forecasting methodology proved the effectiveness of operational tsunami forecasting using real-time deep-ocean data assimilated into forecast models (Wei et al., 2008 and Titov, 2009). The 2009 Andaman tsunami demonstrated that operational tsunami forecasting tools are now available and coupled with inundation mapping tools can be effective and can reduce false alarms. International collaboration is required to fully utilize this technology's potential. Enhanced educational efforts both at 19. Tsunamigenic Ratio of the Pacific Ocean earthquakes and a proposal for a Tsunami Index Directory of Open Access Journals (Sweden) A. 
Suppasri 2012-01-01 Full Text Available The Pacific Ocean is the location where two-thirds of tsunamis have occurred, resulting in a great number of casualties. Once information on an earthquake has been issued, it is important to understand if there is a tsunami generation risk in relation with a specific earthquake magnitude or focal depth. This study proposes a Tsunamigenic Ratio (TR that is defined as the ratio between the number of earthquake-generated tsunamis and the total number of earthquakes. Earthquake and tsunami data used in this study were selected from a database containing tsunamigenic earthquakes from prior 1900 to 2011. The TR is calculated from earthquake events with a magnitude greater than 5.0, a focal depth shallower than 200 km and a sea depth less than 7 km. The results suggest that a great earthquake magnitude and a shallow focal depth have a high potential to generate tsunamis with a large tsunami height. The average TR in the Pacific Ocean is 0.4, whereas the TR for specific regions of the Pacific Ocean varies from 0.3 to 0.7. The TR calculated for each region shows the relationship between three influential parameters: earthquake magnitude, focal depth and sea depth. The three parameters were combined and proposed as a dimensionless parameter called the Tsunami Index (TI. TI can express better relationship with the TR and with maximum tsunami height, while the three parameters mentioned above cannot. The results show that recent submarine earthquakes had a higher potential to generate a tsunami with a larger tsunami height than during the last century. A tsunami is definitely generated if the TI is larger than 7.0. The proposed TR and TI will help ascertain the tsunami generation risk of each earthquake event based on a statistical analysis of the historical data and could be an important decision support tool during the early tsunami warning stage. 20. Impacts of tides on tsunami propagation due to potential Nankai Trough earthquakes in the Seto Inland Sea, Japan Science.gov (United States) Lee, Han Soo; Shimoyama, Tomohisa; Popinet, Stéphane 2015-10-01 The impacts of tides on extreme tsunami propagation due to potential Nankai Trough earthquakes in the Seto Inland Sea (SIS), Japan, are investigated through numerical experiments. Tsunami experiments are conducted based on five scenarios that consider tides at four different phases, such as flood, high, ebb, and low tides. The probes that were selected arbitrarily in the Bungo and Kii Channels show less significant effects of tides on tsunami heights and the arrival times of the first waves than those that experience large tidal ranges in inner basins and bays of the SIS. For instance, the maximum tsunami height and the arrival time at Toyomaesi differ by more than 0.5 m and nearly 1 h, respectively, depending on the tidal phase. The uncertainties defined in terms of calculated maximum tsunami heights due to tides illustrate that the calculated maximum tsunami heights in the inner SIS with standing tides have much larger uncertainties than those of two channels with propagating tides. Particularly in Harima Nada, the uncertainties due to the impacts of tides are greater than 50% of the tsunami heights without tidal interaction. The results recommend simulate tsunamis together with tides in shallow water environments to reduce the uncertainties involved with tsunami modeling and predictions for tsunami hazards preparedness. This article was corrected on 26 OCT 2015. See the end of the full text for details. 1. 
Alternative Tsunami Models Science.gov (United States) Tan, A.; Lyatskaya, I. 2009-01-01 The interesting papers by Margaritondo (2005 "Eur. J. Phys." 26 401) and by Helene and Yamashita (2006 "Eur. J. Phys." 27 855) analysed the great Indian Ocean tsunami of 2004 using a simple one-dimensional canal wave model, which was appropriate for undergraduate students in physics and related fields of discipline. In this paper, two additional,… 2. The 1946 Unimak Tsunami Earthquake Area: revised tectonic structure in reprocessed seismic images and a suspect near field tsunami source Science.gov (United States) Miller, John J.; von Huene, Roland E.; Ryan, Holly F. 2014-01-01 In 1946 at Unimak Pass, Alaska, a tsunami destroyed the lighthouse at Scotch Cap, Unimak Island, took 159 lives on the Hawaiian Islands, damaged island coastal facilities across the south Pacific, and destroyed a hut in Antarctica. The tsunami magnitude of 9.3 is comparable to the magnitude 9.1 tsunami that devastated the Tohoku coast of Japan in 2011. Both causative earthquake epicenters occurred in shallow reaches of the subduction zone. Contractile tectonism along the Alaska margin presumably generated the far-field tsunami by producing a seafloor elevation change. However, the Scotch Cap lighthouse was destroyed by a near-field tsunami that was probably generated by a coeval large undersea landslide, yet bathymetric surveys showed no fresh large landslide scar. We investigated this problem by reprocessing five seismic lines, presented here as high-resolution graphic images, both uninterpreted and interpreted, and available for the reader to download. In addition, the processed seismic data for each line are available for download as seismic industry-standard SEG-Y files. One line, processed through prestack depth migration, crosses a 10 × 15 kilometer and 800-meter-high hill presumed previously to be basement, but that instead is composed of stratified rock superimposed on the slope sediment. This image and multibeam bathymetry illustrate a slide block that could have sourced the 1946 near-field tsunami because it is positioned within a distance determined by the time between earthquake shaking and the tsunami arrival at Scotch Cap and is consistent with the local extent of high runup of 42 meters along the adjacent Alaskan coast. The Unimak/Scotch Cap margin is structurally similar to the 2011 Tohoku tsunamigenic margin where a large landslide at the trench, coeval with the Tohoku earthquake, has been documented. Further study can improve our understanding of tsunami sources along Alaska’s erosional margins. 3. NOAA/WDC Global Tsunami Deposits Database Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Discover where, when and how severely tsunamis affected Earth in geologic history. Information regarding Tsunami Deposits and Proxies for Tsunami Events complements... 4. Identifying the role of initial wave parameters on tsunami focusing Science.gov (United States) Aydın, Baran 2018-04-01 Unexpected local tsunami amplification, which is referred to as tsunami focusing, is attributed to two different mechanisms: bathymetric features of the ocean bottom such as underwater ridges and dipolar shape of the initial wave itself. In this study, we characterize the latter; that is, we explore how amplitude and location of the focusing point vary with certain geometric parameters of the initial wave such as its steepness and crest length. 
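The focusing study in item 4 above starts from the dipolar N-wave initial condition of Tadepalli and Synolakis. The snippet below builds one commonly used form of that profile, eta(x) = eps * (x - x2) * sech^2(gamma * (x - x1)), so that the roles of steepness and of the leading-depression/elevation dipole are explicit; all parameter values are illustrative, not those of the study.

```python
import numpy as np

def n_wave(x, eps=0.003, gamma=0.002, x1=0.0, x2=100.0):
    """Dipolar N-wave profile eta(x) = eps * (x - x2) * sech(gamma*(x - x1))**2,
    in the spirit of Tadepalli & Synolakis (1994); parameter values are illustrative."""
    sech = 1.0 / np.cosh(gamma * (x - x1))
    return eps * (x - x2) * sech ** 2

x = np.linspace(-3000.0, 3000.0, 6001)  # metres
eta = n_wave(x)

i_max, i_min = np.argmax(eta), np.argmin(eta)
print(f"crest : {eta[i_max]:+.2f} m at x = {x[i_max]:.0f} m")
print(f"trough: {eta[i_min]:+.2f} m at x = {x[i_min]:.0f} m")
# A crude steepness measure: the maximum surface slope along the profile.
print(f"max surface slope: {np.max(np.abs(np.gradient(eta, x))):.5f}")
```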
Our results reveal two important features of tsunami focusing: for mild waves maximum wave amplitude increases significantly with transverse length of wave crest, while location of the focusing point is almost invariant. For steep waves, on the other hand, increasing crest length dislocates focusing point significantly, while it causes a rather small increase in wave maximum. 5. When is a Tsunami a Mega-Tsunami? Science.gov (United States) Chague-Goff, C.; Goff, J. R.; Terry, J. P.; Goto, K. 2014-12-01 The 2004 Indian Ocean Tsunami is commonly called a mega-tsunami, and this attribute has also been linked to the 2011 Tohoku-oki tsunami. However, since this term was first coined in the early 1990's there have been very few attempts to define it. As such it has been applied in a rather arbitrary fashion to a number of tsunami characteristics, such as wave height or amplitude at both the source and at distant locations, run-up height, geographical extent and impact. The first use of the term is related to a tsunami generated by a large bolide impact and indeed it seems entirely appropriate that the term should be used for such rare events on geological timescales. However, probably as a result of media-driven hyperbole, scientists have used this term at least twice in the last decade, which is hardly a significant portion of the geological timescale. It therefore seems reasonable to suggest that these recent unexpectedly large events do not fall in the category of mega-tsunami but into a category of exceptional events within historical experience and local perspective. The use of the term mega-tsunami over the past 14 years is discussed and a definition is provided that marks the relative uniqueness of these events and a new term, appropriately Japanese in origin, namely that of souteigai-tsunami, is proposed. Examples of these tsunamis will be provided. 6. Improving tsunami resiliency: California's Tsunami Policy Working Group Science.gov (United States) Real, Charles R.; Johnson, Laurie; Jones, Lucile M.; Ross, Stephanie L.; Kontar, Y.A.; Santiago-Fandiño, V.; Takahashi, T. 2014-01-01 California has established a Tsunami Policy Working Group to facilitate development of policy recommendations for tsunami hazard mitigation. The Tsunami Policy Working Group brings together government and industry specialists from diverse fields including tsunami, seismic, and flood hazards, local and regional planning, structural engineering, natural hazard policy, and coastal engineering. The group is acting on findings from two parallel efforts: The USGS SAFRR Tsunami Scenario project, a comprehensive impact analysis of a large credible tsunami originating from an M 9.1 earthquake in the Aleutian Islands Subduction Zone striking California’s coastline, and the State’s Tsunami Preparedness and Hazard Mitigation Program. The unique dual-track approach provides a comprehensive assessment of vulnerability and risk within which the policy group can identify gaps and issues in current tsunami hazard mitigation and risk reduction, make recommendations that will help eliminate these impediments, and provide advice that will assist development and implementation of effective tsunami hazard risk communication products to improve community resiliency. 7. 
Evaluation of tsunami risk in Heraklion city, Crete, Greece, by using GIS methods Science.gov (United States) Triantafyllou, Ioanna; Fokaefs, Anna; Novikova, Tatyana; Papadopoulos, Gerasimos A.; Vaitis, Michalis 2016-04-01 The Hellenic Arc is the most active seismotectonic structure in the Mediterranean region. The island of Crete occupies the central segment of the arc which is characterized by high seismic and tsunami activity. Several tsunamis generated by large earthquakes, volcanic eruptions and landslides were reported that hit the capital city of Heraklion in the historical past. We focus our tsunami risk study in the northern coastal area of Crete (ca. 6 km in length and 1 km in maximum width) which includes the western part of the city of Heraklion and a large part of the neighboring municipality of Gazi. The evaluation of tsunami risk included calculations and mapping with QGIS of (1) cost for repairing buildings after tsunami damage, (2) population exposed to tsunami attack, (3) optimum routes and times for evacuation. To calculate the cost for building reparation after a tsunami attack we have determined the tsunami inundation zone in the study area after numerical simulations for extreme tsunami scenarios. The geographical distribution of buildings per building block, obtained from the 2011 census data of the Hellenic Statistical Authority (EL.STAT) and satellite data, was mapped. By applying the SCHEMA Damage Tool we assessed the building vulnerability to tsunamis according to the types of buildings and their expected damage from the hydrodynamic impact. A set of official cost rates varying with the building types and the damage levels, following standards set by the state after the strong damaging earthquakes in Greece in 2014, was applied to calculate the cost of rebuilding or repairing buildings damaged by the tsunami. In the investigation of the population exposed to tsunami inundation we have used the interpolation method to smooth out the population geographical distribution per building block within the inundation zone. Then, the population distribution was correlated with tsunami hydrodynamic parameters in the inundation zone. The last approach of tsunami risk 8. Study on tsunami due to offshore earthquakes for Korea coast. Literature survey and numerical simulation on earthquake and tsunami in the Japan Sea and the East China Sea International Nuclear Information System (INIS) Matsuyama, Masafumi; Aoyagi, Yasuhira; Inoue, Daiei; Choi, Weon-Hack; Kang, Keum-Seok 2008-01-01 In Korea, there has been a concern on tsumami risks for the Nuclear Power Plants since the 1983 Nihonkai-Chubu earthquake tsunami. The maximum run-up height reached 4 m to north of the Ulchin nuclear power plant site. The east coast of Korea was also attacked by a few meters high tsunami generated by the 1993 Hokkaido Nansei-Oki earthquake. Both source areas of them were in the areas western off Hokkaido to the eastern margin of the Japan Sea, which remains another tsunami potential. Therefore it is necessary to study tsunami risks for coast of Korea by means of geological investigation and numerical simulation. Historical records of earthquake and tsunami in the Japan Sea were re-compiled to evaluate tsunami potential. A database of marine active faults in the Japan Sea was compiled to decide a regional potential of tsunami. Many developed reverse faults are found in the areas western off Hokkaido to the eastern margin of the Japan Sea. 
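The Heraklion risk evaluation above combines building counts per block, damage levels from a SCHEMA-type vulnerability assessment, and official cost rates into a repair-cost estimate. A toy aggregation in that spirit is sketched below; every count, damage probability, and cost rate is an invented placeholder rather than a value from the study.

```python
# Toy repair-cost aggregation in the spirit of the Heraklion assessment above.
# Every number here is an invented placeholder, not a value from the study.
building_stock = {            # building type -> buildings inside the inundation zone
    "reinforced_concrete": 120,
    "masonry": 60,
}
damage_probability = {        # P(damage level | building type) given tsunami inundation
    "reinforced_concrete": {"light": 0.50, "heavy": 0.30, "collapse": 0.05},
    "masonry":             {"light": 0.30, "heavy": 0.40, "collapse": 0.20},
}
cost_rate_eur = {"light": 10_000, "heavy": 60_000, "collapse": 150_000}  # per building

total = 0.0
for btype, count in building_stock.items():
    expected_per_building = sum(p * cost_rate_eur[level]
                                for level, p in damage_probability[btype].items())
    total += count * expected_per_building
    print(f"{btype:20s}: {count:4d} buildings, expected cost "
          f"{count * expected_per_building / 1e6:.2f} M EUR")
print(f"total expected repair cost: {total / 1e6:.2f} M EUR")
```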
The authors have found no historical earthquake in the East China Sea which caused a tsunami observed at the coast of Korea. Therefore, five fault models were determined on the basis of the analysis of historical records and recent research results on fault parameters and tsunamis. Tsunami heights were estimated by numerical simulation based on nonlinear dispersive wave theory. The results of the simulations indicate that the tsunami heights in these cases are less than 0.25 m along the coast of Korea, and the tsunami risk from these assumed faults does not lead to severe impact. It is concluded that tsunamis occurring in the areas western off Hokkaido to the eastern margin of the Japan Sea consequently lead to the most significant impact on Korea. (author) 9. Evaluation of Tsunami-HySEA for tsunami forecasting at selected locations in U.S. Science.gov (United States) Gonzalez Vida, J. M., Sr.; Ortega, S.; Castro, M. J.; de la Asuncion, M.; Arcas, D. 2017-12-01 The GPU-based Tsunami-HySEA model (Macias, J. et al., Pure and Applied Geophysics, 1-37, 2017, Lynett, P. et al., Ocean modeling, 114, 2017) is used to test four tsunami events: the January 13, 2007 earthquake in the Kuril Islands (Mw 8.1), the September 29, 2009 earthquake in Samoa (Mw 8.3), the February 27, 2010 earthquake in Chile (Mw 8.8) and the March 11, 2011 earthquake in Tohoku (Mw 9.0).
The proposed procedure was applied to estimate the probabilities of exceedance and return periods for tsunamis in the tsunamigenic regions of Japan, Kuril-Kamchatka, and South America. 11. TSUNAMI INFORMATION SOURCES PART 3 Directory of Open Access Journals (Sweden) Robert L. Wiegel 2009-01-01 Full Text Available This is Part 3 of Tsunami Information Sources published by Robert L. Wiegel, as Technical Report UCB/HEL 2006-3 of the Hydraulic Engineering Laboratory of the Department of Civil & Environmental Engineering of the University of California at Berkeley. Part 3 is published in "SCIENCE OF TSUNAMI HAZARDS" -with the author's permission -so that it can receive wider distribution and use by the Tsunami Scientific Community. 12. Modelling of Charles Darwin's tsunami reports Science.gov (United States) Galiev, Shamil 2010-05-01 Darwin landed at Valdivia and Concepcion, Chile, just before, during, and after a great 1835 earthquake. He described his impressions and results of the earthquake-induced natural catastrophe in The Voyage of the Beagle. His description of the tsunami could easily be read as a report from Indonesia or Sri Lanka, after the catastrophic tsunami of 26 December 2004. In particular, Darwin emphasised the dependence of earthquake-induced waves on a form of the coast and the coastal depth: ‘… Talcuhano and Callao are situated at the head of great shoaling bays, and they have always suffered from this phenomenon; whereas, the town of Valparaiso, which is seated close on the border of a profound ocean... has never been overwhelmed by one of these terrific deluges…' . He reports also, that ‘… the whole body of the sea retires from the coast, and then returns in great waves of overwhelming force ...' (we cite the Darwin's sentences following researchspace. auckland. ac. nz/handle/2292/4474). The coastal evolution of a tsunami was analytically studied in many publications (see, for example, Synolakis, C.E., Bernard, E.N., 2006. Philos. Trans. R. Soc., Ser. A, 364, 2231-2265; Tinti, S., Tonini, R. 205. J.Fluid Mech., 535, 11-21). However, the Darwin's reports and the influence of the coastal depth on the formation and the evolution of the steep front and the profile of tsunami did not practically discuss. Recently, a mathematical theory of these phenomena was presented in researchspace. auckland. ac. nz/handle/2292/4474. The theory describes the waves which are excited due to nonlinear effects within a shallow coastal zone. The tsunami elevation is described by two components: . Here is the linear (prime) component. It describes the wave coming from the deep ocean. is the nonlinear component. This component may become very important near the coastal line. After that the theory of the shallow waves is used. This theory yields the linear equation for and the weakly 13. A Case Study of Array-based Early Warning System for Tsunami Offshore Ventura, California Science.gov (United States) Xie, Y.; Meng, L. 2017-12-01 Extreme scenarios of M 7.5+ earthquakes on the Red Mountain and Pitas Point faults can potentially generate significant local tsunamis in southern California. The maximum water elevation could be as large as 10 m in the nearshore region of Oxnard and Santa Barbara. Recent development in seismic array processing enables rapid tsunami prediction and early warning based on the back-projection approach (BP). The idea is to estimate the rupture size by back-tracing the seismic body waves recorded by stations at local and regional distances. 
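The array-based early-warning study (item 13 above, detailed further in the following lines) locates the high-frequency source by intersecting back-azimuths measured at two station clusters. The geometry of that intersection is sketched below on a flat-earth, local Cartesian approximation; the cluster coordinates and azimuths are invented for illustration.

```python
import numpy as np

def intersect_back_azimuths(p1, az1_deg, p2, az2_deg):
    """Intersect two rays, each given by a station-cluster position (km, local Cartesian,
    x east / y north) and a back-azimuth toward the source (degrees clockwise from north)."""
    d1 = np.array([np.sin(np.radians(az1_deg)), np.cos(np.radians(az1_deg))])
    d2 = np.array([np.sin(np.radians(az2_deg)), np.cos(np.radians(az2_deg))])
    # Solve p1 + t1*d1 = p2 + t2*d2 for (t1, t2).
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1

# Invented example: two clusters and their measured back-azimuths toward the source.
cluster_a = (0.0, 0.0)      # km
cluster_b = (150.0, -20.0)  # km
source = intersect_back_azimuths(cluster_a, 225.0, cluster_b, 265.0)
print(f"estimated source location: x = {source[0]:.1f} km, y = {source[1]:.1f} km")
```

In practice each cluster yields a moving set of radiators rather than a single bearing, and the intersected points are enclosed to approximate the rupture area, as described in the abstract.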
A simplified source model of uniform slip is constructed and used as an input for tsunami simulations that predict the tsunami wave height and arrival time. We demonstrate the feasibility of this approach in southern California by implementing it in a simulated real-time environment and applying to a hypothetical M 7.7 Dip-slip earthquake scenario on the Pitas Point fault. Synthetic seismograms are produced using the SCEC broadband platform based on the 3D SoCal community velocity model. We use S-wave instead of P-wave to avoid S-minus-P travel times shorter than rupture duration. Two clusters of strong-motion stations near Bakersfield and Palmdale are selected to determine the back-azimuth of the strongest high-frequency radiations (0.5-1 Hz). The back-azimuths of the two clusters are then intersected to locate the source positions. The rupture area is then approximated by enclosing these BP radiators with an ellipse or a polygon. Our preliminary results show that the extent of 1294 square kilometers rupture area and magnitude of 7.6 obtained by this approach is reasonably close to the 1849 square kilometers and 7.7 of the input model. The average slip of 7.3 m is then estimated according to the scaling relation between slip and rupture area, which is close to the actual average dislocation amount, 8.3 m. Finally, a tsunami simulation is conducted to assess the wave height and arrival time. The errors of -3 to +9 s in arrival time 14. Tsunami evacuation mathematical model for the city of Padang International Nuclear Information System (INIS) Kusdiantara, R.; Hadianti, R.; Badri Kusuma, M. S.; Soewono, E. 2012-01-01 Tsunami is a series of wave trains which travels with high speed on the sea surface. This traveling wave is caused by the displacement of a large volume of water after the occurrence of an underwater earthquake or volcano eruptions. The speed of tsunami decreases when it reaches the sea shore along with the increase of its amplitudes. Two large tsunamis had occurred in the last decades in Indonesia with huge casualties and large damages. Indonesian Tsunami Early Warning System has been installed along the west coast of Sumatra. This early warning system will give about 10-15 minutes to evacuate people from high risk regions to the safe areas. Here in this paper, a mathematical model for Tsunami evacuation is presented with the city of Padang as a study case. In the model, the safe areas are chosen from the existing and selected high rise buildings, low risk region with relatively high altitude and (proposed to be built) a flyover ring road. Each gathering points are located in the radius of approximately 1 km from the ring road. The model is formulated as an optimization problem with the total normalized evacuation time as the objective function. The constraints consist of maximum allowable evacuation time in each route, maximum capacity of each safe area, and the number of people to be evacuated. The optimization problem is solved numerically using linear programming method with Matlab. Numerical results are shown for various evacuation scenarios for the city of Padang. 15. Tsunami evacuation mathematical model for the city of Padang Energy Technology Data Exchange (ETDEWEB) Kusdiantara, R.; Hadianti, R.; Badri Kusuma, M. S.; Soewono, E. 
[Department of Mathematics Institut Teknologi Bandung, Bandung 40132 (Indonesia); Department of Civil Engineering Institut Teknologi Bandung, Bandung 40132 (Indonesia); Department of Mathematics Institut Teknologi Bandung, Bandung 40132 (Indonesia) 2012-05-22 Tsunami is a series of wave trains which travels with high speed on the sea surface. This traveling wave is caused by the displacement of a large volume of water after the occurrence of an underwater earthquake or volcano eruptions. The speed of tsunami decreases when it reaches the sea shore along with the increase of its amplitudes. Two large tsunamis had occurred in the last decades in Indonesia with huge casualties and large damages. Indonesian Tsunami Early Warning System has been installed along the west coast of Sumatra. This early warning system will give about 10-15 minutes to evacuate people from high risk regions to the safe areas. Here in this paper, a mathematical model for Tsunami evacuation is presented with the city of Padang as a study case. In the model, the safe areas are chosen from the existing and selected high rise buildings, low risk region with relatively high altitude and (proposed to be built) a flyover ring road. Each gathering points are located in the radius of approximately 1 km from the ring road. The model is formulated as an optimization problem with the total normalized evacuation time as the objective function. The constraints consist of maximum allowable evacuation time in each route, maximum capacity of each safe area, and the number of people to be evacuated. The optimization problem is solved numerically using linear programming method with Matlab. Numerical results are shown for various evacuation scenarios for the city of Padang. 16. Inundation mapping – a study based on December 2004 Tsunami Hazard along Chennai coast, Southeast India Directory of Open Access Journals (Sweden) C. Satheesh Kumar 2008-07-01 Full Text Available Tsunami impact study has been undertaken along Chennai coast starting from Pulicat to Kovalam. The study area Chennai coast is mainly devoted to prepare large scale action plan maps on tsunami inundation incorporating land use details derived from satellite data along with cadastral data using a GIS tool. Under tsunami inundation mapping along Chennai coast an integrated approach was adopted to prepare thematic maps on land use/land cover and coastal geomorphology using multispectral remote sensing data. The RTK dGPS instruments are used to collect elevation contour data at 0.5 m intervals for the Chennai coast. The GIS tool has been used to incorporate the elevation data, tsunami inundation markings obtained immediately after tsunami and thematic maps derived from remote sensing data. The outcome of this study provides an important clue on variations in tsunami inundation along Chennai coast, which is mainly controlled by local geomorphologic set-up, coastal zone elevation including coastal erosion protection measures and near shore bathymetry. This study highlights the information regarding most vulnerable areas of tsunami and also provides indication to demarcate suitable sites for rehabilitation. 17. Coastal Amplification Laws for the French Tsunami Warning Center: Numerical Modeling and Fast Estimate of Tsunami Wave Heights Along the French Riviera Science.gov (United States) Gailler, A.; Hébert, H.; Schindelé, F.; Reymond, D. 
2018-04-01 Tsunami modeling tools in the French Tsunami Warning Center operational context provide rapidly derived warning levels with a dimensionless variable at basin scale. A new forecast method based on coastal amplification laws has been tested to estimate the tsunami onshore height, with a focus on the French Riviera test-site (Nice area). This fast prediction tool provides a coastal tsunami height distribution, calculated from the numerical simulation of the deep ocean tsunami amplitude and using a transfer function derived from Green's law. Due to a lack of tsunami observations in the western Mediterranean basin, coastal amplification parameters are here defined with respect to high-resolution nested-grid simulations. The preliminary results for the Nice test site on the basis of nine historical and synthetic sources show good agreement with the time-consuming high-resolution modeling: the linear approximation is obtained within 1 min in general and provides estimates within a factor of two in amplitude, although the resonance effects in harbors and bays are not reproduced. In Nice harbor especially, the variation in tsunami amplitude cannot really be assessed because of the range of magnitudes and maximum-energy azimuths of the possible events to account for. However, this method is well suited for a fast first estimate of the coastal tsunami threat forecast. 18. The SAFRR Tsunami Scenario Science.gov (United States) Porter, K.; Jones, Lucile M.; Ross, Stephanie L.; Borrero, J.; Bwarie, J.; Dykstra, D.; Geist, Eric L.; Johnson, L.; Kirby, Stephen H.; Long, K.; Lynett, P.; Miller, K.; Mortensen, Carl E.; Perry, S.; Plumlee, G.; Real, C.; Ritchie, L.; Scawthorn, C.; Thio, H.K.; Wein, Anne; Whitmore, P.; Wilson, R.; Wood, Nathan J.; Ostbo, Bruce I.; Oates, Don 2013-01-01 The U.S. Geological Survey and several partners operate a program called Science Application for Risk Reduction (SAFRR) that produces (among other things) emergency planning scenarios for natural disasters. The scenarios show how science can be used to enhance community resiliency. The SAFRR Tsunami Scenario describes potential impacts of a hypothetical, but realistic, tsunami affecting California (as well as the west coast of the United States, Alaska, and Hawaii) for the purpose of informing planning and mitigation decisions by a variety of stakeholders. The scenario begins with an Mw 9.1 earthquake off the Alaska Peninsula. With Pacific basin-wide modeling, we estimate up to 5 m waves and 10 m/s currents would strike California 5 hours later. In marinas and harbors, 13,000 small boats are damaged or sunk (1 in 3) at a cost of $350 million, causing navigation and environmental problems. Damage in the Ports of Los Angeles and Long Beach amounts to $110 million, half of it water damage to vehicles and containerized cargo. Flooding of coastal communities affects 1800 city blocks, resulting in $640 million in damage. The tsunami damages 12 bridge abutments and 16 lane-miles of coastal roadway, costing $85 million to repair. Fire and business interruption losses will substantially add to direct losses. Flooding affects 170,000 residents and workers. A wide range of environmental impacts could occur. An extensive public education and outreach program is underway, as well as an evaluation of the overall effort.
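The coastal amplification forecast described in item 17 above rests on a Green's-law transfer from deep-water amplitude to the shore, A_coast ≈ A_deep * (h_deep / h_coast)^(1/4). A one-line sketch of that relation is given below; the site-specific correction factor and the example numbers are assumptions, since the operational parameters are calibrated per site from nested-grid simulations.

```python
def greens_law_amplitude(a_deep_m: float, h_deep_m: float, h_coast_m: float,
                         site_factor: float = 1.0) -> float:
    """Green's law shoaling estimate: A_coast = site_factor * A_deep * (h_deep/h_coast)**0.25.
    `site_factor` stands in for the empirically tuned coastal-amplification parameter
    (an assumption here; operational values are calibrated per site)."""
    return site_factor * a_deep_m * (h_deep_m / h_coast_m) ** 0.25

# Illustrative numbers only: 10 cm offshore amplitude over 2500 m depth, shoaling to 10 m depth.
print(f"estimated coastal amplitude: {greens_law_amplitude(0.10, 2500.0, 10.0):.2f} m")
```

As the abstract notes, such a transfer cannot capture resonance in harbors and bays, which is why it is framed as a fast first estimate rather than a replacement for high-resolution modeling.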
19. Post Fukushima tsunami simulations for Malaysian coasts Energy Technology Data Exchange (ETDEWEB) Koh, Hock Lye [Office of Deputy Vice Chancellor for Research and Post Graduate Studies, UCSI University, Jalan Menara Gading, 56000 Kuala Lumpur (Malaysia); Teh, Su Yean [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Pulau Pinang (Malaysia); Abas, Mohd Rosaidi Che [Malaysian Meteorological Department, MOSTI, Kuala Lumpur (Malaysia) 2014-10-24 The recent recurrences of mega tsunamis in the Asian region have rekindled concern regarding potential tsunamis that could inflict severe damage to affected coastal facilities and communities. The 11 March 2011 Fukushima tsunami that crippled nuclear power plants in Northern Japan has further raised the level of caution. The recent discovery of petroleum reserves in the coastal water surrounding Malaysia further ignites the concern regarding tsunami hazards to petroleum facilities located along affected coasts. Working in a group, federal government agencies seek to understand the dynamics of tsunamis and their impacts under the coordination of the Malaysian National Centre for Tsunami Research, Malaysian Meteorological Department. Knowledge regarding the generation, propagation and runup of tsunami would provide the scientific basis to address safety issues. An in-house tsunami simulation model known as TUNA has been developed by the authors to assess tsunami hazards along affected beaches so that mitigation measures could be put in place. Capacity building on tsunami simulation plays a critical role in the development of tsunami resilience. This paper aims to first provide a simple introduction to tsunami simulation towards the achievement of tsunami simulation capacity building. The paper will also present several scenarios of tsunami dangers along affected Malaysian coastal regions via TUNA simulations to highlight tsunami threats. The choice of tsunami generation parameters reflects the concern following the Fukushima tsunami. 20. Physical Modeling of Tsunamis Generated By 3D Deformable Landslides in Various Scenarios From Fjords to Conical Islands Science.gov (United States) McFall, B. C.; Fritz, H. M. 2013-12-01 Tsunamis generated by landslides and volcano flank collapse can be particularly devastative in the near field region due to locally high wave amplitudes and runup. The events of 1958 Lituya Bay, 1963 Vajont reservoir, 1980 Spirit Lake, 2002 Stromboli and 2010 Haiti demonstrate the danger of tsunamis generated by landslides or volcano flank collapses. Unfortunately, critical field data from these events are lacking. Source and runup scenarios based on real world events are physically modeled using generalized Froude similarity in the three dimensional NEES tsunami wave basin at Oregon State University. A novel pneumatic landslide tsunami generator (LTG) was deployed to simulate landslides with varying geometry and kinematics. Two different materials are used to simulate landslides to study the granulometry effects: naturally rounded river gravel and cobble mixtures. The LTG consists of a sliding box filled with 1,350 kg of landslide material which is accelerated by means of four pneumatic pistons down a 2H:1V slope. The landslide is launched from the sliding box and continues to accelerate by gravitational forces up to velocities of 5 m/s. The landslide Froude number at impact with the water is in the range of 1 […]. Wave elevations are recorded by an array of resistance wave gauges.
The landslide deformation is measured from above and underwater camera recordings. The landslide deposit is measured on the basin floor with a multiple transducer acoustic array (MTA). Landslide surface reconstruction and kinematics are determined with a stereo particle image velocimetry (PIV) system. Wave runup is recorded with resistance wave gauges along the slope and verified with video image processing. The measured landslide and wave parameters are 1. Stochastic evaluation of tsunami inundation and quantitative estimating tsunami risk International Nuclear Information System (INIS) Fukutani, Yo; Anawat, Suppasri; Abe, Yoshi; Imamura, Fumihiko 2014-01-01 We performed a stochastic evaluation of tsunami inundation by using results of stochastic tsunami hazard assessment at the Soma port in the Tohoku coastal area. Eleven fault zones along the Japan trench were selected as earthquake faults generating tsunamis. The results show that estimated inundation area of return period about 1200 years had good agreement with that in the 2011 Tohoku earthquake. In addition, we evaluated quantitatively tsunami risk for four types of building; a reinforced concrete, a steel, a brick and a wood at the Soma port by combining the results of inundation assessment and tsunami fragility assessment. The results of quantitative estimating risk would reflect properly vulnerability of the buildings, that the wood building has high risk and the reinforced concrete building has low risk. (author) 2. A Reverse Tracking Method to Analyze the 1867 Keelung Tsunami Event Science.gov (United States) Lee, C.; Wu, T.; Tsai, Y.; KO, L.; Chuang, M. 2013-12-01 event was most likely triggered by a near-field submarine landslide just outside the Keelung harbor. The potential tsunami sources from Mien-Hwa Canyon and submarine volcanos should also be noted. The result of this study is important not only for densely populated cities in northern Taiwan, but also for the three nuclear power plants nearby. The detailed scenario results will be presented in the full paper. Fig. 1. The map of Reverse Tracking Method (RTM) in northern Taiwan. Black dots show the relative location between Keelung city, Jinshan and Patouzu areas. Red dots present the nuclear power plants (NPP1, NPP2, and NPP4). Green dots present the sedimentary evidence discovered on Hoping Island. Color indicates the maximum flux of tsunami propagation. 3. Physical Observations of the Tsunami during the September 8th 2017 Tehuantepec, Mexico Earthquake Science.gov (United States) Ramirez-Herrera, M. T.; Corona, N.; Ruiz-Angulo, A.; Melgar, D.; Zavala-Hidalgo, J. 2017-12-01 The September 8th 2017, Mw8.2 earthquake offshore Chiapas, Mexico, is the largest earthquake recorded history in Chiapas since 1902. It caused damage in the states of Oaxaca, Chiapas and Tabasco; it had more than 100 fatalities, over 1.5 million people were affected, and 41,000 homes were damaged in the state of Chiapas alone. This earthquake, a deep intraplate event on a normal fault on the oceanic subducting plate, generated a tsunami recorded at several tide gauge stations in Mexico and on the Pacific Ocean. Here we report the physical effects of the tsunami on the Chiapas coast and analyze the societal implications of this tsunami on the basis of our field observations. Tide gauge data indicate 11.3 and 8.2 cm of coastal subsidence at Salina Cruz and Puerto Chiapas stations. The associated tsunami waves were recorded first at Salina Cruz tide gauge station at 5:13 (GMT). 
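The stochastic inundation-risk study (item 1 above) combines inundation results with tsunami fragility curves for different building classes. Fragility curves are commonly expressed as lognormal CDFs of damage probability versus flow depth, and a minimal sketch of that form is given below; the median depths and log-standard deviations are invented placeholders, not the fitted values for the Soma port.

```python
import math

def lognormal_fragility(depth_m: float, median_m: float, beta: float) -> float:
    """P(damage | flow depth) as a lognormal CDF -- the usual form of a tsunami
    fragility curve. `median_m` and `beta` are placeholders, not fitted values."""
    if depth_m <= 0:
        return 0.0
    z = (math.log(depth_m) - math.log(median_m)) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Invented parameters for two building classes (wood fragile, reinforced concrete robust).
classes = {"wood": (2.0, 0.6), "reinforced_concrete": (6.0, 0.7)}
for depth in (1.0, 2.0, 4.0):
    probs = ", ".join(f"{name}: {lognormal_fragility(depth, m, b):.2f}"
                      for name, (m, b) in classes.items())
    print(f"flow depth {depth:.0f} m -> P(damage)  {probs}")
```

Multiplying such probabilities by the stochastic inundation results and asset values gives the quantitative risk contrast between wood and reinforced-concrete buildings reported in the abstract.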
We covered ground observations along 41 km of the coast of Chiapas, encompassing the sites with the highest projected wave heights based on the preliminary tsunami model (maximum tsunami amplitudes between -94.5 and -93.0 W). Runup and inundation distances were measured with an RTK GPS and using a Sokkia B40 level along 8 sites. We corrected runup data with estimated astronomical tide levels at the time of the tsunami. The tsunami occurred at low tide. The maximum runup was 3 m at Boca del Cielo, and maximum inundation distance was 190 m in Puerto Arista, corresponding to the coast directly opposite the epicenter and in the central sector of the Gulf of Tehuantepec. In general, our field data agree with the predicted results from the preliminary tsunami model. Tsunami scour and erosion was evident on the Chiapas coast. Tsunami deposits, mainly sand, reached up to 32 cm thickness thinning landwards up to 172 m distance. Even though the Mexican tsunami early warning system (CAT) issued several warnings, the tsunami arrival struck the Chiapas coast prior to the arrival of official warnings to the 4. Tsunami Source Modeling of the 2015 Volcanic Tsunami Earthquake near Torishima, South of Japan Science.gov (United States) Sandanbata, O.; Watada, S.; Satake, K.; Fukao, Y.; Sugioka, H.; Ito, A.; Shiobara, H. 2017-12-01 An abnormal earthquake occurred at a submarine volcano named Smith Caldera, near Torishima Island on the Izu-Bonin arc, on May 2, 2015. The earthquake, which hereafter we call "the 2015 Torishima earthquake," has a CLVD-type focal mechanism with a moderate seismic magnitude (M5.7) but generated larger tsunami waves with an observed maximum height of 50 cm at Hachijo Island [JMA, 2015], so that the earthquake can be regarded as a "tsunami earthquake." In the region, similar tsunami earthquakes were observed in 1984, 1996 and 2006, but their physical mechanisms are still not well understood. Tsunami waves generated by the 2015 earthquake were recorded by an array of ocean bottom pressure (OBP) gauges, 100 km northeastern away from the epicenter. The waves initiated with a small downward signal of 0.1 cm and reached peak amplitude (1.5-2.0 cm) of leading upward signals followed by continuous oscillations [Fukao et al., 2016]. For modeling its tsunami source, or sea-surface displacement, we perform tsunami waveform simulations, and compare synthetic and observed waveforms at the OBP gauges. The linear Boussinesq equations are adapted with the tsunami simulation code, JAGURS [Baba et al., 2015]. We first assume a Gaussian-shaped sea-surface uplift of 1.0 m with a source size comparable to Smith Caldera, 6-7 km in diameter. By shifting source location around the caldera, we found the uplift is probably located within the caldera rim, as suggested by Sandanbata et al. [2016]. However, synthetic waves show no initial downward signal that was observed at the OBP gauges. Hence, we add a ring of subsidence surrounding the main uplift, and examine sizes and amplitudes of the main uplift and the subsidence ring. As a result, the model of a main uplift of around 1.0 m with a radius of 4 km surrounded by a ring of small subsidence shows good agreement of synthetic and observed waveforms. The results yield two implications for the deformation process that help us to understanding 5. Making Multi-Level Tsunami Evacuation Playbooks Operational in California and Hawaii Science.gov (United States) Wilson, R. I.; Peterson, D.; Fryer, G. 
J.; Miller, K.; Nicolini, T.; Popham, C.; Richards, K.; Whitmore, P.; Wood, N. J. 2016-12-01 In the aftermath of the 2010 Chile, 2011 Japan, and 2012 Haida Gwaii tsunamis in California and Hawaii, coastal emergency managers requested that state and federal tsunami programs investigate providing more detailed information about the flood potential and recommended evacuation for distant-source tsunamis well ahead of their arrival time. Evacuation "Playbooks" for tsunamis of variable sizes and source locations have been developed for some communities in the two states, providing secondary options to an all or nothing approach for evacuation. Playbooks have been finalized for nearly 70% of the coastal communities in California, and have been drafted for evaluation by the communities of Honolulu and Hilo in Hawaii. A key component to determining a recommended level of evacuation during a distant-source tsunami and making the Playbooks operational has been the development of the "FASTER" approach, an acronym for factors that influence the tsunami flood hazard for a community: Forecast Amplitude, Storm, Tides, Error in forecast, and the Run-up potential. Within the first couple hours after a tsunami is generated, the FASTER flood elevation value will be computed and used to select the appropriate minimum tsunami phase evacuation "Playbook" for use by the coastal communities. The states of California and Hawaii, the tsunami warning centers, and local weather service offices are working together to deliver recommendations on the appropriate evacuation Playbook plans for communities to use prior to the arrival of a distant-source tsunami. These partners are working closely with individual communities on developing conservative and consistent protocols on the use of the Playbooks. Playbooks help provide a scientifically-based, minimum response for small- to moderate-size tsunamis which could reduce the potential for over-evacuation of hundreds of thousands of people and save hundreds of millions of dollars in evacuation costs for communities and businesses. 6. Tsunami Simulators in Physical Modelling Laboratories - From Concept to Proven Technique Science.gov (United States) Allsop, W.; Chandler, I.; Rossetto, T.; McGovern, D.; Petrone, C.; Robinson, D. 2016-12-01 Before 2004, there was little public awareness around Indian Ocean coasts of the potential size and effects of tsunami. Even in 2011, the scale and extent of devastation by the Japan East Coast Tsunami was unexpected. There were very few engineering tools to assess onshore impacts of tsunami, so no agreement on robust methods to predict forces on coastal defences, buildings or related infrastructure. Modelling generally used substantial simplifications of either solitary waves (far too short durations) or dam break (unrealistic and/or uncontrolled wave forms).This presentation will describe research from EPI-centre, HYDRALAB IV, URBANWAVES and CRUST projects over the last 10 years that have developed and refined pneumatic Tsunami Simulators for the hydraulic laboratory. These unique devices have been used to model generic elevated and N-wave tsunamis up to and over simple shorelines, and at example defences. They have reproduced full-duration tsunamis including the Mercator trace from 2004 at 1:50 scale. Engineering scale models subjected to those tsunamis have measured wave run-up on simple slopes, forces on idealised sea defences and pressures / forces on buildings. 
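The Playbook entry above builds its evacuation recommendation around the FASTER flood elevation (Forecast Amplitude, Storm, Tides, Error in forecast, Run-up potential). A minimal sketch of how such a value might be combined and mapped to an evacuation phase is given below; the additive combination and the phase thresholds are assumptions made for illustration, not the operational procedure.

```python
# Illustrative only: the abstract names the FASTER factors; how they are
# combined operationally and the phase thresholds below are assumptions.

def faster_flood_elevation(forecast_amplitude_m, storm_surge_m, tide_m,
                           forecast_error_m, runup_allowance_m):
    """Combine the FASTER factors into a single flood-elevation estimate."""
    return (forecast_amplitude_m + storm_surge_m + tide_m
            + forecast_error_m + runup_allowance_m)

def evacuation_phase(flood_elevation_m, phase_thresholds_m=(1.0, 3.0)):
    """Map a flood elevation onto a hypothetical three-phase playbook."""
    low, high = phase_thresholds_m
    if flood_elevation_m < low:
        return "beach and harbor closure only"
    if flood_elevation_m < high:
        return "partial evacuation of low-lying zones"
    return "full evacuation of the mapped tsunami zone"

elev = faster_flood_elevation(1.2, 0.2, 0.6, 0.3, 0.4)
print(round(elev, 2), "->", evacuation_phase(elev))
```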
This presentation will describe how these pneumatic Tsunami Simulators work, demonstrate how they have generated tsunami waves longer than the facility within which they operate, and will highlight research results from the three generations of Tsunami Simulator. Of direct relevance to engineers and modellers will be measurements of wave run-up levels and comparison with theoretical predictions. Recent measurements of forces on individual buildings have been generalized by separate experiments on buildings (up to 4 rows) which show that the greatest forces can act on the landward (not seaward) buildings. Continuing research in the 70 m long, 4 m wide Fast Flow Facility on tsunami defence structures has also measured forces on buildings in the lee of a failed defence wall. 7. Prediction of Tsunami Inundation in the City of Lisbon (Portugal) Science.gov (United States) Baptista, M.; Miranda, J.; Omira, R.; Catalao Fernandes, J. 2010-12-01 Lisbon city is located inside the estuary of the Tagus river, 20 km away from the Atlantic Ocean. The city suffered great damage from tsunamis and its downtown was flooded at least twice, in 1531 and 1755. Since the installation of the tide-gage network in the area, three tsunamis caused by submarine earthquakes were recorded, in November 1941, February 1969 and May 1975. The most destructive tsunamis listed along the Tagus Estuary are the 26th January 1531 event, a local tsunami restricted to the Tagus Estuary, and the well known 1st November 1755 transoceanic event, both following highly destructive earthquakes which deeply affected Lisbon. The economic losses due to the impact of the 1755 tsunami on one of Europe's main 18th century harbors and commercial fleets were enormous. Since then the Tagus estuary has suffered strong morphologic changes, mainly due to dredging works and the construction of commercial and industrial facilities and recreational docks, some of them already designed to protect Lisbon. In this study we present preliminary inundation maps for the Tagus estuary area in Lisbon County, for conditions similar to the 1755 tsunami event, but using present-day bathymetric and topographic maps. Inundation modelling is made using non-linear shallow water theory and the numerical code is based upon the COMCOT code. Nested grid resolutions used in this study are 800 m, 200 m and 50 m, respectively. The inundation is discussed in terms of flow depth, run-up height, maximum inundation area and current flow velocity. The effects of estuary modifications on tsunami propagation are also investigated. 8. On The Source Of The 25 November 1941 - Atlantic Tsunami Science.gov (United States) Baptista, M. A.; Lisboa, F. B.; Miranda, J. M. A. 2015-12-01 In this study we analyze the tsunami recorded in the North Atlantic following the 25 November 1941 earthquake. The earthquake, with a magnitude of 8.3 and located on the Gloria Fault, was one of the largest strike-slip events recorded. The Gloria fault is a 500 km long scarp in the North Atlantic Ocean between 19W and 24W known to be a segment of the Eurasia-Nubia plate boundary between Iberia and the Azores. Ten tide stations recorded the tsunami: six in Portugal (mainland, Azores and Madeira Islands), two in Morocco, one in the United Kingdom and one in Spain (Tenerife, Canary Islands). The tsunami waves reached the Azores and Madeira Islands less than one hour after the main shock. The tide station of Casablanca (in Morocco) recorded the maximum amplitude of 0.54 m. 
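For nested-grid shallow-water runs like the 800 m / 200 m / 50 m Lisbon setup above, the explicit time step is normally limited by the long-wave Courant condition dt ≤ C·dx/√(g·h_max). A small Python sketch of that check follows; the maximum depths assigned to each grid are placeholders, not values from the study.

```python
import math

def max_stable_dt(dx_m, max_depth_m, g=9.81, courant=0.8):
    """Largest stable time step for an explicit long-wave solver, where the
    wave celerity is sqrt(g*h); 'courant' is a safety factor below 1."""
    return courant * dx_m / math.sqrt(g * max_depth_m)

# Nested resolutions from the Lisbon study; maximum depths are placeholders.
for dx, depth in [(800.0, 5000.0), (200.0, 100.0), (50.0, 20.0)]:
    print(f"dx = {dx:6.0f} m, h_max = {depth:6.0f} m -> dt <= "
          f"{max_stable_dt(dx, depth):5.2f} s")
```

The finest grid usually dictates the smallest time step, which is one reason inundation-scale nests are restricted to the areas of interest.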
All amplitudes recorded are lower than 0.5 m, but the tsunami reached the Portuguese mainland in high-tide conditions and the sea flooded some streets. We analyze the 25 November 1941 tsunami data using the tide records from the coasts of Portugal, Spain, Morocco and the UK to infer its source. Wavelet analysis used to characterize the frequency content of the tide records shows predominant periods of 9-13 min and 18-22 min. A preliminary location of the tsunami source was obtained by Backward Ray Tracing (BRT). The results of the BRT technique are compatible with the epicenter location of the earthquake. We compute empirical Green functions for the earthquake generation area, and use a linear shallow water inversion technique to compute the initial water displacement. The comparison between forward modeling and observations shows fair agreement with the available data. This work received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 603839 (Project ASTARTE - Assessment, Strategy and Risk Reduction for Tsunamis in Europe)" 9. Kaitan antara karakteristik pantai Provinsi Sumatera Barat dengan potensi kerawanan tsunami Directory of Open Access Journals (Sweden) Yudhicara Yudhicara 2014-06-01 Full Text Available http://dx.doi.org/10.17014/ijog.vol3no2.20084 The coast of West Sumatera Province has two types of beaches: low-lying sandy beach and steep rocky beach. Straight shoreline beaches from Padang beach to Air Bangis in the north and between Pasir Ganting and Salido beach in the south will have a potential tsunami height lower than bay-shaped beaches such as Kasai Bay, Kabung Bay, Batung Bay and Nibung Bay. Tsunami inundation will extend further in a low-lying area (low-lying sandy beaches) compared with a coastal area which has a steep slope and high relief (steep rocky beaches). Gosong beach at Pariaman, which has a steep beach slope, will have a lower tsunami height compared with low-angle beach slopes such as Sungai Beramas, Kasai, Kabung, Batung and Nibung bays, which have beach slopes of about 3° to 5°. The maximum tsunami inundation is assumed to be located at Pasaman and Pasir Pariaman Sub-regencies, while the maximum tsunami height is assumed to be located at the middle of the mapped area, which has a bay shape. The tsunami is assumed to arrive earliest at the southernmost part of the mapped area, or close to Muko-muko (Bengkulu). The maximum height difference from sea level was found at Tabai - Pariaman, about 5.394 m, while the minimum height difference was found at Carocok Anau, about 1.821 m. The horizontal distance measured from the nearest building to the shoreline is about 119 to 173 m. The worst case of tsunami modeling assumed that the maximum tsunami height will be about 32 m, which was used as a reference to map tsunami-prone zonation into high, moderate and low prone areas. 10. The Global Tsunami Model (GTM) Science.gov (United States) Lorito, S.; Basili, R.; Harbitz, C. B.; Løvholt, F.; Polet, J.; Thio, H. K. 2017-12-01 The tsunamis that occurred worldwide in the last two decades have highlighted the need for a thorough understanding of the risk posed by relatively infrequent but often disastrous tsunamis and the importance of a comprehensive and consistent methodology for quantifying the hazard. In the last few years, several methods for probabilistic tsunami hazard analysis have been developed and applied to different parts of the world. 
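The 1941 study above reads dominant periods of 9-13 min and 18-22 min out of the tide records using wavelet analysis. As a rough stand-in for that step, the sketch below picks the most energetic periods of a one-minute-sampled record with a plain FFT periodogram; the record itself is synthetic and fabricated purely for illustration.

```python
import numpy as np

# Synthetic 6-hour "tide-gauge" record sampled once per minute, built from two
# oscillations in the period bands reported for the 1941 event (illustrative data).
dt_min = 1.0
t = np.arange(0, 360, dt_min)
record = (0.20 * np.sin(2 * np.pi * t / 11.0)
          + 0.10 * np.sin(2 * np.pi * t / 20.0)
          + 0.02 * np.random.default_rng(0).standard_normal(t.size))

# A plain FFT periodogram stands in here for the wavelet analysis of the paper.
spec = np.abs(np.fft.rfft(record - record.mean())) ** 2
freq = np.fft.rfftfreq(record.size, d=dt_min)   # cycles per minute
periods = 1.0 / freq[1:]                        # minutes (skip the zero frequency)

# Report the two most energetic periods
top = periods[np.argsort(spec[1:])[::-1][:2]]
print("dominant periods (min):", np.round(np.sort(top), 1))
```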
In an effort to coordinate and streamline these activities and make progress towards implementing the Sendai Framework of Disaster Risk Reduction (SFDRR) we have initiated a Global Tsunami Model (GTM) working group with the aim of i) enhancing our understanding of tsunami hazard and risk on a global scale and developing standards and guidelines for it, ii) providing a portfolio of validated tools for probabilistic tsunami hazard and risk assessment at a range of scales, and iii) developing a global tsunami hazard reference model. This GTM initiative has grown out of the tsunami component of the Global Assessment of Risk (GAR15), which has resulted in an initial global model of probabilistic tsunami hazard and risk. Started as an informal gathering of scientists interested in advancing tsunami hazard analysis, the GTM is currently in the process of being formalized through letters of interest from participating institutions. The initiative has now been endorsed by the United Nations International Strategy for Disaster Reduction (UNISDR) and the World Bank's Global Facility for Disaster Reduction and Recovery (GFDRR). We will provide an update on the state of the project and the overall technical framework, and discuss the technical issues that are currently being addressed, including earthquake source recurrence models, the use of aleatory variability and epistemic uncertainty, and preliminary results for a probabilistic global hazard assessment, which is an update of the model included in UNISDR GAR15. 11. Tsunami hazard assessment along Diba-Oman and Diba-Al-Emirates coasts Directory of Open Access Journals (Sweden) El-Hussain Issa 2017-01-01 Full Text Available Tsunami is among the most devastating natural hazards phenomenon responsible for significant loss of life and property throughout history. The Sultanate of Oman and United Arab Emirates are among the Indian Ocean countries that were subjected to one confirmed tsunami in November 27, 1945 due to an Mw 8.1 earthquake in Makran Subduction Zone. In this study, we present preliminary deterministic tsunami hazard assessment for the coasts of Diba Oman and Diba Al-Emirates, which are located on the western coast of the Oman Sea. The tsunami vulnerability of these cities increases due to the construction of many critical infrastructures and urban concentration along their coasts. Therefore, tsunami hazard assessment is necessary to mitigate the risk on the socio-economic system and sustainable developments. The major known source of tsunamis able to impact both coasts of Oman and United Arab Emirates is the Makran Subduction Zone (MSZ which extends for approximately 900 km. The deterministic approach uses specific scenarios considering the maximum credible earthquakes occurring in the MSZ and computes the ensuing tsunami impact in the coasts of the study area. The maximum wave height graphs and inundation maps are obtained for tsunami scenarios caused by 8.8 earthquake magnitude in eastern MSZ and 8.2 magnitude from western MSZ. The Mw8.8 eastern MSZ causes a maximum inundation distance of 447 meters and a maximum flow depth of 1.37 meter. Maximum inundation distance larger than 420 meters occurs due to the Mw8.2 western MSZ scenario. For this scenario, numerical simulations show a maximum flow depth of about 2.34 meters. 12. 
NOAA's Integrated Tsunami Database: Data for improved forecasts, warnings, research, and risk assessments Science.gov (United States) Stroker, Kelly; Dunbar, Paula; Mungov, George; Sweeney, Aaron; McCullough, Heather; Carignan, Kelly 2015-04-01 The National Oceanic and Atmospheric Administration (NOAA) has primary responsibility in the United States for tsunami forecast, warning, research, and supports community resiliency. NOAA's National Geophysical Data Center (NGDC) and co-located World Data Service for Geophysics provide a unique collection of data enabling communities to ensure preparedness and resilience to tsunami hazards. Immediately following a damaging or fatal tsunami event there is a need for authoritative data and information. The NGDC Global Historical Tsunami Database (http://www.ngdc.noaa.gov/hazard/) includes all tsunami events, regardless of intensity, as well as earthquakes and volcanic eruptions that caused fatalities, moderate damage, or generated a tsunami. The long-term data from these events, including photographs of damage, provide clues to what might happen in the future. NGDC catalogs the information on global historical tsunamis and uses these data to produce qualitative tsunami hazard assessments at regional levels. In addition to the socioeconomic effects of a tsunami, NGDC also obtains water level data from the coasts and the deep-ocean at stations operated by the NOAA/NOS Center for Operational Oceanographic Products and Services, the NOAA Tsunami Warning Centers, and the National Data Buoy Center (NDBC) and produces research-quality data to isolate seismic waves (in the case of the deep-ocean sites) and the tsunami signal. These water-level data provide evidence of sea-level fluctuation and possible inundation events. NGDC is also building high-resolution digital elevation models (DEMs) to support real-time forecasts, implemented at 75 US coastal communities. After a damaging or fatal event NGDC begins to collect and integrate data and information from many organizations into the hazards databases. Sources of data include our NOAA partners, the U.S. Geological Survey, the UNESCO Intergovernmental Oceanographic Commission (IOC) and International Tsunami Information Center 13. The effects of wind and rainfall on suspended sediment concentration related to the 2004 Indian Ocean tsunami International Nuclear Information System (INIS) Zhang Xinfeng; Tang Danling; Li Zizhen; Zhang Fengpan 2009-01-01 The effects of rainfall and wind speed on the dynamics of suspended sediment concentration (SSC), during the 2004 Indian Ocean tsunami, were analyzed using spatial statistical models. The results showed a positive effect of wind speed on SSC, and inconsistent effects (positive and negative) of rainfall on SSC. The effects of wind speed and rainfall on SSC weakened immediately around the tsunami, indicating tsunami-caused floods and earthquake-induced shaking may have suddenly disturbed the ocean-atmosphere interaction processes, and thus weakened the effects of wind speed and rainfall on SSC. Wind speed and rainfall increased markedly, and reached their maximum values immediately after the tsunami week. Rainfall at this particular week exceeded twice the average for the same period over the previous 4 years. The tsunami-affected air-sea interactions may have increased both wind speed and rainfall immediately after the tsunami week, which directly lead to the variations in SSC. 14. 
Comparison between the Coastal Impacts of Cyclone Nargis and the Indian Ocean Tsunami Science.gov (United States) Fritz, H. M.; Blount, C. 2009-12-01 On 26 December 2004 a great earthquake with a moment magnitude of 9.3 occurred off the north tip of Sumatra, Indonesia. The Indian Ocean tsunami claimed 230,000 lives making it the deadliest in recorded history. Less than 4 years later tropical cyclone Nargis (Cat. 4) made landfall in Myanmar's Ayeyarwady delta on 2 May 2008 causing the worst natural disaster in Myanmar's recorded history. Official death toll estimates exceed 138,000 fatalities making it the 7th deadliest cyclone ever recorded worldwide. The Bay of Bengal counts seven tropical cyclones with death tolls in excess of 100,000 striking India and Bangladesh in the past 425 years, which highlights the difference in return periods between extreme cyclones and tsunamis. Damage estimates at over $10 billion made Nargis the most damaging cyclone ever recorded in the Indian Ocean. Although the two natural disasters are completely different in their generation mechanisms, they both share massive coastal inundation as the primary cause of damage and death. While the damage patterns exhibit similarities, the forcing differs. The primary tsunami impact is dominated by the runup of a few main waves washing rapidly ashore and inducing high lateral forces. On the contrary, the tropical cyclone storm surge damage is the result of numerous storm waves continuously hitting the flooded structures on the elevated storm tide level. While coastal vegetation such as mangroves may be effective at reducing superimposed storm waves, they are limited in reducing storm surge. Unfortunately, mangroves have been significantly cut for charcoal and land use as rice paddies in Myanmar due to rapid population growth and economic reasons, thereby increasing coastal vulnerability and land loss due to erosion (Figure 1). The period of a storm surge is typically an order of magnitude longer than the period of a tsunami, resulting in significantly larger inundation distances along coastal plains and river deltas. The storm surge of cyclone Nargis 15. Far-field tsunami magnitude determined from ocean-bottom pressure gauge data around Japan Science.gov (United States) Baba, T.; Hirata, K.; Kaneda, Y. 2003-12-01 Tsunami magnitude is the most fundamental parameter to scale tsunamigenic earthquakes. According to Abe (1979), the tsunami magnitude, Mt, is empirically related to the crest-to-trough amplitude, H, of the far-field tsunami wave in meters (Mt = log H + 9.1). Here we investigate the far-field tsunami magnitude using ocean-bottom pressure gauge data. The recent ocean-bottom pressure measurements provide more precise tsunami data with a high signal-to-noise ratio. The Japan Marine Science and Technology Center is monitoring ocean bottom pressure fluctuations using two submarine cables at depths of 1500-2400 m. These geophysical observatory systems are located off Cape Muroto, Southwest Japan, and off Hokkaido, Northern Japan. The ocean-bottom pressure data recorded with the Muroto and Hokkaido systems have been collected continuously since March 1997 and October 1999, respectively. Over the period from March 1997 to June 2003, we have observed four far-field tsunami signals, generated by earthquakes, on ocean-bottom pressure records. These far-field tsunamis were generated by the 1998 Papua New Guinea eq. (Mw 7.0), 1999 Vanuatu eq. (Mw 7.2), 2001 Peru eq. 
(Mw 8.4) and 2002 Papua New Guinea eq. (Mw 7.6). A maximum amplitude of about 30 mm was recorded for the tsunami from the 2001 Peru earthquake. Direct application of Abe's empirical relation to ocean-bottom pressure gauge data underestimates tsunami magnitudes by about an order of magnitude. This is because Abe's empirical relation was derived only from tsunami amplitudes at coastal tide gauges, where the tsunami is amplified by the shoaling of the topography and the reflection at the coastline. However, these effects do not apply to offshore tsunamis in the deep ocean. In general, amplification due to shoaling near the coastline is governed by Green's Law, in which the tsunami amplitude is proportional to h^(-1/4), where h is the water depth. Wave amplitude also is 16. Assessment of tsunami hazard for coastal areas of Shandong Province, China Science.gov (United States) Feng, Xingru; Yin, Baoshu 2017-04-01 Shandong province is located on the east coast of China and has a coastline of about 3100 km. There are only a few tsunami events recorded in the history of Shandong Province, but tsunami hazard assessment is still necessary given the rapid economic development and increasing population of this area. The objective of this study was to evaluate the potential danger posed by tsunamis for Shandong Province. The numerical simulation method was adopted to assess the tsunami hazard for coastal areas of Shandong Province. The Cornell multi-grid coupled tsunami numerical model (COMCOT) was used and its efficacy was verified by comparison with three historical tsunami events. The simulated maximum tsunami wave height agreed well with the observational data. Based on previous studies and statistical analyses, multiple earthquake scenarios in eight seismic zones were designed, the magnitudes of which were set as the potential maximum values. Then, the tsunamis they induced were simulated using the COMCOT model to investigate their impact on the coastal areas of Shandong Province. The numerical results showed that the maximum tsunami wave height, which was caused by the earthquake scenario located in the sea area of the Mariana Islands, could reach up to 1.39 m off the eastern coast of Weihai city. The tsunamis from the seismic zones of the Bohai Sea, Okinawa Trough, and Manila Trench could also reach heights of >1 m in some areas, meaning that earthquakes in these zones should not be ignored. The inundation hazard was distributed primarily in some northern coastal areas near Yantai and southeastern coastal areas of Shandong Peninsula. When considering both the magnitude and arrival time of tsunamis, it is suggested that greater attention be paid to earthquakes that occur in the Bohai Sea. In conclusion, the tsunami hazard facing the coastal area of Shandong Province is not very serious; however, disasters could occur if such events coincided with spring tides or other 17. Tsunami vs Infragravity Surge: Statistics and Physical Character of Extreme Runup Science.gov (United States) Lynett, P. J.; Montoya, L. H. 2017-12-01 Motivated by recent observations of energetic and impulsive infragravity (IG) flooding events - also known as sneaker waves - we will present recent work on the relative probabilities and dynamics of extreme flooding events from tsunamis and long-period wind wave events. The discussion will be founded on videos and records of coastal flooding by both recent tsunamis and IG, such as those in the Philippines during Typhoon Haiyan. 
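The ocean-bottom pressure gauge entry above quotes Abe's relation Mt = log H + 9.1 for the far-field coastal amplitude H (in metres) and points to Green's Law shoaling, amplitude proportional to h^(-1/4), as the reason raw deep-ocean amplitudes understate Mt. A worked sketch follows, using the 30 mm amplitude reported for the 2001 Peru tsunami; the 2000 m gauge depth and the 10 m coastal reference depth are assumptions, and coastal reflection and resonance (also noted in the abstract) are not represented by the shoaling factor alone.

```python
import math

def abe_tsunami_magnitude(amplitude_m):
    """Abe (1979): Mt = log10(H) + 9.1, with H the far-field amplitude in metres."""
    return math.log10(amplitude_m) + 9.1

def greens_law_amplitude(amplitude_m, depth_from_m, depth_to_m):
    """Green's Law shoaling: amplitude scales with water depth to the -1/4 power."""
    return amplitude_m * (depth_from_m / depth_to_m) ** 0.25

# 30 mm observed on an ocean-bottom pressure gauge, assumed to sit at ~2000 m depth;
# the 10 m coastal reference depth is likewise an assumption for illustration.
deep = 0.030
coastal = greens_law_amplitude(deep, depth_from_m=2000.0, depth_to_m=10.0)

print("Mt from raw deep-ocean amplitude:", round(abe_tsunami_magnitude(deep), 2))
print("Mt after Green's-Law shoaling   :", round(abe_tsunami_magnitude(coastal), 2))
```

The shoaled estimate comes out noticeably larger, which illustrates the abstract's point that the empirical relation cannot be applied to offshore records without a depth correction.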
From these observations, it is evident that IG surges may approach the coast as breaking bores with periods of minutes; a very tsunami-like character. Numerical simulations will be used to estimate flow elevations and speeds from potential IG surges, and these will be compared with similar values from tsunamis, over a range of different beach profiles. We will examine the relative rareness of each type of flooding event, which for large values of IG runup is a particularly challenging topic. For example, for a given runup elevation or flooding speed, the related tsunami return period may be longer than that associated with IG, implying that deposit information associated with such elevations or speeds are more likely to be caused by IG. Our purpose is to provide a statistical and physical discriminant between tsunami and IG, such that in areas exposed to both, a proper interpretation of overland transport, deposition, and damage is possible. 18. Community exposure to tsunami hazards in Hawai‘i Science.gov (United States) Jones, Jamie L.; Jamieson, Matthew R.; Wood, Nathan J. 2016-06-17 Hawai‘i has experienced numerous destructive tsunamis and the potential for future inundation has been described over the years using various historical events and scenarios. To support tsunami preparedness and risk-reduction planning in Hawai‘i, this study documents the variations among 91 coastal communities and 4 counties in the amounts, types, and percentages of developed land, residents, employees, community-support businesses, dependent-care facilities, public venues, and critical facilities in a composite extreme tsunami-inundation zone associated with two great Aleutian moment magnitude (Mw) 9.3 and 9.6 earthquake scenarios. These earthquake scenarios are considered to provide the maximum tsunami scenario for the Hawaiian Islands. According to 2010 U.S. Census Bureau data, the Hawai‘i extreme tsunami-inundation zone contains approximately 248,749 residents and 91,528 households (18 and 20 percent, respectively, of State totals). The residential population in tsunami-prone areas is racially diverse, with most residents identifying themselves as White (47 percent of the total exposed population), Asian (48 percent), or Native Hawaiian and Other Pacific Islander (29 percent), either alone or in combination with one or more other races (note that race categories do not sum to 100 percent because individuals were able to report multiple races in the 2010 U.S. Census). A total of 50,016 households are renter-occupied, making up 55 percent of total households in the extreme inundation zone. The extreme tsunami-inundation zone contains 18,693 businesses (37 percent of State totals) and 245,827 employees (42 percent of the State labor force). The employee population in the extreme tsunami-inundation zone is largely in the accommodation and food services and retail-trade sectors. Although occupancy values are not known for each facility, the extreme tsunami-inundation zone also contains numerous community-support businesses (for example, religious organizations 19. Lessons from the Tōhoku tsunami: A model for island avifauna conservation prioritization Science.gov (United States) Reynolds, Michelle H.; Berkowitz, Paul; Klavitter, John; Courtot, Karen 2017-01-01 Earthquake-generated tsunamis threaten coastal areas and low-lying islands with sudden flooding. 
Although human hazards and infrastructure damage have been well documented for tsunamis in recent decades, the effects on wildlife communities rarely have been quantified. We describe a tsunami that hit the world's largest remaining tropical seabird rookery and estimate the effects of sudden flooding on 23 bird species nesting on Pacific islands more than 3,800 km from the epicenter. We used global positioning systems, tide gauge data, and satellite imagery to quantify characteristics of the Tōhoku earthquake-generated tsunami (11 March 2011) and its inundation extent across four Hawaiian Islands. We estimated short-term effects of sudden flooding to bird communities using spatially explicit data from Midway Atoll and Laysan Island, Hawai'i. We describe variation in species vulnerability based on breeding phenology, nesting habitat, and life history traits. The tsunami inundated 21%–100% of each island's area at Midway Atoll and Laysan Island. Procellariformes (albatrosses and petrels) chick and egg losses exceeded 258,500 at Midway Atoll while albatross chick losses at Laysan Island exceeded 21,400. The tsunami struck at night and during the peak of nesting for 14 colonial seabird species. Strongly philopatric Procellariformes were vulnerable to the tsunami. Nonmigratory, endemic, endangered Laysan Teal (Anas laysanensis) were sensitive to ecosystem effects such as habitat changes and carcass-initiated epizootics of avian botulism, and its populations declined approximately 40% on both atolls post-tsunami. Catastrophic flooding of Pacific islands occurs periodically not only from tsunamis, but also from storm surge and rainfall; with sea-level rise, the frequency of sudden flooding events will likely increase. As invasive predators occupy habitat on higher elevation Hawaiian Islands and globally important avian populations are concentrated on low-lying islands 20. On The Computation Of The Best-fit Okada-type Tsunami Source Science.gov (United States) Miranda, J. M. A.; Luis, J. M. F.; Baptista, M. A. 2017-12-01 The forward simulation of earthquake-induced tsunamis usually assumes that the initial sea surface elevation mimics the co-seismic deformation of the ocean bottom described by a simple "Okada-type" source (rectangular fault with constant slip in a homogeneous elastic half space). This approach is highly effective, in particular in far-field conditions. With this assumption, and a given set of tsunami waveforms recorded by deep sea pressure sensors and (or) coastal tide stations it is possible to deduce the set of parameters of the Okada-type solution that best fits a set of sea level observations. To do this, we build a "space of possible tsunami sources-solution space". Each solution consists of a combination of parameters: earthquake magnitude, length, width, slip, depth and angles - strike, rake, and dip. To constrain the number of possible solutions we use the earthquake parameters defined by seismology and establish a range of possible values for each parameter. We select the "best Okada source" by comparison of the results of direct tsunami modeling using the solution space of tsunami sources. However, direct tsunami modeling is a time-consuming process for the whole solution space. To overcome this problem, we use a precomputed database of Empirical Green Functions to compute the tsunami waveforms resulting from unit water sources and search which one best matches the observations. 
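The best-fit Okada-type source entry above selects its solution by comparing precomputed Empirical Green Function waveforms against the observed records. A minimal sketch of that matching step is given below, with synthetic placeholder waveforms and a single least-squares amplitude factor per candidate source; it is an illustration of the search pattern, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder precomputed database: waveforms at 6 stations (rows) for each of
# 40 candidate unit sources, 512 time samples per waveform.
n_candidates, n_stations, n_samples = 40, 6, 512
green = rng.standard_normal((n_candidates, n_stations, n_samples))

# Pretend the "observed" records were produced by candidate 17 with 2.5 m of slip.
observed = 2.5 * green[17] + 0.05 * rng.standard_normal((n_stations, n_samples))

def best_fit_source(green, observed):
    """For each candidate, fit a single slip factor by least squares and keep
    the candidate with the smallest RMS waveform misfit."""
    best = None
    for k, g in enumerate(green):
        slip = float(np.sum(g * observed) / np.sum(g * g))
        rms = float(np.sqrt(np.mean((observed - slip * g) ** 2)))
        if best is None or rms < best[2]:
            best = (k, slip, rms)
    return best

print(best_fit_source(green, observed))   # expect candidate 17 with slip near 2.5
```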
In this study, we use as a test case the Solomon Islands tsunami of 6 February 2013 caused by a magnitude 8.0 earthquake. The "best Okada" source is the solution that best matches the tsunami recorded at six DART stations in the area. We discuss the differences between the initial seismic solution and the final one obtained from tsunami data This publication received funding of FCT-project UID/GEO/50019/2013-Instituto Dom Luiz. 1. Numerical simulation of transoceanic propagation and run-up of tsunami Energy Technology Data Exchange (ETDEWEB) Cho, Yong-Sik; Yoon Sung-Bum [Hanyang University, Seoul(Korea) 2001-04-30 The propagation and associated run-up process of tsunami are numerically investigated in this study. A transoceanic propagation model is first used to simulate the distant propagation of tsunamis. An inundation model is then employed to simulate the subsequent run-up process near coastline. A case study is done for the 1960 Chilean tsunami. A detailed maximum inundation map at Hilo Bay is obtained and compared with field observation and other numerical model, predictions. A very reasonable agreement is observed. (author). refs., tabs., figs. 2. A plan for safety evaluation of tsunamis at the Uljin nuclear power plant site International Nuclear Information System (INIS) Lee, H. K.; Lee, D. S. 1999-01-01 The sites of many nuclear and thermal power plants are located along the coast line to obtain necessary cooling water. Therefore, they are vulnerable to coastal disasters like tsunamis. The safety evaluation on tsunamis of the site of Uljin nuclear power plants was performed with the maximum potential earthquake magnitude and related fault parameters in 1986. But according to the results of recent research, the possibility was suggested that the earthquake which has bigger magnitude than was expected is likely to happen in the seismic gaps near Akita, Japan. Therefore, a plan for safety evaluation of tsunamis at the Uljin nuclear power plants was laid out 3. Development of Parallel Code for the Alaska Tsunami Forecast Model Science.gov (United States) Bahng, B.; Knight, W. R.; Whitmore, P. 2014-12-01 The Alaska Tsunami Forecast Model (ATFM) is a numerical model used to forecast propagation and inundation of tsunamis generated by earthquakes and other means in both the Pacific and Atlantic Oceans. At the U.S. National Tsunami Warning Center (NTWC), the model is mainly used in a pre-computed fashion. That is, results for hundreds of hypothetical events are computed before alerts, and are accessed and calibrated with observations during tsunamis to immediately produce forecasts. ATFM uses the non-linear, depth-averaged, shallow-water equations of motion with multiply nested grids in two-way communications between domains of each parent-child pair as waves get closer to coastal waters. Even with the pre-computation the task becomes non-trivial as sub-grid resolution gets finer. Currently, the finest resolution Digital Elevation Models (DEM) used by ATFM are 1/3 arc-seconds. With a serial code, large or multiple areas of very high resolution can produce run-times that are unrealistic even in a pre-computed approach. One way to increase the model performance is code parallelization used in conjunction with a multi-processor computing environment. NTWC developers have undertaken an ATFM code-parallelization effort to streamline the creation of the pre-computed database of results with the long term aim of tsunami forecasts from source to high resolution shoreline grids in real time. 
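The parallel-code entry above pre-computes results for hundreds of hypothetical events, a workload that is naturally parallel across scenarios. The sketch below shows that pattern with Python's multiprocessing; run_scenario is a hypothetical stand-in for a full ATFM-style model run, and the parameter list and returned quantity are invented for illustration.

```python
from multiprocessing import Pool
import math

def run_scenario(params):
    """Stand-in for one pre-computed model run: returns a scalar 'peak amplitude'
    derived from hypothetical source parameters (magnitude, source depth)."""
    magnitude, depth_km = params
    return {"Mw": magnitude, "depth_km": depth_km,
            "peak_amp_m": math.exp(magnitude - 7.5) / math.sqrt(depth_km)}

if __name__ == "__main__":
    scenarios = [(mw, d) for mw in (7.5, 8.0, 8.5, 9.0) for d in (10, 20, 30)]
    with Pool(processes=4) as pool:          # one worker per core, for example
        database = pool.map(run_scenario, scenarios)
    print(len(database), "pre-computed scenarios;", database[0])
```

Because the scenarios are independent, the speed-up scales roughly with the number of workers until shared I/O (writing the database, reading DEMs) becomes the bottleneck.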
Parallelization will also permit timely regeneration of the forecast model database with new DEMs; and, will make possible future inclusion of new physics such as the non-hydrostatic treatment of tsunami propagation. The purpose of our presentation is to elaborate on the parallelization approach and to show the compute speed increase on various multi-processor systems. 4. Evaluation of the impact of 1983 east sea tsunami at the site of Ulchin nuclear power plant International Nuclear Information System (INIS) Lee, H. K.; Lee, D. S.; Choi, S. H. 2001-01-01 In the past, we carried out the safety assessment study at the site of Ulchin NPP against tsunamis on the basis of maximum earthquake magnitude 7 3/4 and available tsunamigenic earthquake fault parameters. But, recently, based on the seismic gap theory some geologists and seismologists warned that the earthquakes with larger magnitude than was expected might occur in the East Sea region. And, the need of re-evaluation of safety is suggested. In this study, to investigate the applicability of a finite difference model, we simulated the 1983 East Sea Tsunami at the Imwon Harbor where the maximum run-up height of tsunami was observed. The general agreement was obtained in the viewpoint of maxium wave run-up height. Finally, we evaluated the rise and drop of sea water level at the site of Ulchin NPP and concluded that the site of Ulchin NPP is safe against tsunami of the same magnitude of 1983 East Sea Tsunami 5. Source mechanisms of volcanic tsunamis. Science.gov (United States) Paris, Raphaël 2015-10-28 Volcanic tsunamis are generated by a variety of mechanisms, including volcano-tectonic earthquakes, slope instabilities, pyroclastic flows, underwater explosions, shock waves and caldera collapse. In this review, we focus on the lessons that can be learnt from past events and address the influence of parameters such as volume flux of mass flows, explosion energy or duration of caldera collapse on tsunami generation. The diversity of waves in terms of amplitude, period, form, dispersion, etc. poses difficulties for integration and harmonization of sources to be used for numerical models and probabilistic tsunami hazard maps. In many cases, monitoring and warning of volcanic tsunamis remain challenging (further technical and scientific developments being necessary) and must be coupled with policies of population preparedness. © 2015 The Author(s). 6. TSUNAMI INFORMATION SOURCES PART 2 Directory of Open Access Journals (Sweden) Robert L. Wiegel 2006-01-01 Full Text Available Tsunami Information Sources (Robert L. Wiegel, University of California, Berkeley, CA, UCB/HEL 2005-1, 14 December 2005, 115 pages, is available in printed format, and on a diskette. It is also available in electronic format at the Water Resources Center Archives, University of California, Berkeley, CA http:www.lib.berkeley.edu/WRCA/tsunamis.htmland in the International Journal of The Tsunami Society, Science of Tsunami Hazards (Vol. 24, No. 2, 2006, pp 58-171 at http://www.sthjournal.org/sth6.htm.This is Part 2 of the report. It has two components. They are: 1.(Sections A and B. Sources added since the first report, and corrections to a few listed in the first report. 2.(Sections C and D. References from both the first report and this report, listed in two categories:Section C. Planning and engineering design for tsunami mitigation/protection; adjustments to the hazard; damage to structures and infrastructureSection D. 
Tsunami propagation nearshore; induced oscillations; runup/inundation (flooding and drawdown. 7. TSUNAMI DEPOSITS AT QUEEN’S BEACH, OAHU, HAWAII – INITIALRESULTS AND WAVE MODELING Directory of Open Access Journals (Sweden) Dr. Barbara Keating 2004-01-01 Full Text Available Photographs taken immediately after the 1946 Aleutian Tsunami inundated Queen’s Beach, southeastern Oahu, show the major highway around the island was inundated and the road bed was destroyed. That road bed remains visible today, in an undeveloped coastline that shows like change in sedimentary deposits between 1946 and today (based on photographic evidence. Tsunami catalog records however indicate that the beach was repeatedly inundated by tsunami in 1946, 1952, 1957, and 1960. Tsunami runup was reported to have reached between 3 and 11 m elevation. Eyewitness accounts however indicate inundations of up to 20 m in Kealakipapa Valley (Makapu’u Lookout during 1946 and photographic evidence indicated inundation reached 9 m in 1957. The inundation of Kealakipapa Valley has been successfully modeled using a 10-m tsunami wave model.A comparison of the modern beach deposits to those near the remains of the destroyed highway demonstrate that the sedimentary deposits within the two areas have very different rock characteristics. We conclude the modern beach is dominated by the rounding of rocks (mostly coral by wave activity. However, in the area that has experienced prior tsunami inundations, the rocks are characterized by fracturing and a high component of basaltic material. We conclude the area near the destroyed highway reflects past tsunami inundations combined with inevitable anthropogenic alteration. 8. Tsunami Source Inversion Using Tide Gauge and DART Tsunami Waveforms of the 2017 Mw8.2 Mexico Earthquake Science.gov (United States) Adriano, Bruno; Fujii, Yushiro; Koshimura, Shunichi; Mas, Erick; Ruiz-Angulo, Angel; Estrada, Miguel 2018-01-01 On September 8, 2017 (UTC), a normal-fault earthquake occurred 87 km off the southeast coast of Mexico. This earthquake generated a tsunami that was recorded at coastal tide gauge and offshore buoy stations. First, we conducted a numerical tsunami simulation using a single-fault model to understand the tsunami characteristics near the rupture area, focusing on the nearby tide gauge stations. Second, the tsunami source of this event was estimated from inversion of tsunami waveforms recorded at six coastal stations and three buoys located in the deep ocean. Using the aftershock distribution within 1 day following the main shock, the fault plane orientation had a northeast dip direction (strike = 320°, dip = 77°, and rake =-92°). The results of the tsunami waveform inversion revealed that the fault area was 240 km × 90 km in size with most of the largest slip occurring on the middle and deepest segments of the fault. The maximum slip was 6.03 m from a 30 × 30 km2 segment that was 64.82 km deep at the center of the fault area. The estimated slip distribution showed that the main asperity was at the center of the fault area. The second asperity with an average slip of 5.5 m was found on the northwest-most segments. The estimated slip distribution yielded a seismic moment of 2.9 × 10^{21} Nm (Mw = 8.24), which was calculated assuming an average rigidity of 7× 10^{10} N/m2. 9. Coastal Impacts of the March 11th Tsunami in the Galapagos Islands Science.gov (United States) Lynett, P. J.; Weiss, R.; Renteria, W. 
2011-12-01 On March 11, 2011 at 5:46:23 UTC (March 10 11:46:23 PM Local Time, Galapagos), the magnitude 9.0 Mw Great East Japan Earthquake occurred near the Tohoku region off the east coast of Japan. The purpose of this presentation is to provide the results of a tsunami field survey in the Galapagos Islands performed by an International Tsunami Survey Team (ITST) with great assistance from INOCAR, the oceanographic service of the Ecuadorian Navy, and the Galapagos National Park. The Galapagos Islands are a volcanic chain composed of many islands of various sizes. The four largest islands are the focus of this survey, and are, from west to east, Isabela, Santiagio, Santa Cruz, and San Cristobal. Aside from approximately 10 sandy beaches that are open to tourists, all other shoreline locations are strictly off limits to anyone without a research permit. All access to the shoreline is coordinated through the Galapagos National Park, and any landing requires a chaperone, a Park Ranger. While a few of the visited areas in this survey were tourist sites, the vast majority were not. Due to time constraints and a generally inaccessibility of the coastline, the survey locations were strongly guided by numerical computations performed previous to the surveys. This numerical guidance accurately predicted the regions of highest impact, as well as regions of relatively low impact. Tide-corrected maximum flow elevations were generally in the range of 3-4 meters, while Isabela experienced the largest flow elevation of 6 m in a small pocket beach. The largest harbor in the Islands, Puerto Ayora, experienced moderate damage, with significant flooding and some structural damage. Currents in the Baltra Channel, a small waterway between Santa Cruz and Baltra, were strong enough to transport navigation buoys distances greater than 800 m. Extreme dune erosion, and the associated destruction of sea turtle nesting habit, was widespread and noted on all of the islands visited. 10. Tsunami recurrence in the eastern Alaska-Aleutian arc: A Holocene stratigraphic record from Chirikof Island, Alaska Science.gov (United States) Nelson, Alan R.; Briggs, Richard; Dura, Tina; Engelhart, Simon E.; Gelfenbaum, Guy; Bradley, Lee-Ann; Forman, S.L.; Vane, Christopher H.; Kelley, K.A. 2015-01-01 Despite the role of the Alaska-Aleutian megathrust as the source of some of the largest earthquakes and tsunamis, the history of its pre–twentieth century tsunamis is largely unknown west of the rupture zone of the great (magnitude, M 9.2) 1964 earthquake. Stratigraphy in core transects at two boggy lowland sites on Chirikof Island’s southwest coast preserves tsunami deposits dating from the postglacial to the twentieth century. In a 500-m-long basin 13–15 m above sea level and 400 m from the sea, 4 of 10 sandy to silty beds in a 3–5-m-thick sequence of freshwater peat were probably deposited by tsunamis. The freshwater peat sequence beneath a gently sloping alluvial fan 2 km to the east, 5–15 m above sea level and 550 m from the sea, contains 20 sandy to silty beds deposited since 3.5 ka; at least 13 were probably deposited by tsunamis. Although most of the sandy beds have consistent thicknesses (over distances of 10–265 m), sharp lower contacts, good sorting, and/or upward fining typical of tsunami deposits, the beds contain abundant freshwater diatoms, very few brackish-water diatoms, and no marine diatoms. Apparently, tsunamis traveling inland over low dunes and boggy lowland entrained largely freshwater diatoms. 
Abundant fragmented diatoms, and lake species in some sandy beds not found in host peat, were probably transported by tsunamis to elevations of >10 m at the eastern site. Single-aliquot regeneration optically stimulated luminescence dating of the third youngest bed is consistent with its having been deposited by the tsunami recorded at Russian hunting outposts in 1788, and with the second youngest bed being deposited by a tsunami during an upper plate earthquake in 1880. We infer from stratigraphy, 14C-dated peat deposition rates, and unpublished analyses of the island's history that the 1938 tsunami may locally have reached an elevation of >10 m. As this is the first record of Aleutian tsunamis extending throughout the Holocene, we 11. A new survey method of tsunami inundation area using chemical analysis of soil. Application to the field survey on the 2010 Chilean tsunami at Chile International Nuclear Information System (INIS) Yoshii, Takumi; Matsuyama, Masafumi; Koshimura, Shunichi; Mas, Erick; Matsuoka, Masashi; Jimenez, Cesar 2011-01-01 The severe earthquake of Mw 8.8 occurred on 27 Feb. 2010 off the coast of central Chile. The tsunami generated by the earthquake struck the coast of Chile and propagated to coastlines across the Pacific Ocean. The field survey of the disaster damage due to the tsunami was conducted near Talcahuano in Chile to prepare for the great tsunamis accompanying the earthquakes predicted to occur near Japan within several decades. The aims of this field survey were to survey disaster damage especially relevant to electrical equipment and to develop a survey method based on chemical analysis of the inundated soil, which supplies objective data with high accuracy compared to conventional methods. In the survey area, the average of the inundation heights was 6 m; however, it locally reached up to 25 m. The maximum sea-level height of the series of tsunamis was recorded in the third or fourth wave (roughly 3 hours after the earthquake occurrence). The first floors of houses were severely destroyed and some ships were carried and left on land by the tsunamis. Furthermore, a large amount of sediment was deposited in towns. Removing the drifted ships and tsunami deposits is an important consideration for quick recovery from a tsunami disaster. Soil samples were obtained from both inundated and not-inundated positions. A stirred solution was made from the soil and ultrapure water; then the content of water-soluble ions, electric conductivity (EC), and pH were measured. The soil obtained in the tsunami-inundated area contains higher concentrations of water-soluble ions (Na⁺, Mg²⁺, Cl⁻, Br⁻, SO₄²⁻) than the samples obtained in the not-inundated area. The discriminant analysis of the tsunami inundation was conducted using the amount of ions in the soil. High discriminant accuracy (over 90%) was obtained with Na⁺, Mg²⁺, Cl⁻, Br⁻, SO₄²⁻ and EC. Br⁻, Cl⁻, Na⁺ are believed to be suitable for the discriminant analysis about tsunamis considering the contaminant 12. Characteristics and damage investigation of the 1998 Papua New Guinea earthquake tsunami International Nuclear Information System (INIS) Matsuyama, Masashi 1998-01-01 On 17 July, 1998, an earthquake with moment magnitude Mw 7.1 (estimated by Harvard Univ.) occurred at 18:49 (local time) in the northwest part of Papua New Guinea. 
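The soil-chemistry entry above discriminates inundated from non-inundated soils using water-soluble ion contents and electrical conductivity, reporting better than 90% accuracy. A minimal sketch of that kind of discriminant analysis is shown below, assuming scikit-learn is available; the ion concentrations are synthetic and purely illustrative, not the surveyed values.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic training set: six features per sample (Na+, Mg2+, Cl-, Br-, SO4 2-, EC).
# Inundated soils are given systematically higher values; the numbers are made up.
n = 60
not_inundated = rng.normal(loc=[10, 3, 15, 0.05, 5, 0.2],
                           scale=[3, 1, 4, 0.02, 2, 0.05], size=(n, 6))
inundated = rng.normal(loc=[60, 15, 90, 0.40, 25, 1.5],
                       scale=[20, 5, 30, 0.15, 8, 0.5], size=(n, 6))

X = np.vstack([not_inundated, inundated])
y = np.array([0] * n + [1] * n)              # 1 = tsunami-inundated soil

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)    # 5-fold discriminant accuracy
print("mean cross-validated accuracy:", round(scores.mean(), 3))
```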
Several minutes after the main shock, huge tsunami attacked the north coast of Sissano and Malol, where the coast is composed of straight beach with white sand, and about 7,000 people had lived in high floor wooden houses. Due to the tsunami, more than 2,000 people were killed. To investigate damage by the tsunami, a survey team of seven members was organized in Japan. The author took part in the survey team, which was headed by Prof. Kawata, of Kyoto University. We stayed in the Papua New Guinea from 30th July through 10th August 1998 to investigate the maximum water level, to interview the people about the phenomena caused by the earthquake and the tsunami, and to set three seismographs. These results imply that: (1) By main shock, an earthquake intensity of 6 on the Richter scale was felt in Sissano and Malol. In the coast area near Sissano and Malol, liquefaction took place. (2) More than 2,000 people were killed mainly due to the tsunami. (3) The maximum water level of the tsunami was about 15 m. (4) It seems that the tsunami caused not only by crustal movement, but also by other factors. This is suggested by the fact that the measured maximum water level was beyond 10 times larger than the estimated one, which was calculated by numerical simulation based on known fault parameters. It is highly probable that a submarine landslide was one of main factors which amplified the tsunami. (author) 13. Midway Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Midway Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a suite... 14. Yakutat Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Yakutat, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 15. Historical Tsunami Event Locations with Runups Data.gov (United States) Department of Homeland Security — The Global Historical Tsunami Database provides information on over 2,400 tsunamis from 2100 BC to the present in the the Atlantic, Indian, and Pacific Oceans; and... 16. Bermuda Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Bermuda Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 17. Coastal Tsunami and Risk Assessment for Eastern Mediterranean Countries Science.gov (United States) Kentel, E.; Yavuz, C. 2017-12-01 Tsunamis are rarely experienced events that have enormous potential to cause large economic destruction on the critical infrastructures and facilities, social devastation due to mass casualty, and environmental adverse effects like erosion, accumulation and inundation. Especially for the past two decades, nations have encountered devastating tsunami events. The aim of this study is to investigate risks along the Mediterranean coastline due to probable tsunamis based on simulations using reliable historical data. In order to do this, 50 Critical Regions, CRs, (i.e. city centers, agricultural areas and summer villages) and 43 Critical Infrastructures, CIs, (i.e. 
airports, ports & marinas and industrial structures) are determined to perform people-centered risk assessment along Eastern Mediterranean region covering 7 countries. These countries include Turkey, Syria, Lebanon, Israel, Egypt, Cyprus, and Libya. Bathymetry of the region is given in Figure 1. In this study, NAMI-DANCE is used to carry out tsunami simulations. Source of a sample tsunami simulation and maximum wave propagation in the study area for this sample tsunami are given in Figures 2 and 3, respectively.Richter magnitude,, focal depth, time of occurrence in a day and season are considered as the independent parameters of the earthquake. Historical earthquakes are used to generate reliable probability distributions for these parameters. Monte Carlo (MC) Simulations are carried out to evaluate overall risks at the coastline. Inundation level, population density, number of passenger or employee, literacy rate, annually income level and existence of human are used in risk estimations. Within each MC simulation and for each grid in the study area, people-centered tsunami risk for each of the following elements at risk is calculated: i. City centers ii. Agricultural areas iii. Summer villages iv. Ports and marinas v. Airports vi. Industrial structures Risk levels at each grid along the shoreline are 18. Washington Tsunami Hazard Mitigation Program Science.gov (United States) Walsh, T. J.; Schelling, J. 2012-12-01 Washington State has participated in the National Tsunami Hazard Mitigation Program (NTHMP) since its inception in 1995. We have participated in the tsunami inundation hazard mapping, evacuation planning, education, and outreach efforts that generally characterize the NTHMP efforts. We have also investigated hazards of significant interest to the Pacific Northwest. The hazard from locally generated earthquakes on the Cascadia subduction zone, which threatens tsunami inundation in less than hour following a magnitude 9 earthquake, creates special problems for low-lying accretionary shoreforms in Washington, such as the spits of Long Beach and Ocean Shores, where high ground is not accessible within the limited time available for evacuation. To ameliorate this problem, we convened a panel of the Applied Technology Council to develop guidelines for construction of facilities for vertical evacuation from tsunamis, published as FEMA 646, now incorporated in the International Building Code as Appendix M. We followed this with a program called Project Safe Haven (http://www.facebook.com/ProjectSafeHaven) to site such facilities along the Washington coast in appropriate locations and appropriate designs to blend with the local communities, as chosen by the citizens. This has now been completed for the entire outer coast of Washington. In conjunction with this effort, we have evaluated the potential for earthquake-induced ground failures in and near tsunami hazard zones to help develop cost estimates for these structures and to establish appropriate tsunami evacuation routes and evacuation assembly areas that are likely to to be available after a major subduction zone earthquake. We intend to continue these geotechnical evaluations for all tsunami hazard zones in Washington. 19. 
The 8 September 2017 Tsunami Triggered by the M w 8.2 Intraplate Earthquake, Chiapas, Mexico Science.gov (United States) Ramírez-Herrera, María Teresa; Corona, Néstor; Ruiz-Angulo, Angel; Melgar, Diego; Zavala-Hidalgo, Jorge 2018-01-01 The 8 September 2017, M w 8.2 earthquake offshore Chiapas, Mexico, is the largest earthquake in recorded history in Chiapas since 1902. It caused damage in the states of Oaxaca, Chiapas and Tabasco, including more than 100 fatalities, over 1.5 million people were affected, and 41,000 homes were damaged in the state of Chiapas alone. This earthquake, an intraplate event on a normal fault on the oceanic subducting plate, generated a tsunami recorded at several tide gauge stations in Mexico and on the Pacific Ocean. Here, we report the physical effects of the tsunami on the Chiapas coast and analyze the societal implications of this tsunami on the basis of our post-tsunami field survey. The associated tsunami waves were recorded first at Huatulco tide gauge station at 5:04 (GMT) 12 min after the earthquake. We covered ground observations along 41 km of the coast of Chiapas, encompassing the sites with the highest projected wave heights based on our preliminary tsunami model (maximum tsunami amplitudes between 94.5° and 93.0°W). Runup and inundation distances were measured along eight sites. The tsunami occurred at low tide. The maximum runup was 3 m at Boca del Cielo, and maximum inundation distance was 190 m in Puerto Arista, corresponding to the coast in front of the epicenter and in the central sector of the Gulf of Tehuantepec. Tsunami scour and erosion was evident along the Chiapas coast. Tsunami deposits, mainly sand, reached up to 32 cm thickness thinning landward up to 172 m distance. 20. Integrating Caribbean Seismic and Tsunami Hazard into Public Policy and Action Science.gov (United States) 2012-12-01 The Caribbean has a long history of tsunamis and earthquakes. Over the past 500 years, more than 80 tsunamis have been documented in the region by the NOAA National Geophysical Data Center. Almost 90% of all these historical tsunamis have been associated with earthquakes. Just since 1842, 3510 lives have been lost to tsunamis; this is more than in the Northeastern Pacific for the same time period. With a population of almost 160 million and a heavy concentration of residents, tourists, businesses and critical infrastructure along the Caribbean shores (especially in the northern and eastern Caribbean), the risk to lives and livelihoods is greater than ever before. Most of the countries also have a very high exposure to earthquakes. Given the elevated vulnerability, it is imperative that government officials take steps to mitigate the potentially devastating effects of these events. Nevertheless, given the low frequency of high impact earthquakes and tsunamis, in comparison to hurricanes, combined with social and economic considerations, the needed investments are not made and disasters like the 2010 Haiti earthquake occur. In the absence of frequent significant events, an important driving force for public officials to take action, is the dissemination of scientific studies. When papers of this nature have been published and media advisories issued, public officials demonstrate heightened interest in the topic which in turn can lead to increased legislation and funding efforts. This is especially the case if the material can be easily understood by the stakeholders and there is a local contact. 
In addition, given the close link between earthquakes and tsunamis, in Puerto Rico alone, 50% of the high impact earthquakes have also generated destructive tsunamis, it is very important that earthquake and tsunami hazards studies demonstrate consistency. Traditionally in the region, earthquake and tsunami impacts have been considered independently in the emergency planning 1. A Hybrid Tsunami Risk Model for Japan Science.gov (United States) Haseemkunju, A. V.; Smith, D. F.; Khater, M.; Khemici, O.; Betov, B.; Scott, J. 2014-12-01 Around the margins of the Pacific Ocean, denser oceanic plates slipping under continental plates cause subduction earthquakes generating large tsunami waves. The subducting Pacific and Philippine Sea plates create damaging interplate earthquakes followed by huge tsunami waves. It was a rupture of the Japan Trench subduction zone (JTSZ) and the resultant M9.0 Tohoku-Oki earthquake that caused the unprecedented tsunami along the Pacific coast of Japan on March 11, 2011. EQECAT's Japan Earthquake model is a fully probabilistic model which includes a seismo-tectonic model describing the geometries, magnitudes, and frequencies of all potential earthquake events; a ground motion model; and a tsunami model. Within the much larger set of all modeled earthquake events, fault rupture parameters for about 24000 stochastic and 25 historical tsunamigenic earthquake events are defined to simulate tsunami footprints using the numerical tsunami model COMCOT. A hybrid approach using COMCOT simulated tsunami waves is used to generate inundation footprints, including the impact of tides and flood defenses. Modeled tsunami waves of major historical events are validated against observed data. Modeled tsunami flood depths on 30 m grids together with tsunami vulnerability and financial models are then used to estimate insured loss in Japan from the 2011 tsunami. The primary direct report of damage from the 2011 tsunami is in terms of the number of buildings damaged by municipality in the tsunami affected area. Modeled loss in Japan from the 2011 tsunami is proportional to the number of buildings damaged. A 1000-year return period map of tsunami waves shows high hazard along the west coast of southern Honshu, on the Pacific coast of Shikoku, and on the east coast of Kyushu, primarily associated with major earthquake events on the Nankai Trough subduction zone (NTSZ). The highest tsunami hazard of more than 20m is seen on the Sanriku coast in northern Honshu, associated with the JTSZ. 2. Numerical modelling and evacuation strategies for tsunami awareness: lessons from the 2012 Haida Gwaii Tsunami OpenAIRE Santos, Angela; Tavares, Alexandre Oliveira; Queirós, Margarida 2016-01-01 On October 28, 2012, an earthquake occurred offshore Canada, with a magnitude Mw of 7.8, triggering a tsunami that propagated through the Pacific Ocean. The tsunami numerical model results show it would not be expected to generate widespread inundation on Hawaii. Yet, two hours after the earthquake, the Pacific Tsunami Warning Centre (PTWC) issued a tsunami warning to the state of Hawaii. Since the state was hit by several tsunamis in the past, regular siren exercises, tsuna... 3. Survey of the July 17, 2006 Central Javan tsunami reveals 21m runup heights Science.gov (United States) Fritz, H.; Goff, J.; Harbitz, C.; McAdoo, B.; Moore, A.; Latief, H.; Kalligeris, N.; Kodjo, W.; Uslu, B.; Titov, V.; Synolakis, C. 
2006-12-01 The Monday, July 17, 2006 Central Javan magnitude 7.7 earthquake triggered a substantial tsunami that killed 600 people along a 200 km stretch of coastline. The earthquake was not reported as felt along the coastline. While there was a warning issued by the PTWC, it did not trigger an evacuation warning (Synolakis, 2006). The Indian Ocean Tsunami Warning System, announced by UNESCO as operational in a press release two weeks before the event, did not function as promised. There were no seismic recordings transmitted to the PTWC, and two German tsunameter buoys had broken off their moorings and were not operational. Lifeguards along a tourist beach reported that while they observed the harbinger shoreline recession, they attributed it to the extreme storm waves that were pounding the beaches that day. Had the tsunami struck on the preceding Sunday, instead of Monday, the death toll would have been far higher. The International Tsunami Survey Team (ITST) surveyed the coastline measuring runup, inundation, flow depths and sediment deposition, with standard methods (Synolakis and Okal, 2004). Runup values ranged up to 21 m with several readings over 10 m, while sand sheets up to 15 cm thick were deposited. The parent earthquake was similar, albeit of smaller magnitude, to that of the 1994 East Javan tsunami, which struck about 200 km east (Synolakis et al., 1995) and reached a maximum of 11 m runup height at only one location on steep cliffs. The unusual distribution of runup heights, and the pronounced extreme values near Nusa Kambangan, suggest a local coseismic landslide may have triggered an additional tsunami (Okal and Synolakis, 2005). The ITST observed that many coastal villages were completely abandoned after the tsunami, even in locales where there were no casualties. Whether residents will return is uncertain, but it is clear that an education campaign in tsunami hazard mitigation is urgently needed. In the aftermath of the tsunami, the Government of Indonesia enforced urgent emergency preparedness 4. Reconnaissance Survey of the 29 September 2009 Tsunami on Tutuila Island, American Samoa Science.gov (United States) Fritz, H. M.; Borrero, J. C.; Okal, E.; Synolakis, C.; Weiss, R.; Jaffe, B. E.; Lynett, P. J.; Titov, V. V.; Foteinis, S.; Chan, I.; Liu, P. 2009-12-01 On 29 September 2009 a magnitude Mw 8.1 earthquake occurred 200 km southwest of American Samoa’s capital of Pago Pago and triggered a tsunami which caused substantial damage and loss of life in Samoa, American Samoa and Tonga. The most recent estimate is that the tsunami caused 189 fatalities, including 34 in American Samoa. This is the highest tsunami death toll on US territory since the 1964 great Alaskan earthquake and tsunami. PTWC responded and issued warnings soon after the earthquake but, because the tsunami arrived within 15 minutes at many locations, was too late to trigger evacuations. Fortunately, the people of Samoa knew to go to high ground after an earthquake because of education and tsunami evacuation exercises initiated throughout the South Pacific after a similar magnitude earthquake and tsunami struck the nearby Solomon Islands in 2007. A multi-disciplinary reconnaissance survey team was deployed within days of the event to document flow depths, runup heights, inundation distances, sediment deposition, damage patterns at various scales, and performance of the man-made infrastructure and impact on the natural environment. The 4 to 11 October 2009 ITST circled American Samoa’s main island Tutuila and the small nearby island of Aunu’u.
The American Samoa survey data includes nearly 200 runup and flow depth measurements on Tutuila Island. The tsunami impact peaked with maximum runup exceeding 17 m at Poloa, located 1.5 km northeast of Cape Taputapu, which marks Tutuila’s west tip. A significant variation in tsunami impact was observed on Tutuila. The tsunami runup reached 12 m at Fagasa near the center of Tutuila’s north coast and 9 m at Tula near Cape Matatula at the east end. Pago Pago, which is near the center of the south coast, represents an unfortunate example of a village and harbor that was located for protection from storm waves but is vulnerable to tsunami waves. The flow patterns inside Pago Pago harbor were characterized based on 5. Synthetic tsunamis along the Israeli coast. Science.gov (United States) Tobias, Joshua; Stiassnie, Michael 2012-04-13 The new mathematical model for tsunami evolution by Tobias & Stiassnie (Tobias & Stiassnie 2011 J. Geophys. Res. Oceans 116, C06026) is used to derive a synthetic tsunami database for the southern part of the Eastern Mediterranean coast. Information about coastal tsunami amplitudes, half-periods, currents and inundation levels is presented. 6. MODELING THE 1958 LITUYA BAY MEGA-TSUNAMI, II Directory of Open Access Journals (Sweden) 2002-01-01 Full Text Available Lituya Bay, Alaska is a T-shaped bay, 7 miles long and up to 2 miles wide. The two arms at the head of the bay, Gilbert and Crillon Inlets, are part of a trench along the Fairweather Fault. On July 8, 1958, a magnitude 7.5 earthquake occurred along the Fairweather fault with an epicenter near Lituya Bay. A mega-tsunami wave was generated that washed out trees to a maximum altitude of 520 meters at the entrance of Gilbert Inlet. Much of the rest of the shoreline of the Bay was denuded by the tsunami from 30 to 200 meters altitude. In the previous study it was determined that if the 520 meter high run-up was 50 to 100 meters thick, the observed inundation in the rest of Lituya Bay could be numerically reproduced. It was also concluded that further studies would require full Navier-Stokes modeling similar to that required for asteroid-generated tsunami waves. During the summer of 2000, Hermann Fritz conducted experiments that reproduced the Lituya Bay 1958 event. The laboratory experiments indicated that the 1958 Lituya Bay 524 meter run-up on the spur ridge of Gilbert Inlet could be caused by a landslide impact. The Lituya Bay impact-landslide-generated tsunami was modeled with the full Navier-Stokes AMR Eulerian compressible hydrodynamic code called SAGE, which includes the effect of gravity. 7. Observations and Impacts from the 2010 Chilean and 2011 Japanese Tsunamis in California (USA) Science.gov (United States) Wilson, Rick I.; Admire, Amanda R.; Borrero, Jose C.; Dengler, Lori A.; Legg, Mark R.; Lynett, Patrick; McCrink, Timothy P.; Miller, Kevin M.; Ritchie, Andy; Sterling, Kara; Whitmore, Paul M. 2013-06-01 The coast of California was significantly impacted by two recent teletsunami events, one originating off the coast of Chile on February 27, 2010 and the other off Japan on March 11, 2011. These tsunamis caused extensive inundation and damage along the coast of their respective source regions. For the 2010 tsunami, the NOAA West Coast/Alaska Tsunami Warning Center issued a state-wide Tsunami Advisory based on forecasted tsunami amplitudes ranging from 0.18 to 1.43 m with the highest amplitudes predicted for central and southern California.
For the 2011 tsunami, a Tsunami Warning was issued north of Point Conception and a Tsunami Advisory south of that location, with forecasted amplitudes ranging from 0.3 to 2.5 m, the highest expected for Crescent City. Because both teletsunamis arrived during low tide, the potential for significant inundation of dry land was greatly reduced during both events. However, both events created rapid water-level fluctuations and strong currents within harbors and along beaches, causing extensive damage in a number of harbors and challenging emergency managers in coastal jurisdictions. Field personnel were deployed prior to each tsunami to observe and measure physical effects at the coast. Post-event survey teams and questionnaires were used to gather information from both a physical effects and emergency response perspective. During the 2010 tsunami, a maximum tsunami amplitude of 1.2 m was observed at Pismo Beach, and over 3-million worth of damage to boats and docks occurred in nearly a dozen harbors, most significantly in Santa Cruz, Ventura, Mission Bay, and northern Shelter Island in San Diego Bay. During the 2011 tsunami, the maximum amplitude was measured at 2.47 m in Crescent City Harbor with over 50-million in damage to two dozen harbors. Those most significantly affected were Crescent City, Noyo River, Santa Cruz, Moss Landing, and southern Shelter Island. During both events, people on docks and near the ocean became at risk to 8. Hydraulic experiment on evaluation method of tsunami wave pressure using inundation depth and velocity in front of land structure International Nuclear Information System (INIS) Arimitsu, Tsuyoshi; Ooe, Kazuya; Kawasaki, Koji 2012-01-01 Hydraulic experiments were conducted to estimate tsunami wave pressure acting on several different types of land structures and examine the influence of a seawall in front of the structure on tsunami wave pressure. Wave pressures were measured at some points on the structure. The existing hydrostatic formula tended to underestimate tsunami wave pressure under the condition of inundation flow with large Froude number. Estimation method of tsunami wave pressure using inundation depth and horizontal velocity at the front of the structure was proposed based on the experimental results. It was confirmed from comparison with the experiments that the vertical distribution of the maximum tsunami wave pressure can be reproduced by employing the proposed method in this study. (author) 9. Chapter 3 – Phenomenology of Tsunamis: Statistical Properties from Generation to Runup Science.gov (United States) Geist, Eric L. 2015-01-01 Observations related to tsunami generation, propagation, and runup are reviewed and described in a phenomenological framework. In the three coastal regimes considered (near-field broadside, near-field oblique, and far field), the observed maximum wave amplitude is associated with different parts of the tsunami wavefield. The maximum amplitude in the near-field broadside regime is most often associated with the direct arrival from the source, whereas in the near-field oblique regime, the maximum amplitude is most often associated with the propagation of edge waves. In the far field, the maximum amplitude is most often caused by the interaction of the tsunami coda that develops during basin-wide propagation and the nearshore response, including the excitation of edge waves, shelf modes, and resonance. 
Statistical distributions that describe tsunami observations are also reviewed, both in terms of spatial distributions, such as coseismic slip on the fault plane and near-field runup, and temporal distributions, such as wave amplitudes in the far field. In each case, fundamental theories of tsunami physics are heuristically used to explain the observations. 10. Seaside, Oregon, Tsunami Vulnerability Assessment Pilot Study Science.gov (United States) Dunbar, P. K.; Dominey-Howes, D.; Varner, J. 2006-12-01 The results of a pilot study to assess the risk from tsunamis for the Seaside-Gearhart, Oregon region will be presented. To determine the risk from tsunamis, it is first necessary to establish the hazard or probability that a tsunami of a particular magnitude will occur within a certain period of time. Tsunami inundation maps that provide 100-year and 500-year probabilistic tsunami wave height contours for the Seaside-Gearhart, Oregon, region were developed as part of an interagency Tsunami Pilot Study(1). These maps provided the probability of the tsunami hazard. The next step in determining risk is to determine the vulnerability or degree of loss resulting from the occurrence of tsunamis due to exposure and fragility. The tsunami vulnerability assessment methodology used in this study was developed by M. Papathoma and others(2). This model incorporates multiple factors (e.g. parameters related to the natural and built environments and socio-demographics) that contribute to tsunami vulnerability. Data provided with FEMA's HAZUS loss estimation software and Clatsop County, Oregon, tax assessment data were used as input to the model. The results, presented within a geographic information system, reveal the percentage of buildings in need of reinforcement and the population density in different inundation depth zones. These results can be used for tsunami mitigation, local planning, and for determining post-tsunami disaster response by emergency services. (1) Tsunami Pilot Study Working Group, Seaside, Oregon Tsunami Pilot Study--Modernization of FEMA Flood Hazard Maps, Joint NOAA/USGS/FEMA Special Report, U.S. National Oceanic and Atmospheric Administration, U.S. Geological Survey, U.S. Federal Emergency Management Agency, 2006, Final Draft. (2) Papathoma, M., D. Dominey-Howes, Y. Zong, D. Smith, Assessing Tsunami Vulnerability, an example from Herakleio, Crete, Natural Hazards and Earth System Sciences, Vol. 3, 2003, p. 377-389. 11. Tsunami Forecast for Galapagos Islands Science.gov (United States) Renteria, W. 2012-04-01 The objective of this study is to present a model for the short-term and long-term tsunami forecast for the Galapagos Islands. For both cases, the ComMIT/MOST (Titov et al., 2011) numerical model and methodology have been used. The results of the short-term model have been compared with the data of Lynett et al. (2011), surveyed from the impacts of the March 11 event in the Galapagos Islands. For the long-term forecast, several scenarios were run across the Pacific and an extreme flooding map was obtained; the method is considered suitable for locations with little or no tsunami impact information that are nevertheless exposed to tsunami risk. 12. The Sri Lanka tsunami experience. Science.gov (United States) Yamada, Seiji; Gunatilake, Ravindu P; Roytman, Timur M; Gunatilake, Sarath; Fernando, Thushara; Fernando, Lalan 2006-01-01 The Indian Ocean tsunami of 2004 killed 31,000 people in Sri Lanka and produced morbidity primarily resulting from near-drownings and traumatic injuries.
In the immediate aftermath, the survivors brought bodies to the hospitals, which hampered the hospitals' operations. The fear of epidemics led to mass burials. Infectious diseases were prevented through the provision of clean water and through vector control. Months after the tsunami, little rebuilding of permanent housing was evident, and many tsunami victims continued to reside in transit camps without means of generating their own income. The lack of an incident command system, limited funding, and political conflicts were identified as barriers to optimal relief efforts. Despite these barriers, Sri Lanka was fortunate in drawing upon a well-developed community health infrastructure as well as local and international resources. The need continues for education and training in clinical skills for mass rescue and emergency treatment, as well as participation in a multidisciplinary response. Science.gov (United States) An important and often overlooked item that every early career researcher needs to do is compose an elevator talk. The elevator talk, named because the talk should not last longer than an average elevator ride (30 to 60 seconds), is an effective method to present your research and yourself in a clea... 14. Tsunami response system for ports in Korea Science.gov (United States) Cho, H.-R.; Cho, J.-S.; Cho, Y.-S. 2015-09-01 The tsunamis that have occurred in many places around the world over the past decade have taken a heavy toll on human lives and property. The eastern coast of the Korean Peninsula is not safe from tsunamis, particularly the eastern coastal areas, which have long sustained tsunami damage. The eastern coast had been attacked by 1983 and 1993 tsunami events. The aim of this study was to mitigate the casualties and property damage against unexpected tsunami attacks along the eastern coast of the Korean Peninsula by developing a proper tsunami response system for important ports and harbors with high population densities and high concentrations of key national industries. The system is made based on numerical and physical modelings of 3 historical and 11 virtual tsunamis events, field surveys, and extensive interviews with related people. 15. Analysis of Tsunami Culture in Countries Affected by Recent Tsunamis NARCIS (Netherlands) Esteban, M.; Tsimopoulou, V.; Shibayama, T.; Mikami, T.; Ohira, K. 2012-01-01 Since 2004 there is a growing global awareness of the risks that tsunamis pose to coastal communities. Despite the fact that these events were already an intrinsic part of the culture of some countries (such as Chile and Japan), in many other places they had been virtually unheard of before 2004. 16. Combining historical eyewitness accounts on tsunami-induced waves and numerical simulations for getting insights in uncertainty of source parameters Science.gov (United States) Rohmer, Jeremy; Rousseau, Marie; Lemoine, Anne; Pedreros, Rodrigo; Lambert, Jerome; benki, Aalae 2017-04-01 Recent tsunami events including the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami have caused many casualties and damages to structures. Advances in numerical simulation of tsunami-induced wave processes have tremendously improved forecast, hazard and risk assessment and design of early warning for tsunamis. Among the major challenges, several studies have underlined uncertainties in earthquake slip distributions and rupture processes as major contributor on tsunami wave height and inundation extent. 
Constraining these uncertainties can be performed by taking advantage of observations either on tsunami waves (using network of water level gauge) or on inundation characteristics (using field evidence and eyewitness accounts). Despite these successful applications, combining tsunami observations and simulations still faces several limitations when the problem is addressed for past tsunamis events like 1755 Lisbon. 1) While recent inversion studies can benefit from current modern networks (e.g., tide gauges, sea bottom pressure gauges, GPS-mounted buoys), the number of tide gauges can be very scarce and testimonies on tsunami observations can be limited, incomplete and imprecise for past tsunamis events. These observations often restrict to eyewitness accounts on wave heights (e.g., maximum reached wave height at the coast) instead of the full observed waveforms; 2) Tsunami phenomena involve a large span of spatial scales (from ocean basin scales to local coastal wave interactions), which can make the modelling very demanding: the computation time cost of tsunami simulation can be very prohibitive; often reaching several hours. This often limits the number of allowable long-running simulations for performing the inversion, especially when the problem is addressed from a Bayesian inference perspective. The objective of the present study is to overcome both afore-described difficulties in the view to combine historical observations on past tsunami-induced waves 17. Japan Tsunami Current Flows Observed by HF Radars on Two Continents Directory of Open Access Journals (Sweden) Toshiyuki Awaji 2011-08-01 Full Text Available Quantitative real-time observations of a tsunami have been limited to deep-water, pressure-sensor observations of changes in the sea surface elevation and observations of sea level fluctuations at the coast, which are essentially point measurements. Constrained by these data, models have been used for predictions and warning of the arrival of a tsunami, but to date no detailed verification of flow patterns nor area measurements have been possible. Here we present unique HF-radar area observations of the tsunami signal seen in current velocities as the wave train approaches the coast. Networks of coastal HF-radars are now routinely observing surface currents in many countries and we report clear results from five HF radar sites spanning a distance of 8,200 km on two continents following the magnitude 9.0 earthquake off Sendai, Japan, on 11 March 2011. We confirm the tsunami signal with three different methodologies and compare the currents observed with coastal sea level fluctuations at tide gauges. The distance offshore at which the tsunami can be detected, and hence the warning time provided, depends on the bathymetry: the wider the shallow continental shelf, the greater this time. Data from these and other radars around the Pacific rim can be used to further develop radar as an important tool to aid in tsunami observation and warning as well as post-processing comparisons between observation and model predictions. 18. Prototype Tsunami Evacuation Park in Padang, West Sumatra, Indonesia Science.gov (United States) Tucker, B. E.; Cedillos, V.; Deierlein, G.; Di Mauro, M.; Kornberg, K. 2012-12-01 Padang, Indonesia, a city of some 900,000 people, half of whom live close to the coast and within a five-meter elevation above sea level, has one of the highest tsunami risks in the world due to its close offshore thrust-fault seismic hazard, flat terrain and dense population. 
There is a high probability that a tsunami will strike the shores of Padang, flooding half of the area of the city, within the next 30 years. If that tsunami occurred today, it is estimated that several hundred thousand people would die, as they could not reach safe ground in the ~30 minute interval between the earthquake's occurrence and the tsunami's arrival. Padang's needs have been amply demonstrated: after earthquakes in 2007, 2009, 2011 and 2012, citizens, thinking that those earthquakes might cause a tsunami, tried to evacuate in cars and motorbikes, which created traffic jams, and most could not reach safe ground in 30 minutes. Since 2008, GeoHazards International (GHI) and Stanford University have studied a range of options for improving this situation, including ways to accelerate evacuation to high ground with pedestrian bridges and widened roads, and means of "vertical" evacuation in multi-story buildings, mosques, pedestrian overpasses, and Tsunami Evacuation Parks (TEPs), which are man-made hills with recreation facilities on top. TEPs proved most practical and cost-effective for Padang, given the available budget, technology and time. The Earth Observatory Singapore (EOS) developed an agent-based model that simulates pedestrian and vehicular evacuation to assess tsunami risk and risk reduction interventions in Southeast Asia. EOS applied this model to analyze the effectiveness in Padang of TEPs over other tsunami risk management approaches in terms of evacuation times and the number of people saved. The model shows that only ~24,000 people (20% of the total population) in the northern part of Padang can reach safe ground within 30 minutes, if people evacuate using cars and 19. Tsunami of 26 December 2004 Digital Repository Service at National Institute of Oceanography (India) In the absence of earlier studies, an attempt is made to identify the vulnerable areas of the Indian coast for the damages due to Tsunami based on an earlier study reported in the context of sea level rise due to greenhouse effect. It is inferred... 20. The Pacific tsunami warning system Science.gov (United States) Pararas-Carayannis, G. 1986-01-01 Of all natural disasters, tsunamis are among the most terrifying and complex phenomena, responsible for great loss of lives and vast destruction of property. Enormous destruction of coastal communities has taken place throughout the world by such great waves since the beginning of recorded history. 1. Nowcasting Earthquakes and Tsunamis Science.gov (United States) Rundle, J. B.; Turcotte, D. L. 2017-12-01 . As another application, we can define large rectangular regions of subduction zones and shallow depths to compute the progress of the fault zone towards the next major tsunami-genic earthquake. We can then rank the relative progress of the major subduction zones of the world through their cycles of large earthquakes using this method to determine which zones are most at risk. 2. Defining Tsunami Magnitude as Measure of Potential Impact Science.gov (United States) Titov, V. V.; Tang, L. 2016-12-01 The goal of tsunami forecast, as a system for predicting potential impact of a tsunami at coastlines, requires quick estimate of a tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. 
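As a rough, hedged illustration of the energy-based magnitude idea discussed in the "Defining Tsunami Magnitude" entry above: the potential energy of an initial sea-surface displacement field can be computed directly, and a logarithmic mapping then condenses it into a single magnitude-like number. The Python sketch below is not the scale proposed by the authors; the Gaussian source, the physical constants, and the plain log10 mapping are illustrative assumptions only.

import numpy as np

def tsunami_potential_energy(eta, dx, dy, rho=1025.0, g=9.81):
    """Potential energy (J) of an initial sea-surface displacement field.

    eta : 2-D array of surface displacement (m) on a uniform grid
    dx, dy : grid spacing (m)
    Standard result: E_p = 0.5 * rho * g * integral(eta^2 dA).
    """
    return 0.5 * rho * g * np.sum(eta ** 2) * dx * dy

def energy_magnitude(energy_joules):
    """Hypothetical logarithmic 'tsunami magnitude' from energy.

    This is NOT the scale described in the abstract above; it is only a
    placeholder log10 mapping showing how an energy estimate could be
    condensed into a single magnitude-like number.
    """
    return np.log10(energy_joules)

# Toy example: a 100 km x 100 km source region with a 1 m Gaussian uplift.
x = np.linspace(-50e3, 50e3, 201)
X, Y = np.meshgrid(x, x)
eta = 1.0 * np.exp(-(X ** 2 + Y ** 2) / (2 * (20e3) ** 2))
E = tsunami_potential_energy(eta, dx=x[1] - x[0], dy=x[1] - x[0])
print(f"E ~ {E:.3e} J, log10(E) = {energy_magnitude(E):.2f}")

Any operational scale built this way would still need calibration against historical events, which is exactly the inter-comparison the abstract describes.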
However, difficulties of estimating tsunami energy based on available tsunami measurements at coastal sea-level stations has carried significant uncertainties and has been virtually impossible in real time, before tsunami impacts coastlines. The slow process of tsunami magnitude estimates, including collection of vast amount of available coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scales in tsunami warning operations. Uncertainties of estimates made tsunami magnitudes difficult to use as universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy of tsunami impact estimates, since real-time seismic data is available of real-time processing and ample amount of seismic data is available for an elaborate post event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake and generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing robust tsunami magnitude that will be useful for tsunami warning as a quick estimate for tsunami impact and for post-event analysis as a universal scale for tsunamis inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present application of the magnitude analysis for several historical events for inter-comparison with existing methods. 3. Evolution of tsunami warning systems and products. Science.gov (United States) Bernard, Eddie; Titov, Vasily 2015-10-28 Each year, about 60 000 people and $4 billion (US$) in assets are exposed to the global tsunami hazard. Accurate and reliable tsunami warning systems have been shown to provide a significant defence for this flooding hazard. However, the evolution of warning systems has been influenced by two processes: deadly tsunamis and available technology. In this paper, we explore the evolution of science and technology used in tsunami warning systems, the evolution of their products using warning technologies, and offer suggestions for a new generation of warning products, aimed at the flooding nature of the hazard, to reduce future tsunami impacts on society. We conclude that coastal communities would be well served by receiving three standardized, accurate, real-time tsunami warning products, namely (i) tsunami energy estimate, (ii) flooding maps and (iii) tsunami-induced harbour current maps to minimize the impact of tsunamis. Such information would arm communities with vital flooding guidance for evacuations and port operations. The advantage of global standardized flooding products delivered in a common format is efficiency and accuracy, which leads to effectiveness in promoting tsunami resilience at the community level. © 2015 The Authors. 4. Evolution of tsunami warning systems and products Science.gov (United States) Bernard, Eddie; Titov, Vasily 2015-01-01 Each year, about 60 000 people and $4 billion (US$) in assets are exposed to the global tsunami hazard. Accurate and reliable tsunami warning systems have been shown to provide a significant defence for this flooding hazard. However, the evolution of warning systems has been influenced by two processes: deadly tsunamis and available technology. 
In this paper, we explore the evolution of science and technology used in tsunami warning systems, the evolution of their products using warning technologies, and offer suggestions for a new generation of warning products, aimed at the flooding nature of the hazard, to reduce future tsunami impacts on society. We conclude that coastal communities would be well served by receiving three standardized, accurate, real-time tsunami warning products, namely (i) tsunami energy estimate, (ii) flooding maps and (iii) tsunami-induced harbour current maps to minimize the impact of tsunamis. Such information would arm communities with vital flooding guidance for evacuations and port operations. The advantage of global standardized flooding products delivered in a common format is efficiency and accuracy, which leads to effectiveness in promoting tsunami resilience at the community level. PMID:26392620 5. Source properties of the 1998 July 17 Papua New Guinea tsunami based on tide gauge records Science.gov (United States) 2015-07-01 We analysed four newly retrieved tide gauge records of the 1998 July 17 Papua New Guinea (PNG) tsunami to study statistical and spectral properties of this tsunami. The four tide gauge records were from Lombrum (PNG), Rabaul (PNG), Malakal Island (Palau) and Yap Island (State of Yap) stations located 600-1450 km from the source. The tsunami registered a maximum trough-to-crest wave height of 3-9 cm at these gauges. Spectral analysis showed two dominant peaks at period bands of 2-4 and 6-20 min with a clear separation at the period of ˜5 min. We interpreted these peak periods as belonging to the landslide and earthquake sources of the PNG tsunami, respectively. Analysis of the tsunami waveforms revealed 12-17 min delay in landslide generation compared to the origin time of the main shock. Numerical simulations including this delay fairly reproduced the observed tide gauge records. This is the first direct evidence of the delayed landslide source of the 1998 PNG tsunami which was previously indirectly estimated from acoustic T-phase records. 6. Scientists Examine Challenges and Lessons From Japan's Earthquake and Tsunami Science.gov (United States) Showstack, Randy 2011-03-01 A week after the magnitude 9.0 great Tohoku earthquake and the resulting tragic and damaging tsunami of 11 March struck Japan, the ramifications continued, with a series of major aftershocks (as Eos went to press, there had been about 4 dozen with magnitudes greater than 6); the grim search for missing people—the death toll was expected to approximate 10,000; the urgent assistance needed for the more than 400,000 homeless and the 1 million people without water; and the frantic efforts to avert an environmental catastrophe at Japan's damaged Fukushima Daiichi Nuclear Power Station, about 225 kilometers northeast of Tokyo, where radiation was leaking. The earthquake offshore of Honshu in northeastern Japan (see Figure 1) was a plate boundary rupture along the Japan Trench subduction zone, with the source area of the earthquake estimated at 400-500 kilometers long with a maximum slip of 20 meters, determined through various means including Global Positioning System (GPS) and seismographic data, according to Kenji Satake, professor at the Earthquake Research Institute of the University of Tokyo. 
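A minimal sketch of the kind of tide-gauge spectral analysis described in the 1998 Papua New Guinea entry above, in which dominant period bands are read off a periodogram. The synthetic record, its periods and its amplitudes are invented for illustration; they are not the Lombrum, Rabaul, Malakal or Yap data.

import numpy as np
from scipy.signal import welch

# Synthetic 1-min sea-level record (cm) with two superposed wave trains,
# standing in for a detided tide-gauge residual.
dt_min = 1.0                        # sampling interval, minutes
t = np.arange(0, 6 * 60, dt_min)    # six hours of record
record = (3.0 * np.sin(2 * np.pi * t / 12.0)    # ~12-min band (earthquake-like)
          + 2.0 * np.sin(2 * np.pi * t / 3.0)   # ~3-min band (landslide-like)
          + np.random.normal(0, 0.5, t.size))   # instrumental noise

# Welch periodogram; frequencies come out in cycles per minute.
freq, psd = welch(record, fs=1.0 / dt_min, nperseg=256)
periods = 1.0 / freq[1:]            # skip the zero-frequency bin
dominant = periods[np.argsort(psd[1:])[::-1][:3]]
print("dominant periods (min):", np.round(dominant, 1))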
In some places the tsunami may have topped 7 meters—the maximum instrumental measurement at many coastal tide gauges—and some parts of the coastline may have been inundated more than 5 kilometers inland, Satake indicated. The International Tsunami Information Center (ITIC) noted that eyewitnesses reported that the highest tsunami waves were 13 meters high. Satake also noted that continuous GPS stations indicate that the coast near Sendai—which is 130 kilometers west of the earthquake and is the largest city in the Tohoku region of Honshu—moved more than 4 meters horizontally and subsided about 0.8 meter. 7. Tsunami vulnerability assessment in the western coastal belt in Sri Lanka Science.gov (United States) Ranagalage, M. M. 2017-12-01 The 26 December 2004 tsunami disaster caused massive loss of life, damage to coastal infrastructures and disruption to economic activities in the coastal belt of Sri Lanka. Tsunami vulnerability assessment is a requirement for disaster risk and vulnerability reduction. It plays a major role in identifying the extent and level of vulnerabilities to disasters within communities. There is a need for a clearer understanding of the disaster risk patterns and the factors contributing to them in different parts of the coastal belt. The main objective of this study is to carry out a tsunami vulnerability assessment of the Moratuwa Municipal Council area in Sri Lanka. We selected the Moratuwa area in consideration of its urbanization pattern and the tsunami hazard of the country. Different data sets, such as one-meter resolution LiDAR data, orthophotos, population and housing data, and a road layer, were employed in this study. We employed a tsunami vulnerability model for the 1,796 housing units located there, for a tsunami scenario with a maximum run-up of 8 meters. 86% of the total land area is affected by the tsunami in the 8-meter scenario. Additionally, building population data have been used to estimate the population in different vulnerability levels. The result shows that 32% of the buildings have an extremely critical vulnerability level, 46% have a critical vulnerability level, 22% have a high vulnerability level, and 1% have a moderate vulnerability. According to the population estimation model results, 18% reside in buildings with extremely critical vulnerability, 43% with critical vulnerability, 36% with high vulnerability and 3% with moderate vulnerability. The results of the study provide a clear picture of tsunami vulnerability. Outcomes of this analysis can be used as a valuable tool for urban planners to assess the risk and the extent of disaster risk reduction that could be achieved via suitable mitigation measures to manage the coastal belt in Sri Lanka. 8. Issues of tsunami hazard maps revealed by the 2011 Tohoku tsunami Science.gov (United States) Sugimoto, M. 2013-12-01 Tsunami scientists have been entrusted with the responsibility of helping select people's tsunami evacuation places after the 2011 Tohoku Tsunami in Japan. Many adults died outside the hazard zones shown on tsunami hazard maps, whereas students in Kamaishi city achieved a "miracle" by evacuating on their own judgment. Tsunami hazard maps were based on numerical models that assumed earthquakes smaller than the actual magnitude 9 event. How can we bridge the gap between hazard maps and future disasters? We have to discuss how to use tsunami numerical models more effectively in producing hazard maps. How should tsunami hazard maps be improved? Tsunami hazard maps should be revised to include the possibility of coseismic uplift or subsidence after earthquakes as well as social information.
The ground sank to 1.14 m below sea level in Ayukawa town, Tohoku. Research by the Ministry of Land, Infrastructure, Transport and Tourism shows that only around 10% of people in Japan know about tsunami hazard maps. However, people know their evacuation places (buildings) through drills experienced once a year, even though most did not know about the tsunami hazard map. We need wider dissemination of tsunami hazard information, together with the contingency of science (see the bottom disaster handbook material's URL). A California Emergency Management Agency (CEMA) team provides one practical example of good practice and a solution. I followed their field trip on Catalina Island, California, in September 2011. The team members are multidisciplinary specialists: a geologist, a GIS specialist, oceanographers from USC (tsunami numerical modelers) and a private company, a local policeman, a disaster manager, a local authority and so on. They check the field based on their own specialties. They conduct on-the-spot inspections of locations where the tsunami numerical model and present-day field conditions disagree. Data always become outdated. They pay attention not only to topographical conditions but also to social conditions: vulnerable people, elementary schools and so on. It takes a long time to check such field 9. Impact Forces from Tsunami-Driven Debris Science.gov (United States) Ko, H.; Cox, D. T.; Riggs, H.; Naito, C. J.; Kobayashi, M. H.; Piran Aghl, P. 2012-12-01 Debris driven by tsunami inundation flow has been known to be a significant threat to structures, yet we lack the constitutive equations necessary to predict debris impact force. The objective of this research project is to improve our understanding of, and predictive capabilities for, tsunami-driven debris impact forces on structures. Of special interest are shipping containers, which are virtually everywhere and which will float even when fully loaded. The forces from such debris hitting structures, for example evacuation shelters and critical port facilities such as fuel storage tanks, are currently not known. This research project focuses on the impact of flexible shipping containers on rigid columns, investigated using large-scale laboratory testing. Full-scale in-air collision experiments were conducted at Lehigh University with 20 ft shipping containers to experimentally quantify the nonlinear behavior of full scale shipping containers as they collide into structural elements. The results from the full scale experiments were used to calibrate computer models and to design a series of simpler, 1:5 scale wave flume experiments at Oregon State University. Scaled in-air collision tests were conducted using 1:5 scale idealized containers to mimic the container behavior observed in the full scale tests and to provide a direct comparison to the hydraulic model tests. Two specimens were constructed using different materials (aluminum, acrylic) to vary the stiffness. The collision tests showed that at higher speeds, the collision became inelastic as the slope of maximum impact force/velocity decreased with increasing velocity. Hydraulic model tests were conducted using the 1:5 scaled shipping containers to measure the impact load by the containers on a rigid column. The column was instrumented with a load cell to measure impact forces, strain gages to measure the column deflection, and a video camera was used to provide the debris orientation and speed. The 10.
Developing fragility functions for aquaculture rafts and eelgrass in the case of the 2011 Great East Japan tsunami Science.gov (United States) Suppasri, Anawat; Fukui, Kentaro; Yamashita, Kei; Leelawat, Natt; Ohira, Hiroyuki; Imamura, Fumihiko 2018-01-01 Since the two devastating tsunamis in 2004 (Indian Ocean) and 2011 (Great East Japan), new findings have emerged on the relationship between tsunami characteristics and damage in terms of fragility functions. Human loss and damage to buildings and infrastructures are the primary target of recovery and reconstruction; thus, such relationships for offshore properties and marine ecosystems remain unclear. To overcome this lack of knowledge, this study used the available data from two possible target areas (Mangokuura Lake and Matsushima Bay) from the 2011 Japan tsunami. This study has three main components: (1) reproduction of the 2011 tsunami, (2) damage investigation, and (3) fragility function development. First, the source models of the 2011 tsunami were verified and adjusted to reproduce the tsunami characteristics in the target areas. Second, the damage ratio (complete damage) of the aquaculture raft and eelgrass was investigated using satellite images taken before and after the 2011 tsunami through visual inspection and binarization. Third, the tsunami fragility functions were developed using the relationship between the simulated tsunami characteristics and the estimated damage ratio. Based on the statistical analysis results, fragility functions were developed for Mangokuura Lake, and the flow velocity was the main contributor to the damage instead of the wave amplitude. For example, the damage ratio above 0.9 was found to be equal to the maximum flow velocities of 1.3 m s−1 (aquaculture raft) and 3.0 m s−1 (eelgrass). This finding is consistent with the previously proposed damage criterion of 1 m s−1 for the aquaculture raft. This study is the first step in the development of damage assessment and planning for marine products and environmental factors to mitigate the effects of future tsunamis. 11. Developing fragility functions for aquaculture rafts and eelgrass in the case of the 2011 Great East Japan tsunami Directory of Open Access Journals (Sweden) A. Suppasri 2018-01-01 Full Text Available Since the two devastating tsunamis in 2004 (Indian Ocean) and 2011 (Great East Japan), new findings have emerged on the relationship between tsunami characteristics and damage in terms of fragility functions. Human loss and damage to buildings and infrastructures are the primary target of recovery and reconstruction; thus, such relationships for offshore properties and marine ecosystems remain unclear. To overcome this lack of knowledge, this study used the available data from two possible target areas (Mangokuura Lake and Matsushima Bay) from the 2011 Japan tsunami. This study has three main components: (1) reproduction of the 2011 tsunami, (2) damage investigation, and (3) fragility function development. First, the source models of the 2011 tsunami were verified and adjusted to reproduce the tsunami characteristics in the target areas. Second, the damage ratio (complete damage) of the aquaculture raft and eelgrass was investigated using satellite images taken before and after the 2011 tsunami through visual inspection and binarization. Third, the tsunami fragility functions were developed using the relationship between the simulated tsunami characteristics and the estimated damage ratio.
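To illustrate that final step, fragility functions of this kind are commonly modelled as a lognormal cumulative distribution function of a tsunami intensity measure and fitted to observed damage ratios. The short sketch below uses invented velocity-damage pairs rather than the Mangokuura Lake or Matsushima Bay data, so the fitted parameters are not those of the study; it only shows the general fitting procedure.

import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

# Illustrative (flow velocity, damage ratio) pairs -- NOT the study's data.
velocity = np.array([0.3, 0.6, 0.9, 1.2, 1.5, 2.0, 2.5])    # m/s
damage   = np.array([0.05, 0.20, 0.45, 0.70, 0.85, 0.95, 0.98])

def lognormal_fragility(v, median, beta):
    """P(damage | v) modelled as a lognormal CDF, a common choice for
    tsunami fragility functions."""
    return norm.cdf(np.log(v / median) / beta)

(median, beta), _ = curve_fit(lognormal_fragility, velocity, damage,
                              p0=[1.0, 0.5])
print(f"median velocity ~ {median:.2f} m/s, log-std beta ~ {beta:.2f}")
print("P(complete damage at 1.3 m/s) =",
      round(lognormal_fragility(1.3, median, beta), 2))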
Based on the statistical analysis results, fragility functions were developed for Mangokuura Lake, and the flow velocity was the main contributor to the damage instead of the wave amplitude. For example, the damage ratio above 0.9 was found to be equal to the maximum flow velocities of 1.3 m s−1 (aquaculture raft) and 3.0 m s−1 (eelgrass). This finding is consistent with the previously proposed damage criterion of 1 m s−1 for the aquaculture raft. This study is the first step in the development of damage assessment and planning for marine products and environmental factors to mitigate the effects of future tsunamis. 12. Effect of Nearshore Islands on Tsunami Inundation in Shadow Zones Science.gov (United States) Goertz, J.; Kaihatu, J. M.; Kalligeris, N.; Lynett, P. J.; Synolakis, C. 2017-12-01 Field surveys performed in the wake of the 2010 Mentawai tsunami event have described the belief of local residents that offshore islands serve as possible tsunami sheltering mechanisms, reducing the corresponding inundation on beaches behind the islands, despite the fact that deduced inundation from debris lines show this to be in fact untrue (Hill et al. 2012). Recent numerical model studies (Stefanakis et al. 2014) have shown that inundation levels on beaches behind conical islands are indeed higher than they are on open coastlines. While work has been done on tsunami amplification on the lee side of islands (Briggs et al. 1995), no work has been done concerning tsunami inundation on beach areas behind the islands. A series of experiments to address this were conducted in the Directional Wave Basin (DWB) at the O.H. Hinsdale Wave Research Laboratory at Oregon State University in summer 2016. A series of four sheet metal islands (two with a full conical section, two truncated at the water line) were placed at varying distances from the toe of a 1/10 sloping beach. Incident wave conditions consisting of solitary waves and full-stroke "dam break" waves were run over the islands. Free surface elevations, velocities, and beach runup were measured, with the intent of determining relationships between the wave condition, the island geometry and distance from the beach, and the tsunami characteristics. A series of runup measurements from a particular set of experiments can be seen in Figure 1. Based on these preliminary analyses, it was determined that: A) inundation was always amplified behind the island relative to areas outside this shadow zone; and B) inundation was generally highest with the island closest to the beach, except in the case where the tsunami wave broke prior to reaching the island. In this latter scenario, the inundation behind the island increased with island distance from the beach. The development of relationships between the inundation levels 13. IMPACT OF TSUNAMI 2004 IN COASTAL VILLAGES OF NAGAPATTINAM DISTRICT, INDIA Directory of Open Access Journals (Sweden) R. Kumaraperumal 2007-01-01 Full Text Available ABSTRACT A quake-triggered tsunami lashed the Nagapattinam coast of southern India on December 26, 2004 at around 9.00 am (IST). The tsunami caused heavy damage to houses, tourist resorts, fishing boats, prawn culture ponds, soil and crops, and consequently affected the livelihood of large numbers of the coastal communities. The study was carried out in the Tsunami affected villages in the coastal Nagapattinam with the help of remote sensing and geographical information science tools.
Through the use of the IRS 1D PAN and LISS 3 merged data and QuickBird images, it was found that 1,320 ha of agricultural and non-agricultural lands were affected by the tsunami. The lands were affected by soil erosion, salt deposition, water logging and other deposited sediments and debris. The maximum run-up height of 6.1 m and the maximum seawater inundation distance of 2.2 km were observed at Vadakkupoyyur village in coastal Nagapattinam. Pre- and post-tsunami surveys of soil quality showed an increase in pH and EC values, irrespective of distance from the sea. The water reaction was found to be in the alkaline range (> 8.00) in most of the wells. Salinity levels are greater than 4 dS m-1 in all the wells except the ring well. The effect of summer rainfall on soil and water quality showed the dilution of soluble salts. Pumping of water has reduced the salinity levels in the well water samples as well as in the open ponds. Following the 2004 event, it has become apparent that the relative tsunami hazard for coastal Nagapattinam needs to be known. So, tsunami hazard maps are generated using a geographical information systems (GIS) approach, and the results showed that 20.6 per cent, 63.7 per cent and 15.2 per cent of the study area fall under the high hazard, medium hazard and low hazard categories, respectively. 14. Tsunami Propagation Models Based on First Principles Science.gov (United States) 2012-11-21 geodesic lines from the epicenter shown in the figure are great circles with a longitudinal separation of 90°, which define a 'lune' that covers one...past which the waves begin to converge according to Model C. A tsunami propagating in this lune does not encounter any continental landmass until...2011 Japan tsunami in a lune of angle 90° with wavefronts at intervals of 5,000 km The 2011 Japan tsunami was felt throughout the Pacific Ocean 15. Development of Tsunami PSA method for Korean NPP site International Nuclear Information System (INIS) Kim, Min Kyu; Choi, In Kil; Park, Jin Hee 2010-01-01 A methodology of tsunami PSA was developed in this study. A tsunami PSA consists of tsunami hazard analysis, tsunami fragility analysis and system analysis. In the case of tsunami hazard analysis, evaluation of the tsunami return period is the major task. For the evaluation of the tsunami return period, numerical analysis and empirical methods can be applied. The method was applied to a nuclear power plant, Ulchin 56 NPP, which is located on the east coast of the Korean peninsula. Through this study, the whole tsunami PSA working procedure was established and an example calculation was performed for one real nuclear power plant in Korea 16. High resolution tsunami inversion for 2010 Chile earthquake Directory of Open Access Journals (Sweden) T.-R. Wu 2011-12-01 We investigate the feasibility of inverting high-resolution vertical seafloor displacement from tsunami waveforms. An inversion method named "SUTIM" (small unit tsunami inversion method) is developed to meet this goal. In addition to utilizing the conventional least-square inversion, this paper also enhances the inversion resolution by a Grid-Shifting method. A smooth constraint is adopted to gain stability. After a series of validation and performance tests, SUTIM is used to study the 2010 Chile earthquake. Based upon data quality and azimuthal distribution, we select tsunami waveforms from 6 GLOSS stations and 1 DART buoy record. In total, 157 sub-faults are utilized for the high-resolution inversion.
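A minimal sketch of a smoothness-constrained least-squares waveform inversion of the general type described here: unit-source waveforms for each sub-fault are stacked into a matrix and a roughness penalty is appended before solving. This is a generic illustration under stated assumptions, not an implementation of SUTIM or of its Grid-Shifting refinement, and the synthetic Green's functions below are random placeholders.

import numpy as np

def invert_source(G, d, smooth_weight=0.1):
    """Smoothness-constrained least-squares inversion.

    G : (n_obs, n_subfaults) matrix of unit-source waveform samples
    d : (n_obs,) observed tsunami waveform samples
    Solves min ||G m - d||^2 + w^2 ||L m||^2, where L is a first-difference
    roughness operator, the usual way a smoothing constraint is added to a
    linear inversion.
    """
    n = G.shape[1]
    L = np.diff(np.eye(n), axis=0)               # (n-1, n) roughness operator
    A = np.vstack([G, smooth_weight * L])
    b = np.concatenate([d, np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

# Toy example with synthetic unit-source responses (random placeholders).
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 12))                   # 12 sub-faults, 200 samples
true_m = np.clip(np.sin(np.linspace(0, np.pi, 12)), 0, None)
d = G @ true_m + rng.normal(0, 0.05, 200)
print(np.round(invert_source(G, d), 2))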
The resolution reaches 10 sub-faults per wavelength. The result is compared with the distribution of the aftershocks and waveforms at each gauge location with very good agreement. The inversion result shows that the source profile features a non-uniform distribution of the seafloor displacement. The highly elevated vertical seafloor is mainly concentrated in two areas: one is located in the northern part of the epicentre, between 34° S and 36° S; the other is in the southern part, between 37° S and 38° S. 17. High resolution tsunami inversion for 2010 Chile earthquake Science.gov (United States) Wu, T.-R.; Ho, T.-C. 2011-12-01 We investigate the feasibility of inverting high-resolution vertical seafloor displacement from tsunami waveforms. An inversion method named "SUTIM" (small unit tsunami inversion method) is developed to meet this goal. In addition to utilizing the conventional least-square inversion, this paper also enhances the inversion resolution by Grid-Shifting method. A smooth constraint is adopted to gain stability. After a series of validation and performance tests, SUTIM is used to study the 2010 Chile earthquake. Based upon data quality and azimuthal distribution, we select tsunami waveforms from 6 GLOSS stations and 1 DART buoy record. In total, 157 sub-faults are utilized for the high-resolution inversion. The resolution reaches 10 sub-faults per wavelength. The result is compared with the distribution of the aftershocks and waveforms at each gauge location with very good agreement. The inversion result shows that the source profile features a non-uniform distribution of the seafloor displacement. The highly elevated vertical seafloor is mainly concentrated in two areas: one is located in the northern part of the epicentre, between 34° S and 36° S; the other is in the southern part, between 37° S and 38° S. 18. Predicting location-specific extreme coastal floods in the future climate by introducing a probabilistic method to calculate maximum elevation of the continuous water mass caused by a combination of water level variations and wind waves Science.gov (United States) Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu 2017-04-01 Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the behaviour of the joint effect of sea level variations and wind waves is one of the means to make more comprehensive flooding hazard analysis, and may at first seem like a straightforward task to solve. Nevertheless, challenges and limitations such as availability of time series of the sea level and wave height components, the quality of data, significant locational variability of coastal wave height, as well as assumptions to be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to obtain a more accurate way to account for the waves when making flooding hazard analysis on the coast compared to the approach of adding a separate fixed wave action height on top of sea level -based flood risk estimates. 
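One simple way to realise the kind of combination described above is to discretise the two probability distributions on a common height grid and convolve them, assuming the still-water level and the wave run-up are independent. The independence assumption and the synthetic distribution shapes in the sketch below are simplifications, not the study's actual inputs or method.

import numpy as np

# Discretised annual-maximum distributions on a common 1 cm grid
# (synthetic shapes; the real method uses observed sea levels and
# buoy-derived run-up).
dz = 0.01                                # m
z = np.arange(0, 3.0, dz)
p_sea  = np.exp(-((z - 0.8) / 0.25) ** 2);  p_sea  /= p_sea.sum()
p_wave = np.exp(-z / 0.4);                  p_wave /= p_wave.sum()

# Distribution of the sum (sea level + wave run-up) = convolution.
p_total = np.convolve(p_sea, p_wave)
z_total = np.arange(p_total.size) * dz

# Exceedance probability and, e.g., the level exceeded once in 100 years
# (interpreting the combined distribution as an annual-maximum distribution).
exceed = 1.0 - np.cumsum(p_total)
level_100yr = z_total[np.searchsorted(-exceed, -1.0 / 100.0)]
print(f"~100-year 'green water' level: {level_100yr:.2f} m")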
As a result of our new method, we gain maximum elevation heights with different return periods of the continuous water mass caused by a combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on using theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we get information on how the different wave height conditions and shape of the wave height distribution influence the joint results. Our method presented here can be used as an advanced tool to minimize over- and 19. National Geophysical Data Center Tsunami Data Archive Science.gov (United States) Stroker, K. J.; Dunbar, P. K.; Brocko, R. 2008-12-01 NOAA's National Geophysical Data Center (NGDC) and co-located World Data Center for Geophysics and Marine Geology long-term tsunami data archive provides data and derived products essential for tsunami hazard assessment, forecast and warning, inundation modeling, preparedness, mitigation, education, and research. As a result of NOAA's efforts to strengthen its tsunami activities, the long-term tsunami data archive has grown from less than 5 gigabyte in 2004 to more than 2 terabytes in 2008. The types of data archived for tsunami research and operation activities have also expanded in fulfillment of the P.L. 109-424. The archive now consists of: global historical tsunami, significant earthquake and significant volcanic eruptions database; global tsunami deposits and proxies database; reference database; damage photos; coastal water-level data (i.e. digital tide gauge data and marigrams on microfiche); bottom pressure recorder (BPR) data as collected by Deep-ocean Assessment and Reporting of Tsunamis (DART) buoys. The tsunami data archive comes from a wide variety of data providers and sources. These include the NOAA Tsunami Warning Centers, NOAA National Data Buoy Center, NOAA National Ocean Service, IOC/NOAA International Tsunami Information Center, NOAA Pacific Marine Environmental Laboratory, U.S. Geological Survey, tsunami catalogs, reconnaissance reports, journal articles, newspaper articles, internet web pages, and email. NGDC has been active in the management of some of these data for more than 50 years while other data management efforts are more recent. These data are openly available, either directly on-line or by contacting NGDC. All of the NGDC tsunami and related databases are stored in a relational database management system. These data are accessible over the Web as tables, reports, and interactive maps. The maps provide integrated web-based GIS access to individual GIS layers including tsunami sources, tsunami effects, significant earthquakes 20. Holocene Tsunamis in Avachinsky Bay, Kamchatka, Russia Science.gov (United States) Pinegina, Tatiana K.; Bazanova, Lilya I.; Zelenin, Egor A.; Bourgeois, Joanne; Kozhurin, Andrey I.; Medvedev, Igor P.; Vydrin, Danil S. 2018-04-01 This article presents results of the study of tsunami deposits on the Avachinsky Bay coast, Kurile-Kamchatka island arc, NW Pacific. We used tephrochronology to assign ages to the tsunami deposits, to correlate them between excavations, and to restore paleo-shoreline positions. In addition to using established regional marker tephra, we establish a detailed tephrochronology for more local tephra from Avachinsky volcano. 
For the first time in this area, proximal to Kamchatka's primary population, we reconstruct the vertical runup and horizontal inundation for 33 tsunamis recorded over the past 4200 years, 5 of which are historical events - 1737, 1792, 1841, 1923 (Feb) and 1952. The runup heights for all 33 tsunamis range from 1.9 to 5.7 m, and inundation distances from 40 to 460 m. The average recurrence for historical events is 56 years and for the entire study period 133 years. The obtained data makes it possible to calculate frequencies of tsunamis by size, using reconstructed runup and inundation, which is crucial for tsunami hazard assessment and long-term tsunami forecasting. Considering all available data on the distribution of historical and paleo-tsunami heights along eastern Kamchatka, we conclude that the southern part of the Kamchatka subduction zone generates stronger tsunamis than its northern part. The observed differences could be associated with variations in the relative velocity and/or coupling between the downgoing Pacific Plate and Kamchatka. 1. Tsunami hazard map in eastern Bali Science.gov (United States) Afif, Haunan; Cipta, Athanasius 2015-04-01 Bali is a popular tourist destination both for Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and Eurasian Plate in the south and back-arc thrust off the northern coast of Bali resulted Bali prone to earthquake and tsunami. Tsunami hazard map is needed for better understanding of hazard level in a particular area and tsunami modeling is one of the most reliable techniques to produce hazard map. Tsunami modeling conducted using TUNAMI N2 and set for two tsunami sources scenarios which are subduction zone in the south of Bali and back thrust in the north of Bali. Tsunami hazard zone is divided into 3 zones, the first is a high hazard zones with inundation height of more than 3m. The second is a moderate hazard zone with inundation height 1 to 3m and the third is a low tsunami hazard zones with tsunami inundation heights less than 1m. Those 2 scenarios showed southern region has a greater potential of tsunami impact than the northern areas. This is obviously shown in the distribution of the inundated area in the south of Bali including the island of Nusa Penida, Nusa Lembongan and Nusa Ceningan is wider than in the northern coast of Bali although the northern region of the Nusa Penida Island more inundated due to the coastal topography. 2. Using GPS to Detect Imminent Tsunamis Science.gov (United States) Song, Y. Tony 2009-01-01 A promising method of detecting imminent tsunamis and estimating their destructive potential involves the use of Global Positioning System (GPS) data in addition to seismic data. Application of the method is expected to increase the reliability of global tsunami-warning systems, making it possible to save lives while reducing the incidence of false alarms. Tsunamis kill people every year. The 2004 Indian Ocean tsunami killed about 230,000 people. The magnitude of an earthquake is not always a reliable indication of the destructive potential of a tsunami. The 2004 Indian Ocean quake generated a huge tsunami, while the 2005 Nias (Indonesia) quake did not, even though both were initially estimated to be of the similar magnitude. Between 2005 and 2007, five false tsunami alarms were issued worldwide. Such alarms result in negative societal and economic effects. 
GPS stations can detect ground motions of earthquakes in real time, as frequently as every few seconds. In the present method, the epicenter of an earthquake is located by use of data from seismometers, then data from coastal GPS stations near the epicenter are used to infer sea-floor displacements that precede a tsunami. The displacement data are used in conjunction with local topographical data and an advanced theory to quantify the destructive potential of a tsunami on a new tsunami scale, based on the GPS-derived tsunami energy, much like the Richter Scale used for earthquakes. An important element of the derivation of the advanced theory was recognition that horizontal sea-floor motions contribute much more to generation of tsunamis than previously believed. The method produces a reliable estimate of the destructive potential of a tsunami within minutes typically, well before the tsunami reaches coastal areas. The viability of the method was demonstrated in computational tests in which the method yielded accurate representations of three historical tsunamis for which well-documented ground 3. Peru 2007 tsunami runup observations and modeling Science.gov (United States) Fritz, H. M.; Kalligeris, N.; Borrero, J. C. 2008-05-01 On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed in the immediate aftermath and investigated the tsunami effects at 51 sites. The largest runup heights were measured in a sparsely populated desert area south of the Paracas Peninsula resulting in only 3 tsunami fatalities. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the presence of the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. The coast of Peru has experienced numerous deadly and destructive tsunamis throughout history, which highlights the importance of ongoing tsunami awareness and education efforts in the region. The Peru tsunami is compared against recent mega-disasters such as the 2004 Indian Ocean tsunami and Hurricane Katrina. 4. Tsunami hazard map in eastern Bali International Nuclear Information System (INIS) Afif, Haunan; Cipta, Athanasius 2015-01-01 Bali is a popular tourist destination both for Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and Eurasian Plate in the south and back-arc thrust off the northern coast of Bali resulted Bali prone to earthquake and tsunami. Tsunami hazard map is needed for better understanding of hazard level in a particular area and tsunami modeling is one of the most reliable techniques to produce hazard map. Tsunami modeling conducted using TUNAMI N2 and set for two tsunami sources scenarios which are subduction zone in the south of Bali and back thrust in the north of Bali. Tsunami hazard zone is divided into 3 zones, the first is a high hazard zones with inundation height of more than 3m. The second is a moderate hazard zone with inundation height 1 to 3m and the third is a low tsunami hazard zones with tsunami inundation heights less than 1m. Those 2 scenarios showed southern region has a greater potential of tsunami impact than the northern areas. 
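The three-zone classification used in the Bali mapping above (high > 3 m, moderate 1 to 3 m, low < 1 m inundation height) is straightforward to apply to a modelled inundation-height raster. The grid values and the 30 m cell size in the sketch below are invented for illustration; TUNAMI N2 output would be substituted in practice.

```python
import numpy as np

# Synthetic inundation-height raster (m) standing in for TUNAMI N2 output;
# zero or near-zero cells simply fall into the "low" class in this sketch.
rng = np.random.default_rng(1)
h = rng.gamma(shape=1.5, scale=1.2, size=(500, 500))

zones = np.full(h.shape, "low", dtype="U8")
zones[h >= 1.0] = "moderate"
zones[h > 3.0] = "high"

cell_area_ha = (30.0 * 30.0) / 1e4            # assumed 30 m grid cells
for name in ("high", "moderate", "low"):
    n = int(np.sum(zones == name))
    print(f"{name:9s}: {100.0 * n / zones.size:5.1f}% of area, "
          f"{n * cell_area_ha:8.0f} ha")
```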
This is obviously shown in the distribution of the inundated area in the south of Bali including the island of Nusa Penida, Nusa Lembongan and Nusa Ceningan is wider than in the northern coast of Bali although the northern region of the Nusa Penida Island more inundated due to the coastal topography 5. Tsunami hazard map in eastern Bali Energy Technology Data Exchange (ETDEWEB) Afif, Haunan, E-mail: [email protected] [Geological Agency, Bandung (Indonesia); Cipta, Athanasius [Geological Agency, Bandung (Indonesia); Australian National University, Canberra (Australia) 2015-04-24 Bali is a popular tourist destination both for Indonesian and foreign visitors. However, Bali is located close to the collision zone between the Indo-Australian Plate and Eurasian Plate in the south and back-arc thrust off the northern coast of Bali resulted Bali prone to earthquake and tsunami. Tsunami hazard map is needed for better understanding of hazard level in a particular area and tsunami modeling is one of the most reliable techniques to produce hazard map. Tsunami modeling conducted using TUNAMI N2 and set for two tsunami sources scenarios which are subduction zone in the south of Bali and back thrust in the north of Bali. Tsunami hazard zone is divided into 3 zones, the first is a high hazard zones with inundation height of more than 3m. The second is a moderate hazard zone with inundation height 1 to 3m and the third is a low tsunami hazard zones with tsunami inundation heights less than 1m. Those 2 scenarios showed southern region has a greater potential of tsunami impact than the northern areas. This is obviously shown in the distribution of the inundated area in the south of Bali including the island of Nusa Penida, Nusa Lembongan and Nusa Ceningan is wider than in the northern coast of Bali although the northern region of the Nusa Penida Island more inundated due to the coastal topography. 6. Holocene Tsunamis in Avachinsky Bay, Kamchatka, Russia Science.gov (United States) Pinegina, Tatiana K.; Bazanova, Lilya I.; Zelenin, Egor A.; Bourgeois, Joanne; Kozhurin, Andrey I.; Medvedev, Igor P.; Vydrin, Danil S. 2018-03-01 This article presents results of the study of tsunami deposits on the Avachinsky Bay coast, Kurile-Kamchatka island arc, NW Pacific. We used tephrochronology to assign ages to the tsunami deposits, to correlate them between excavations, and to restore paleo-shoreline positions. In addition to using established regional marker tephra, we establish a detailed tephrochronology for more local tephra from Avachinsky volcano. For the first time in this area, proximal to Kamchatka's primary population, we reconstruct the vertical runup and horizontal inundation for 33 tsunamis recorded over the past 4200 years, 5 of which are historical events - 1737, 1792, 1841, 1923 (Feb) and 1952. The runup heights for all 33 tsunamis range from 1.9 to 5.7 m, and inundation distances from 40 to 460 m. The average recurrence for historical events is 56 years and for the entire study period 133 years. The obtained data makes it possible to calculate frequencies of tsunamis by size, using reconstructed runup and inundation, which is crucial for tsunami hazard assessment and long-term tsunami forecasting. Considering all available data on the distribution of historical and paleo-tsunami heights along eastern Kamchatka, we conclude that the southern part of the Kamchatka subduction zone generates stronger tsunamis than its northern part. 
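The frequency-by-size calculation mentioned in the Avachinsky entry above can be sketched as an empirical exceedance rate over the length of the geological record. The runup values below are placeholders drawn only from the reported 1.9 to 5.7 m range and 33-event count; they are not the actual event list.

```python
import numpy as np

record_years = 4200.0
# Placeholder runups (m) for 33 events within the reported 1.9-5.7 m range.
rng = np.random.default_rng(2)
runups = np.sort(rng.uniform(1.9, 5.7, 33))[::-1]

# Empirical annual rate of exceeding each runup value: counts / record length.
rates = np.arange(1, runups.size + 1) / record_years

for r, lam in zip(runups[::8], rates[::8]):
    print(f"runup >= {r:3.1f} m : {lam:.5f}/yr (mean recurrence ~{1/lam:5.0f} yr)")
```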
The observed differences could be associated with variations in the relative velocity and/or coupling between the downgoing Pacific Plate and Kamchatka. 7. Book review: Physics of tsunamis Science.gov (United States) Geist, Eric L. 2017-01-01 “Physics of Tsunamis”, second edition, provides a comprehensive analytical treatment of the hydrodynamics associated with the tsunami generation process. The book consists of seven chapters covering 388 pages. Because the subject matter within each chapter is distinct, an abstract appears at the beginning and references appear at the end of each chapter, rather than at the end of the book. Various topics of tsunami physics are examined largely from a theoretical perspective, although there is little information on how the physical descriptions are applied in numerical models.“Physics of Tsunamis”, by B. W. Levin and M. A. Nosov, Second Edition, Springer, 2016; ISBN-10: 33-1933106X, ISBN-13: 978-331933-1065 8. Tsunami Amplitude Estimation from Real-Time GNSS. Science.gov (United States) Jeffries, C.; MacInnes, B. T.; Melbourne, T. I. 2017-12-01 Tsunami early warning systems currently comprise modeling of observations from the global seismic network, deep-ocean DART buoys, and a global distribution of tide gauges. While these tools work well for tsunamis traveling teleseismic distances, saturation of seismic magnitude estimation in the near field can result in significant underestimation of tsunami excitation for local warning. Moreover, DART buoy and tide gauge observations cannot be used to rectify the underestimation in the available time, typically 10-20 minutes, before local runup occurs. Real-time GNSS measurements of coseismic offsets may be used to estimate finite faulting within 1-2 minutes and, in turn, tsunami excitation for local warning purposes. We describe here a tsunami amplitude estimation algorithm; implemented for the Cascadia subduction zone, that uses continuous GNSS position streams to estimate finite faulting. The system is based on a time-domain convolution of fault slip that uses a pre-computed catalog of hydrodynamic Green's functions generated with the GeoClaw shallow-water wave simulation software and maps seismic slip along each section of the fault to points located off the Cascadia coast in 20m of water depth and relies on the principle of the linearity in tsunami wave propagation. The system draws continuous slip estimates from a message broker, convolves the slip with appropriate Green's functions which are then superimposed to produce wave amplitude at each coastal location. The maximum amplitude and its arrival time are then passed into a database for subsequent monitoring and display. We plan on testing this system using a suite of synthetic earthquakes calculated for Cascadia whose ground motions are simulated at 500 existing Cascadia GPS sites, as well as real earthquakes for which we have continuous GNSS time series and surveyed runup heights, including Maule, Chile 2010 and Tohoku, Japan 2011. This system has been implemented in the CWU Geodesy Lab for the Cascadia 9. Tsunami Loss Assessment For Istanbul Science.gov (United States) Hancilar, Ufuk; Cakti, Eser; Zulfikar, Can; Demircioglu, Mine; Erdik, Mustafa 2010-05-01 Tsunami risk and loss assessment incorporating with the inundation mapping in Istanbul and the Marmara Sea region are presented in this study. The city of Istanbul is under the threat of earthquakes expected to originate from the Main Marmara branch of North Anatolian Fault System. 
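The Green's-function superposition described in the Cascadia GNSS entry above (entry 8) reduces, for a single coastal point, to scaling and summing pre-computed unit-slip responses. The sketch below uses synthetic response curves and a hypothetical slip vector; it is not the CWU or GeoClaw implementation.

```python
import numpy as np

dt = 5.0                                    # s, sampling of the pre-computed responses
n_sub, n_t = 40, 3600                       # sub-faults along the margin, time samples
t = np.arange(n_t) * dt

# green[i] = coastal water level (m) from 1 m of slip on sub-fault i, pre-computed
# offline with a shallow-water solver in the real system; a synthetic pulse here.
green = np.array([0.02 * np.exp(-((t - 1200.0 - 15.0 * i) / 300.0) ** 2)
                  for i in range(n_sub)])

slip = np.zeros(n_sub)
slip[10:25] = 8.0                           # hypothetical slip estimate (m) from GNSS

# Linearity of tsunami propagation allows scaling and superimposing unit responses.
eta = slip @ green                          # predicted water level time series (m)

i_max = int(np.argmax(eta))
print(f"max amplitude {eta[i_max]:.2f} m, arriving ~{t[i_max] / 60:.0f} min after rupture")
```

Pre-computing the unit responses is what keeps this kind of estimate fast enough for local warning.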
In the Marmara region the earthquake hazard reached very high levels with 2% annual probability of occurrence of a magnitude 7+ earthquake on the Main Marmara Fault. Istanbul is the biggest city of Marmara region as well as of Turkey with its almost 12 million inhabitants. It is home to 40% of the industrial facilities in Turkey and operates as the financial and trade hub of the country. Past earthquakes have evidenced that the structural reliability of residential and industrial buildings, as well as that of lifelines including port and harbor structures in the country is questionable. These facts make the management of earthquake risks imperative for the reduction of physical and socio-economic losses. The level of expected tsunami hazard in Istanbul is low as compared to earthquake hazard. Yet the assets at risk along the shores of the city make a thorough assessment of tsunami risk imperative. Important residential and industrial centres exist along the shores of the Marmara Sea. Particularly along the northern and eastern shores we see an uninterrupted settlement pattern with industries, businesses, commercial centres and ports and harbours in between. Following the inundation maps resulting from deterministic and probabilistic tsunami hazard analyses, vulnerability and risk analyses are presented and the socio-economic losses are estimated. This study is part of EU-supported FP6 project ‘TRANSFER'. 10. Getting out of harm's way - evacuation from tsunamis Science.gov (United States) Jones, Jeanne M.; Wood, Nathan J.; Gordon, Leslie C. 2015-01-01 Scientists at the U.S. Geological Survey (USGS) have developed a new mapping tool, the Pedestrian Evacuation Analyst, for use by researchers and emergency managers to estimate how long it would take for someone to travel on foot out of a tsunami-hazard zone. The ArcGIS software extension, released in September 2014, allows the user to create maps showing travel times out of hazard zones and to determine the number of people that may or may not have enough time to evacuate. The maps take into account the elevation changes and the different types of land cover that a person would encounter along the way. 11. The El Salvador and Philippines Tsunamis of August 2012: Insights from Sea Level Data Analysis and Numerical Modeling Science.gov (United States) 2014-12-01 We studied two tsunamis from 2012, one generated by the El Salvador earthquake of 27 August ( Mw 7.3) and the other generated by the Philippines earthquake of 31 August ( Mw 7.6), using sea level data analysis and numerical modeling. For the El Salvador tsunami, the largest wave height was observed in Baltra, Galapagos Islands (71.1 cm) located about 1,400 km away from the source. The tsunami governing periods were around 9 and 19 min. Numerical modeling indicated that most of the tsunami energy was directed towards the Galapagos Islands, explaining the relatively large wave height there. For the Philippines tsunami, the maximum wave height of 30.5 cm was observed at Kushimoto in Japan located about 2,700 km away from the source. The tsunami governing periods were around 8, 12 and 29 min. Numerical modeling showed that a significant part of the far-field tsunami energy was directed towards the southern coast of Japan. 
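The governing tsunami periods quoted in the 2012 El Salvador and Philippines entry above are typically read from the amplitude spectrum of a de-tided gauge record. A generic sketch follows, using a synthetic two-component signal in which 19-minute and 9-minute periods are planted so the output can be checked.

```python
import numpy as np

dt = 60.0                                   # s, one-minute gauge sampling
t = np.arange(0.0, 6 * 3600.0, dt)          # six hours of de-tided record
rng = np.random.default_rng(3)
eta = (0.30 * np.sin(2 * np.pi * t / (19 * 60))   # planted 19-min component
       + 0.15 * np.sin(2 * np.pi * t / (9 * 60))  # planted 9-min component
       + 0.02 * rng.normal(size=t.size))

spec = np.abs(np.fft.rfft(eta * np.hanning(eta.size)))
freq = np.fft.rfftfreq(eta.size, d=dt)

peaks = np.argsort(spec[1:])[::-1][:3] + 1        # strongest bins, ignoring the mean
print("dominant periods (min):", np.round(1.0 / freq[peaks] / 60.0, 1))
```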
Fourier and wavelet analyses as well as numerical modeling suggested that the dominant period of the first wave at stations normal to the fault strike is related to the fault width, while the period of the first wave at stations in the direction of fault strike is representative of the fault length. 12. Impact of a 1755-like tsunami in Huelva, Spain Directory of Open Access Journals (Sweden) V. V. Lima 2010-01-01 Full Text Available Coastal areas are highly exposed to natural hazards associated with the sea. In all cases where there is historical evidence for devastating tsunamis, as is the case of the southern coasts of the Iberian Peninsula, there is a need for quantitative hazard tsunami assessment to support spatial planning. Also, local authorities must be able to act towards the population protection in a preemptive way, to inform "what to do" and "where to go" and in an alarm, to make people aware of the incoming danger. With this in mind, we investigated the inundation extent, run-up and water depths, of a 1755-like event on the region of Huelva, located on the Spanish southwestern coast, one of the regions that was affected in the past by several high energy events, as proved by historical documents and sedimentological data. Modelling was made with a slightly modified version of the COMCOT (Cornell Multi-grid Coupled Tsunami Model code. Sensitivity tests were performed for a single source in order to understand the relevance and influence of the source parameters in the inundation extent and the fundamental impact parameters. We show that a 1755-like event will have a dramatic impact in a large area close to Huelva inundating an area between 82 and 92 km2 and reaching maximum run-up around 5 m. In this sense our results show that small variations on the characteristics of the tsunami source are not too significant for the impact assessment. We show that the maximum flow depth and the maximum run-up increase with the average slip on the source, while the strike of the fault is not a critical factor as Huelva is significantly far away from the potential sources identified up to now. We also show that the maximum flow depth within the inundated area is very dependent on the tidal level, while maximum run-up is less affected, as a consequence of the complex morphology of the area. 13. A short history of tsunami research and countermeasures in Japan. Science.gov (United States) Shuto, Nobuo; Fujima, Koji 2009-01-01 The tsunami science and engineering began in Japan, the country the most frequently hit by local and distant tsunamis. The gate to the tsunami science was opened in 1896 by a giant local tsunami of the highest run-up height of 38 m that claimed 22,000 lives. The crucial key was a tide record to conclude that this tsunami was generated by a "tsunami earthquake". In 1933, the same area was hit again by another giant tsunami. A total system of tsunami disaster mitigation including 10 "hard" and "soft" countermeasures was proposed. Relocation of dwelling houses to high ground was the major countermeasures. The tsunami forecasting began in 1941. In 1960, the Chilean Tsunami damaged the whole Japanese Pacific coast. The height of this tsunami was 5-6 m at most. The countermeasures were the construction of structures including the tsunami breakwater which was the first one in the world. Since the late 1970s, tsunami numerical simulation was developed in Japan and refined to become the UNESCO standard scheme that was transformed to 22 different countries. 
In 1983, photos and videos of a tsunami in the Japan Sea revealed many faces of tsunami such as soliton fission and edge bores. The 1993 tsunami devastated a town protected by seawalls 4.5 m high. This experience introduced again the idea of comprehensive countermeasures, consisted of defense structure, tsunami-resistant town development and evacuation based on warning. 14. Tsunami disaster risk management capabilities in Greece Science.gov (United States) Marios Karagiannis, Georgios; Synolakis, Costas 2015-04-01 Greece is vulnerable to tsunamis, due to the length of the coastline, its islands and its geographical proximity to the Hellenic Arc, an active subduction zone. Historically, about 10% of all world tsunamis occur in the Mediterranean region. Here we review existing tsunami disaster risk management capabilities in Greece. We analyze capabilities across the disaster management continuum, including prevention, preparedness, response and recovery. Specifically, we focus on issues like legal requirements, stakeholders, hazard mitigation practices, emergency operations plans, public awareness and education, community-based approaches and early-warning systems. Our research is based on a review of existing literature and official documentation, on previous projects, as well as on interviews with civil protection officials in Greece. In terms of tsunami disaster prevention and hazard mitigation, the lack of tsunami inundation maps, except for some areas in Crete, makes it quite difficult to get public support for hazard mitigation practices. Urban and spatial planning tools in Greece allow the planner to take into account hazards and establish buffer zones near hazard areas. However, the application of such ordinances at the local and regional levels is often difficult. Eminent domain is not supported by law and there are no regulatory provisions regarding tax abatement as a disaster prevention tool. Building codes require buildings and other structures to withstand lateral dynamic earthquake loads, but there are no provisions for resistance to impact loading from water born debris Public education about tsunamis has increased during the last half-decade but remains sporadic. In terms of disaster preparedness, Greece does have a National Tsunami Warning Center (NTWC) and is a Member of UNESCO's Tsunami Program for North-eastern Atlantic, the Mediterranean and connected seas (NEAM) region. Several exercises have been organized in the framework of the NEAM Tsunami Warning 15. Tsunamis generated by unconfined deformable granular landslides in various topographic configurations Science.gov (United States) McFall, B. C.; Mohammed, F.; Fritz, H. M. 2012-04-01 Tsunamis generated by landslides and volcanic island collapses account for some of the most catastrophic events. Major tsunamis caused by landslides or volcanic island collapse were recorded at Krakatoa in 1883, Grand Banks, Newfoundland in 1929, Lituya Bay, Alaska in 1958, Papua New Guinea in 1998, and Java in 2006. Source and runup scenarios based on real world events are physically modeled in the three dimensional NEES tsunami wave basin (TWB) at Oregon State University (OSU). A novel pneumatic landslide tsunami generator (LTG) was deployed to simulate landslides with varying geometry and kinematics. 
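Laboratory landslide-tsunami experiments of this kind are interpreted through Froude similarity, under which lengths scale with the geometric factor and velocities and times with its square root. A small helper follows; the 1:100 scale factor and the model-scale values are assumed purely for illustration.

```python
import math

def froude_to_prototype(scale, length_m, velocity_ms, time_s):
    """Convert model-scale measurements to prototype scale under Froude similarity:
    lengths scale by the factor, velocities and times by its square root."""
    return {"length_m": length_m * scale,
            "velocity_ms": velocity_ms * math.sqrt(scale),
            "time_s": time_s * math.sqrt(scale)}

# Assumed 1:100 geometric scale and arbitrary model-scale measurements.
print(froude_to_prototype(100, length_m=0.2, velocity_ms=5.0, time_s=3.0))
```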
The LTG consists of a sliding box filled with up to 1,350 kg of naturally rounded river gravel which is accelerated by means of four pneumatic pistons down the 2H: 1V slope, launching the granular landslide towards the water at velocities of up to 5 m/s. Topographical and bathymetric features can greatly affect wave characteristics and runup heights. Landslide tsunamis are studied in different topographic and bathymetric configurations: far field propagation and runup, a narrow fjord and curved headland configurations, and a conical island setting representing landslides off an island or a volcanic flank collapse. Water surface elevations were measured using an array of resistance wave gauges. The granulate landslide width, thickness and front velocity were measured using above and underwater cameras. Landslide 3-dimensional surface reconstruction and surface velocity properties were measured using a stereo particle image velocimetry (PIV) setup. The speckled pattern on the surface of the granular landslide allows for cross-correlation based PIV analysis. Wave runup was measured with resistance wave gauges along the slope and verified with video image processing. The measured landslide and tsunami data serve to validate and advance 3-dimensional numerical landslide tsunami and prediction models. 16. A Numerical Modelling Study on the Potential Role of Tsunamis in the Biblical Exodus Directory of Open Access Journals (Sweden) José M. Abril 2015-07-01 Full Text Available The reliability of the narrative of the Biblical Exodus has been subject of heated debate for decades. Recent archaeological studies seem to provide new insight of the exodus path, and although with a still controversial chronology, the effects of the Minoan Santorini eruption have been proposed as a likely explanation of the biblical plagues. Particularly, it has been suggested that flooding by the associated tsunamis could explain the first plague and the sea parting. Recent modelling studies have shown that Santorini’s tsunami effects were negligible in the eastern Nile Delta, but the released tectonic stress could have triggered local tsunamigenic sources in sequence. This paper is aimed to a quantitative assessment of the potential role of tsunamis in the biblical parting of the sea. Several “best case” scenarios are tested through the application of a numerical model for tsunami propagation that has been previously validated. The former paleogeographic conditions of the eastern Nile Delta have been implemented based upon recent geological studies; and several feasible local sources for tsunamis are proposed. Tsunamis triggered by submarine landslides of 10–30 km3 could have severely impacted the northern Sinai and southern Levantine coasts but with weak effects in the eastern Nile Delta coastline. The lack of noticeable flooding in this area under the most favorable conditions for tsunamis, along with the time sequence of water elevations, make difficult to accept them as a plausible and literally explanation of the first plague and of the drowning of the Egyptian army in the surroundings of the former Shi-Hor Lagoon. 17. A shallow water model for the propagation of tsunami via Lattice Boltzmann method Science.gov (United States) Zergani, Sara; Aziz, Z. A.; Viswanathan, K. K. 2015-01-01 An efficient implementation of the lattice Boltzmann method (LBM) for the numerical simulation of the propagation of long ocean waves (e.g. tsunami), based on the nonlinear shallow water (NSW) wave equation is presented. 
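A minimal 1-D lattice Boltzmann sketch for the shallow-water equations follows (D1Q3 lattice, BGK collision, flat bottom, periodic boundaries), in the spirit of the shallow-water LBM formulations the entry above refers to. It is not the authors' code: bathymetry, forcing and wetting or drying are omitted, and the depth, grid and relaxation time are arbitrary test values.

```python
import numpy as np

# D1Q3 lattice Boltzmann sketch for the 1-D shallow-water equations
# (BGK collision, flat bottom, periodic boundaries). Parameters are test values.
g = 9.81
nx, dx = 400, 1000.0                  # 400 km domain, 1 km cells
e = 100.0                             # lattice speed dx/dt (m/s), well above sqrt(g*h)
dt = dx / e
tau = 1.0                             # BGK relaxation time (must exceed 0.5)

x = np.arange(nx) * dx
h = 100.0 + 1.0 * np.exp(-((x - 200e3) / 20e3) ** 2)   # 1 m hump on 100 m depth
u = np.zeros(nx)

def feq(h, u):
    f0 = h - g * h**2 / (2 * e**2) - h * u**2 / e**2
    fp = g * h**2 / (4 * e**2) + h * u / (2 * e) + h * u**2 / (2 * e**2)
    fm = g * h**2 / (4 * e**2) - h * u / (2 * e) + h * u**2 / (2 * e**2)
    return f0, fp, fm

f0, fp, fm = feq(h, u)
for _ in range(1000):                 # 1000 steps of 10 s, about 2.8 h of propagation
    e0, ep, em = feq(h, u)
    f0 -= (f0 - e0) / tau             # collision
    fp -= (fp - ep) / tau
    fm -= (fm - em) / tau
    fp = np.roll(fp, 1)               # streaming to the right
    fm = np.roll(fm, -1)              # streaming to the left
    h = f0 + fp + fm                  # macroscopic depth
    u = e * (fp - fm) / h             # macroscopic velocity

print(f"max free-surface anomaly after run: {h.max() - 100.0:.3f} m")
```

A real application would add a bed-slope force term for complex bottom elevation and open or reflective boundaries.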
The LBM is an alternative numerical procedure for the description of incompressible hydrodynamics and has the potential to serve as an efficient solver for incompressible flows in complex geometries. This work proposes the NSW equations for the irrotational surface waves in the case of complex bottom elevation. In recent time, equation involving shallow water is the current norm in modelling tsunami operations which include the propagation zone estimation. Several test-cases are presented to verify our model. Some implications to tsunami wave modelling are also discussed. Numerical results are found to be in excellent agreement with theory. 18. A shallow water model for the propagation of tsunami via Lattice Boltzmann method International Nuclear Information System (INIS) Zergani, Sara; Aziz, Z A; Viswanathan, K K 2015-01-01 An efficient implementation of the lattice Boltzmann method (LBM) for the numerical simulation of the propagation of long ocean waves (e.g. tsunami), based on the nonlinear shallow water (NSW) wave equation is presented. The LBM is an alternative numerical procedure for the description of incompressible hydrodynamics and has the potential to serve as an efficient solver for incompressible flows in complex geometries. This work proposes the NSW equations for the irrotational surface waves in the case of complex bottom elevation. In recent time, equation involving shallow water is the current norm in modelling tsunami operations which include the propagation zone estimation. Several test-cases are presented to verify our model. Some implications to tsunami wave modelling are also discussed. Numerical results are found to be in excellent agreement with theory 19. The effects of the 2004 tsunami on a coastal aquifer in Sri Lanka DEFF Research Database (Denmark) Vithanage, Meththika Suharshini; Engesgaard, Peter Knudegaard; Villholth, Karen G. 2012-01-01 ) of the groundwater were carried out monthly from October 2005 to August 2007. The aquifer system and tsunami saltwater intrusion were modeled using the variable-density flow and solute transport code HST3D to understand the tsunami plume behavior and estimate the aquifer recovery time. EC values reduced as a result...... on groundwater in coastal areas. Field investigations on the east coast of Sri Lanka were carried out along a transect located perpendicular to the coastline on a 2.4 km wide sand stretch bounded by the sea and a lagoon. Measurements of groundwater table elevation and electrical conductivity (EC...... of the monsoonal rainfall following the tsunami with a decline in reduction rate during the dry season. The upper part of the saturated zone (down to 2.5 m) returned to freshwater conditions (EC 20. The Solomon Islands tsunami of 6 February 2013 field survey in the Santa Cruz Islands Science.gov (United States) Fritz, H. M.; Papantoniou, A.; Biukoto, L.; Albert, G. 2013-12-01 On February 6, 2013 at 01:12:27 UTC (local time: UTC+11), a magnitude Mw 8.0 earthquake occurred 70 km to the west of Ndendo Island (Santa Cruz Island) in the Solomon Islands. The under-thrusting earthquake near a 90° bend, where the Australian plate subducts beneath the Pacific plate generated a locally focused tsunami in the Coral Sea and the South Pacific Ocean. The tsunami claimed the lives of 10 people and injured 15, destroyed 588 houses and partially damaged 478 houses, affecting 4,509 people in 1,066 households corresponding to an estimated 37% of the population of Santa Cruz Island. 
A multi-disciplinary international tsunami survey team (ITST) was deployed within days of the event to document flow depths, runup heights, inundation distances, sediment and coral boulder depositions, land level changes, damage patterns at various scales, performance of the man-made infrastructure and impact on the natural environment. The 19 to 23 February 2013 ITST covered 30 locations on 4 Islands: Ndendo (Santa Cruz), Tomotu Noi (Lord Howe), Nea Tomotu (Trevanion, Malo) and Tinakula. The reconnaissance completely circling Ndendo and Tinakula logged 240 km by small boat and additionally covered 20 km of Ndendo's hard hit western coastline by vehicle. The collected survey data includes more than 80 tsunami runup and flow depth measurements. The tsunami impact peaked at Manoputi on Ndendo's densely populated west coast with maximum tsunami height exceeding 11 m and local flow depths above ground exceeding 7 m. A fast tide-like positive amplitude of 1 m was recorded at Lata wharf inside Graciosa Bay on Ndendo Island and misleadingly reported in the media as representative tsunami height. The stark contrast between the field observations on exposed coastlines and the Lata tide gauge recording highlights the importance of rapid tsunami reconnaissance surveys. Inundation distance and damage more than 500 m inland were recorded at Lata airport on Ndendo Island. Landslides were 1. Seismic and tsunami safety margin assessment Energy Technology Data Exchange (ETDEWEB) NONE 2013-08-15 Nuclear Regulation Authority is going to establish new seismic and tsunami safety guidelines to increase the safety of NPPs. The main purpose of this research is testing structures/components important to safety and tsunami resistant structures/components, and evaluating the capacity of them against earthquake and tsunami. Those capacity data will be utilized for the seismic and tsunami back-fit review based on the new seismic and tsunami safety guidelines. The summary of the program in 2012 is as follows. 1. Component seismic capacity test and quantitative seismic capacity evaluation. PWR emergency diesel generator partial-model seismic capacity tests have been conducted and quantitative seismic capacities have been evaluated. 2. Seismic capacity evaluation of switching-station electric equipment. Existing seismic test data investigation, specification survey and seismic response analyses have been conducted. 3. Tsunami capacity evaluation of anti-inundation measure facilities. Tsunami pressure test have been conducted utilizing a small breakwater model and evaluated basic characteristics of tsunami pressure against seawall structure. (author) 2. Seismic and tsunami safety margin assessment International Nuclear Information System (INIS) 2013-01-01 Nuclear Regulation Authority is going to establish new seismic and tsunami safety guidelines to increase the safety of NPPs. The main purpose of this research is testing structures/components important to safety and tsunami resistant structures/components, and evaluating the capacity of them against earthquake and tsunami. Those capacity data will be utilized for the seismic and tsunami back-fit review based on the new seismic and tsunami safety guidelines. The summary of the program in 2012 is as follows. 1. Component seismic capacity test and quantitative seismic capacity evaluation. PWR emergency diesel generator partial-model seismic capacity tests have been conducted and quantitative seismic capacities have been evaluated. 2. 
Seismic capacity evaluation of switching-station electric equipment. Existing seismic test data investigation, specification survey and seismic response analyses have been conducted. 3. Tsunami capacity evaluation of anti-inundation measure facilities. Tsunami pressure test have been conducted utilizing a small breakwater model and evaluated basic characteristics of tsunami pressure against seawall structure. (author) 3. Tsunami excitation by inland/coastal earthquakes: the Green function approach Directory of Open Access Journals (Sweden) T. B. Yanovskaya 2003-01-01 Full Text Available In the framework of the linear theory, the representation theorem is derived for an incompressible liquid layer with a boundary of arbitrary shape and in a homogeneous gravity field. In addition, the asymptotic representation for the Green function, in a layer of constant thickness is obtained. The validity of the approach for the calculation of the tsunami wavefield based on the Green function technique is verified comparing the results with those obtained from the modal theory, for a liquid layer of infinite horizontal dimensions. The Green function approach is preferable for the estimation of the excitation spectra, since in the case of an infinite liquid layer it leads to simple analytical expressions. From this analysis it is easy to describe the peculiarities of tsunami excitation by different sources. The method is extended to the excitation of tsunami in a semi-infinite layer with a sloping boundary. Numerical modelling of the tsunami wavefield, excited by point sources at different distances from the coastline, shows that when the source is located at a distance from the coastline equal or larger than the source depth, the shore presence does not affect the excitation of the tsunami. When the source is moved towards the coastline, the low frequency content in the excitation spectrum decreases, while the high frequencies content increases dramatically. The maximum of the excitation spectra from inland sources, located at a distance from the shore like the source depth, becomes less than 10% of that radiated if the same source is located in the open ocean. The effect of the finiteness of the source is also studied and the excitation spectrum is obtained by integration over the fault area. Numerical modelling of the excitation spectra for different source models shows that, for a given seismic moment, the spectral level, as well as the maximum value of the spectra, decreases with increasing fault size. When the sources are located in the
A multi-disciplinary ITST was deployed within days of the event to document flow depths, runup heights, inundation distances, sediment deposition, damage patterns at various scales, performance of the man-made infrastructure and impact on the natural environment per established protocols. The 3-25 March ITST covered an 800km stretch of coastline from Quintero to Mehuín in various subgroups the Pacific Islands of Santa María, Juan Fernández Archipelago, and Rapa Nui (Easter Island), while Mocha Island was surveyed 21-23 May, 2010. The collected survey data includes more than 400 tsunami runup and flow depth measurements. The tsunami impact peaked with a localized maximum runup of 29m on a coastal bluff at Constitución and 23 m on marine terraces on Mocha. A significant variation in tsunami impact was observed along Chile’s mainland both at local and regional scales. Inundation and damage also occurred several kilometers inland along rivers. Observations from the Chile tsunami are compared against the 2004 Indian Ocean tsunami. The tsunamigenic seafloor displacements were partially characterized based on coastal uplift measurements along a 100 km stretch of coastline between Caleta Chome and Punta Morguilla. More than 2 m vertical uplift were measured on Santa Maria Island. Coastal uplift measurements in Chile are compared with tectonic land level changes 5. Topographic data acquisition in tsunami-prone coastal area using Unmanned Aerial Vehicle (UAV) Science.gov (United States) Marfai, M. A.; Sunarto; Khakim, N.; Cahyadi, A.; Rosaji, F. S. C.; Fatchurohman, H.; Wibowo, Y. A. 2018-04-01 The southern coastal area of Java Island is one of the nine seismic gaps prone to tsunamis. The entire coastline in one of the regencies, Gunungkidul, is exposed to the subduction zone in the Indian Ocean. Also, the growing tourism industries in the regency increase its vulnerability, which places most of its areas at high risk of tsunamis. The same case applies to Kukup, i.e., one of the most well-known beaches in Gunungkidul. Structurally shaped cliffs that surround it experience intensive wave erosion process, but it has very minimum access for evacuation routes. Since tsunami modeling is a very advanced analysis, it requires an accurate topographic data. Therefore, the research aimed to generate the topographic data of Kukup Beach as the baseline in tsunami risk reduction analysis and disaster management. It used aerial photograph data, which was acquired using Unmanned Aerial Vehicle (UAV). The results showed that the aerial photographs captured by drone had accurate elevation and spatial resolution. Therefore, they are applicable for tsunami modeling and disaster management. 6. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources Science.gov (United States) Jiménez, César; Carbonel, Carlos; Rojas, Joel 2018-04-01 We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height in real and virtual tidal stations along the Peruvian coast, with this purpose a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km2) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. 
The resulting tsunami waveform is obtained from the superposition of synthetic waveforms corresponding to several seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m, in the case of the Arica tide station an error (from the maximum height of the observed and simulated waveform) of 3.5% was obtained, for Callao station the error was 12% and the largest error was in Chimbote with 53.5%, however, due to the low amplitude of the Chimbote wave (<1 m), the overestimated error, in this case, is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary. 7. A prehistoric tsunami induced long-lasting ecosystem changes on a semi-arid tropical island--the case of Boka Bartol (Bonaire, Leeward Antilles). Science.gov (United States) Engel, Max; Brückner, Helmut; Fürstenberg, Sascha; Frenzel, Peter; Konopczak, Anna Maria; Scheffers, Anja; Kelletat, Dieter; May, Simon Matthias; Schäbitz, Frank; Daut, Gerhard 2013-01-01 The Caribbean is highly vulnerable to coastal hazards. Based on their short recurrence intervals over the intra-American seas, high-category tropical cyclones and their associated effects of elevated storm surge, heavy wave impacts, mudslides and floods represent the most serious threat. Given the abundance of historical accounts and trigger mechanisms (strike-slip motion and oblique collision at the northern and southern Caribbean plate boundaries, submarine and coastal landslides, volcanism), tsunamis must be considered as well. This paper presents interdisciplinary multi-proxy investigations of sediment cores (grain size distribution, carbonate content, loss-on-ignition, magnetic susceptibility, microfauna, macrofauna) from Washington-Slagbaai National Park, NW Bonaire (Leeward Antilles). No historical tsunami is recorded for this island. However, an allochthonous marine layer found in all cores at Boka Bartol reveals several sedimentary criteria typically linked with tsunami deposits. Calibrated (14)C data from these cores point to a palaeotsunami with a maximum age of 3,300 years. Alternative explanations for the creation of this layer, such as inland flooding during tropical cyclones, cannot entirely be ruled out, though in recent times even the strongest of these events on Bonaire did not deposit significant amounts of sediment onshore. The setting of Boka Bartol changed from an open mangrove-fringed embayment into a poly- to hyperhaline lagoon due to the establishment or closure of a barrier of coral rubble during or subsequent to the inferred event. The timing of the event is supported by further sedimentary evidence from other lagoonal and alluvial archives on Bonaire. 8. -Advanced Models for Tsunami and Rogue Waves Directory of Open Access Journals (Sweden) D. W. Pravica 2012-01-01 Full Text Available A wavelet , that satisfies the q-advanced differential equation for , is used to model N-wave oscillations observed in tsunamis. Although q-advanced ODEs may seem nonphysical, we present an application that model tsunamis, in particular the Japanese tsunami of March 11, 2011, by utilizing a one-dimensional wave equation that is forced by . The profile is similar to tsunami models in present use. 
The function is a wavelet that satisfies a q-advanced harmonic oscillator equation. It is also shown that another wavelet, , matches a rogue-wave profile. This is explained in terms of a resonance wherein two small amplitude forcing waves eventually lead to a large amplitude rogue. Since wavelets are used in the detection of tsunamis and rogues, the signal-analysis performance of and is examined on actual data. 9. Tsunami sediments and their grain size characteristics Science.gov (United States) Sulastya Putra, Purna 2018-02-01 Characteristics of tsunami deposits are very complex as the deposition by tsunami is very complex processes. The grain size characteristics of tsunami deposits are simply generalized no matter the local condition in which the deposition took place. The general characteristics are fining upward and landward, poor sorting, and the grain size distribution is not unimodal. Here I review the grain size characteristics of tsunami deposit in various environments: swale, coastal marsh and lagoon/lake. Review results show that although there are similar characters in some environments and cases, but in detail the characteristics in each environment can be distinguished; therefore, the tsunami deposit in each environment has its own characteristic. The local geological and geomorphological condition of the environment may greatly affect the grain size characteristics. 10. The El Asnam 1980 October 10 inland earthquake: a new hypothesis of tsunami generation Science.gov (United States) Roger, J.; Hébert, H.; Ruegg, J.-C.; Briole, P. 2011-06-01 The Western Mediterranean Sea is not considered as a high seismic region. Only several earthquakes with magnitude above five occur each year and only a handful have consequences on human beings and infrastructure. The El Asnam (Algeria) earthquake of 1980 October 10 with an estimated magnitude Ms= 7.3 is one of the most destructive earthquakes recorded in northern Africa and more largely in the Western Mediterranean Basin. Although it is located inland, it is known to have been followed by a small tsunami recorded on several tide gauges along the southeastern Spanish Coast. In 1954, a similar earthquake having occurred at the same location induced a turbidity current associated to a submarine landslide, which is widely known to have cut submarine phone cables far from the coast. This event was followed by a small tsunami attributed to the landslide. Thus the origin of the tsunami of 1980 was promptly attributed to the same kind of submarine slide. As no evidence of such mass movement was highlighted, and because the tsunami wave periods does not match with a landslide origin in both cases (1954 and 1980), this study considers two rupture scenarios, that the coseismic deformation itself (of about 10 cm off the Algerian coast near Ténès) is sufficient to produce a low amplitude (several centimetres) tsunami able to reach the Spanish southeastern coast from Alicante to Algeciras (Gibraltar strait to the west). After a discussion concerning the proposed rupture scenarios and their respective parameters, numerical tsunami modelling is performed on a set of bathymetric grids. Then the results of wave propagation and amplification (maximum wave height maps) are discussed, with a special attention to Alicante (Spain) Harbour where the location of two historical tide gauges allows the comparison between synthetic mareograms and historical records showing sufficient signal amplitude. 
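Comparisons between synthetic mareograms and tide-gauge records, as in the Alicante case described above, are often summarized with a few simple misfit statistics. The series in the sketch below are synthetic, with a small phase and amplitude mismatch built in; only the metrics are the point.

```python
import numpy as np

dt = 60.0
t = np.arange(0.0, 4 * 3600.0, dt)
# Synthetic "observed" and "synthetic" mareograms (m) with a deliberate mismatch.
observed  = 0.05 * np.sin(2 * np.pi * t / (25 * 60)) * np.exp(-t / 7200.0)
synthetic = 0.04 * np.sin(2 * np.pi * (t - 300.0) / (25 * 60)) * np.exp(-t / 7200.0)

corr = np.corrcoef(observed, synthetic)[0, 1]
nrmse = np.sqrt(np.mean((observed - synthetic) ** 2)) / (observed.max() - observed.min())
amp_ratio = synthetic.max() / observed.max()
print(f"correlation {corr:.2f}, normalised RMS misfit {nrmse:.2f}, "
      f"max-amplitude ratio {amp_ratio:.2f}")
```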
This study is part of the active tsunami hazard assessment in Mediterranean Sea especially 11. FIELD SURVEY REPORT OF TSUNAMI EFFECTS CAUSED BY THE AUGUST 2012 OFFSHORE EL SALVADOR EARTHQUAKE Directory of Open Access Journals (Sweden) Francisco Gavidia-Medina 2015-10-01 Full Text Available This report describes the field survey of the western zone of El Salvador conducted by an international group of scientists and engineers following the earthquake and tsunami of 27 August 2012 (04:37 UTC, 26 August 10:37 pm local time). The earthquake generated a tsunami with a maximum height of ~ 6 m causing inundation of up to 300 m inland along a 40 km section of coastline in eastern El Salvador. * (Note: Presentation from the 6th International Tsunami Symposium of Tsunami Society International in Costa Rica in Sept. 2014 - based on the Field Survey Report of the tsunami effects caused by the August 2012 Earthquake which were compiled in a report by Jose C. Borrero of the University of California Tsunami Research Center. Contributors to that report and field survey participants included Hermann M. Fritz of the Georgia Institute of Technology, Francisco Gavidia-Medina, Jeniffer Larreynaga-Murcia, Rodolfo Torres-Cornejo, Manuel Diaz-Flores and Fabio Alvarad of the Ministerio de Medio Ambiente y Recursos Naturales de El Salvador (MARN), Norwin Acosta of the Instituto Nicaragüense de Estudios Territoriales (INOTER), Julie Leonard of the Office of Foreign Disaster Assistance (USAID, OFDA), Nic Arcos of the International Tsunami Information Center (ITIC) and Diego Arcas of the Pacific Marine Environmental Laboratory (NOAA – PMEL). The figures of this paper are from the report compiled by Jose C. Borrero and are numbered out of sequence from the compiled joint report. The quality of figures 2.2, 2.3 and 2.4 is rather poor and the reader is referred to the original report, as shown in the references. 12. OBSERVATION OF TSUNAMI RADIATION AT TOHOKU BY REMOTE SENSING Directory of Open Access Journals (Sweden) Frank C. Lin 2011-01-01 Full Text Available We present prima facie evidence that upon the onset of the Tohoku tsunami of Mar. 11, 2011 infrared radiation was emitted by the tsunami and was detected by the Japanese satellite MTSAT-IR1, in agreement with our earlier findings for the Great Sumatra Tsunami of 2004. Implications for a worldwide Tsunami Early Warning System are discussed. 13. Effect of earthquake and tsunami. Ground motion and tsunami observed at nuclear power station International Nuclear Information System (INIS) Hijikata, Katsuichirou 2012-01-01 Fukushima Daiichi and Daini Nuclear Power Stations (NPSs) were struck by the earthquake off the Pacific coast in the Tohoku District, which occurred at 14:46 on March 11, 2011. Afterwards, tsunamis struck the Tohoku District. In terms of the earthquake observed at the Fukushima NPSs, the acceleration response spectra of the earthquake movement observed on the basic board of reactor buildings exceeded the acceleration response spectra of the response acceleration to the standard seismic ground motion Ss for partial periodic bands at the Fukushima Daiichi NPS. As for the Fukushima Daini NPS, the acceleration response spectra of the earthquake movement observed on the basic board of the reactor buildings was below the acceleration response spectra of the response acceleration to the standard seismic ground motion Ss.
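The spectra compared in the Fukushima entry above are acceleration response spectra of recorded basemat motions. A minimal sketch of computing a 5%-damped pseudo-acceleration spectrum with the Newmark average-acceleration method follows; the input record is synthetic noise, not a Fukushima accelerogram, and the design spectrum Ss against which such results are checked is not reproduced here.

```python
import numpy as np

def response_spectrum(acc, dt, periods, zeta=0.05):
    """Pseudo-acceleration response spectrum of a ground-acceleration record,
    computed per oscillator with the Newmark average-acceleration method."""
    gamma, beta = 0.5, 0.25
    p = -acc                                  # unit-mass effective force
    Sa = []
    for T in periods:
        wn = 2.0 * np.pi / T
        k, c = wn**2, 2.0 * zeta * wn
        keff = k + gamma * c / (beta * dt) + 1.0 / (beta * dt**2)
        u, v = 0.0, 0.0
        a = p[0] - c * v - k * u
        umax = 0.0
        for i in range(len(acc) - 1):
            peff = (p[i + 1]
                    + u / (beta * dt**2) + v / (beta * dt) + (1/(2*beta) - 1) * a
                    + c * (gamma * u / (beta * dt) + (gamma/beta - 1) * v
                           + dt * (gamma/(2*beta) - 1) * a))
            u_new = peff / keff
            v_new = (gamma / (beta * dt)) * (u_new - u) + (1 - gamma/beta) * v \
                    + dt * (1 - gamma/(2*beta)) * a
            a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1/(2*beta) - 1) * a
            u, v, a = u_new, v_new, a_new
            umax = max(umax, abs(u))
        Sa.append(wn**2 * umax)               # pseudo-acceleration
    return np.array(Sa)

# Synthetic record standing in for a basemat accelerogram (m/s^2).
dt = 0.01
t = np.arange(0.0, 20.0, dt)
acc = np.random.default_rng(4).normal(0.0, 1.0, t.size) * np.exp(-((t - 8.0) / 4.0) ** 2)

periods = np.linspace(0.05, 5.0, 60)
Sa = response_spectrum(acc, dt, periods)
print(f"peak Sa = {Sa.max():.2f} m/s^2 at T = {periods[np.argmax(Sa)]:.2f} s")
```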
Areas inundated by Tsunami at each NPS were investigated and tsunami inversion analysis was made to build tsunami source model to reproduce tide record, tsunami height, crustal movement and inundated area, based on tsunami observation records in the wide areas from Hokkaido to Chiba prefectures. Tsunami heights of Fukushima Daiichi and Daini NPSs were recalculated as O.P. +13m and +9m respectively and tsunami peak height difference was attributed to the extent of superposition of tsunami waves of tsunami earthquake type of wave source in the area along interplane trench off the coast in the Fukushima prefecture and interplane earthquake type of wave source in rather deep interplate area off the coast in the Miyagi prefecture. (T. Tanaka) 14. TSUNAMIS AND TSUNAMI-LIKE WAVES OF THE EASTERN UNITED STATES Directory of Open Access Journals (Sweden) James F. Lander 2002-01-01 Full Text Available The threat of tsunamis and tsunami-like waves hitting the eastern United States is very real despite a general impression to the contrary. We have cataloged 40 tsunamis and tsunami-like waves that have occurred in the eastern United States since 1600. Tsunamis were generated from such events as the 1755 Queen Anne’s earthquake, the Grand Banks event of 1929, the Charleston earthquake of 1886, and the New Madrid earthquakes of 1811-1812. The Queen Anne tsunami was observed as far away as St. Martin in the West Indies and is the only known teletsunami generated in this source region.Since subduction zones are absent around most of the Atlantic basin, tsunamis and tsunami-like waves along the United States East Coast are not generated from this traditional source, but appear, in most cases to be the result of slumping or landsliding associated with local earthquakes or with wave action associated with strong storms. Other sources of tsunamis and tsunami-like waves along the eastern seaboard have recently come to light including volcanic debris falls or catastrophic failure of volcanic slopes; explosive decompression of underwater methane deposits or oceanic meteor splashdowns. These sources are considered as well. 15. Real-time correction of tsunami site effect by frequency-dependent tsunami-amplification factor Science.gov (United States) Tsushima, H. 2017-12-01 For tsunami early warning, I developed frequency-dependent tsunami-amplification factor and used it to design a recursive digital filter that can be applicable for real-time correction of tsunami site response. In this study, I assumed that a tsunami waveform at an observing point could be modeled by convolution of source, path and site effects in time domain. Under this assumption, spectral ratio between offshore and the nearby coast can be regarded as site response (i.e. frequency-dependent amplification factor). If the amplification factor can be prepared before tsunamigenic earthquakes, its temporal convolution to offshore tsunami waveform provides tsunami prediction at coast in real time. In this study, tsunami waveforms calculated by tsunami numerical simulations were used to develop frequency-dependent tsunami-amplification factor. Firstly, I performed numerical tsunami simulations based on nonlinear shallow-water theory from many tsuanmigenic earthquake scenarios by varying the seismic magnitudes and locations. The resultant tsunami waveforms at offshore and the nearby coastal observing points were then used in spectral-ratio analysis. 
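The offshore-to-coast spectral-ratio step just described can be sketched by averaging amplitude-spectrum ratios over scenarios. Everything in the sketch below is synthetic: the "true" bay amplification peaking at a 20-minute period is an invented target, used only to show that the averaged ratio recovers it; the real procedure uses nonlinear shallow-water simulations of many tsunamigenic scenarios.

```python
import numpy as np

dt, n_t, n_scen = 30.0, 2048, 20           # 30 s sampling, ~17 h records, 20 scenarios
freq = np.fft.rfftfreq(n_t, d=dt)
rng = np.random.default_rng(5)

# Invented "true" site amplification peaking near a 20-min period (a hypothetical
# bay response), used only as a target that the averaged ratio should recover.
with np.errstate(divide="ignore"):
    period = np.where(freq > 0.0, 1.0 / freq, np.inf)
true_amp = 1.0 + 2.0 * np.exp(-((period - 1200.0) / 400.0) ** 2)

ratios = []
for _ in range(n_scen):
    offshore = rng.normal(0.0, 0.10, n_t)                 # stand-in offshore waveform
    O = np.fft.rfft(offshore)
    coast = np.fft.irfft(O * true_amp, n=n_t)             # coastal response
    coast += rng.normal(0.0, 0.01, n_t)                   # plus a little noise
    ratios.append(np.abs(np.fft.rfft(coast)) / np.abs(O))

amp = np.mean(ratios, axis=0)                             # averaged spectral ratio
i20 = np.argmin(np.abs(freq - 1.0 / 1200.0))
print(f"estimated amplification at a 20-min period: {amp[i20]:.2f}")
```

In practice the averaged factor is then turned into a recursive digital filter so the correction can be applied sample by sample in real time.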
An average of the resulting spectral ratios from the tsunamigenic-earthquake scenarios is regarded as the frequency-dependent amplification factor. Finally, the estimated amplification factor is used in the design of a recursive digital filter that can be applied in the time domain. The above procedure is applied to Miyako Bay on the Pacific coast of northeastern Japan. The averaged tsunami-height spectral ratio (i.e. amplification factor) between the location at the center of the bay and the outside shows a peak at a wave period of 20 min. A recursive digital filter based on the estimated amplification factor shows good performance in real-time correction of tsunami-height amplification due to the site effect. This study is supported by Japan Society for the Promotion of Science (JSPS) KAKENHI grant 15K16309. 16. Tsunami Generation from Asteroid Airburst and Ocean Impact and Van Dorn Effect Science.gov (United States) Robertson, Darrel 2016-01-01 Airburst: In the simulations explored, energy from the airburst couples very weakly with the water, making the tsunami dangerous over a shorter distance than the blast for asteroid sizes up to the maximum expected size that will still airburst (approx. 250 MT). Future areas of investigation: low-entry-angle airbursts create more cylindrical blasts and might couple more efficiently; bursts very close to the ground will increase coupling; inclusion of the thermosphere (>80 km altitude) may show some plume-collapse effects over a large area, although with much less pressure. Ocean impact: The asteroid creates a large cavity in the ocean. The cavity backfills, creating a central jet. Oscillation between the cavity and jet sends out a tsunami wave packet. For a deep-ocean impact the waves are deep-water waves (phase speed = 2x group speed). If the tsunami propagation and inundation calculations are correct for small impacts in deep ocean basins, the resulting tsunami is not a significant hazard unless particularly close to vulnerable communities. Future work: shallow-ocean impact; effect of continental shelf and beach profiles; tsunami vs. blast damage radii for impacts close to populated areas; larger asteroids below the presumed threshold of global effects (Ø200-800 m). 17. FEATURES AND PROBLEMS WITH HISTORICAL GREAT EARTHQUAKES AND TSUNAMIS IN THE MEDITERRANEAN SEA Directory of Open Access Journals (Sweden) Lobkovsky L. 2016-11-01 Full Text Available The present study examines the historical earthquakes and tsunamis of 21 July 365 and of 9 February 1948 in the Eastern Mediterranean Sea. Numerical simulations were performed for the tsunamis generated by underwater seismic sources in the framework of the keyboard model, as well as for their propagation in the Mediterranean Sea basin. Similarly examined were three different types of seismic sources at the same localization near the Island of Crete for the earthquake of 21 July 365, and two different types of seismic sources for the earthquake of 9 February 1948 near the Island of Karpathos. For each scenario, the tsunami wave field characteristics from the earthquake source to coastal zones in the Mediterranean Sea basin were obtained and histograms were constructed showing the distribution of maximum tsunami wave heights along a 5-m isobath. Comparison of tsunami wave characteristics for all the above-mentioned scenarios demonstrates that underwater earthquakes with magnitude M > 7 in the Eastern Mediterranean Sea basin can generate waves with coastal runup up to 9 m. 18.
The First Real-Time Tsunami Animation Science.gov (United States) Becker, N. C.; Wang, D.; McCreery, C.; Weinstein, S.; Ward, B. 2014-12-01 For the first time a U.S. tsunami warning center created and issued a tsunami forecast model animation while the tsunami was still crossing an ocean. Pacific Tsunami Warning Center (PTWC) scientists had predicted they would have this ability (Becker et al., 2012) with their RIFT forecast model (Wang et al., 2009) by using rapidly-determined W-phase centroid-moment tensor earthquake focal mechanisms as tsunami sources in the RIFT model (Wang et al., 2012). PTWC then acquired its own YouTube channel in 2013 for its outreach efforts that showed animations of historic tsunamis (Becker et al., 2013), but could also be a platform for sharing future tsunami animations. The 8.2 Mw earthquake of 1 April 2014 prompted PTWC to issue official warnings for a dangerous tsunami in Chile, Peru and Ecuador. PTWC ended these warnings five hours later, then issued its new tsunami marine hazard product (i.e., no coastal evacuations) for the State of Hawaii. With the international warning canceled but with a domestic hazard still present PTWC generated a forecast model animation and uploaded it to its YouTube channel six hours before the arrival of the first waves in Hawaii. PTWC also gave copies of this animation to television reporters who in turn passed it on to their national broadcast networks. PTWC then created a version for NOAA's Science on a Sphere system so it could be shown on these exhibits as the tsunami was still crossing the Pacific Ocean. While it is difficult to determine how many people saw this animation since local, national, and international news networks showed it in their broadcasts, PTWC's YouTube channel provides some statistics. As of 1 August 2014 this animation has garnered more than 650,000 views. Previous animations, typically released during significant anniversaries, rarely get more than 10,000 views, and even then only when external websites share them. Clearly there is a high demand for a tsunami graphic that shows both the speed and the severity of a 19. Second international tsunami workshop on the technical aspects of tsunami warning systems, tsunami analysis, preparedness, observation and instrumentation International Nuclear Information System (INIS) 1989-01-01 The Second Workshop on the Technical Aspects of Tsunami Warning Systems, Tsunami Analysis, Preparedness, Observation, and Instrumentation, sponsored and convened by the Intergovernmental Oceanographic Commission (IOC), was held on 1-2 August 1989, in the modern and attractive research town of Academgorodok, which is located 20 km south from downtown Novosibirsk, the capital of Siberia, USSR. The Program was arranged in eight major areas of interest covering the following: Opening and Introduction; Survey of Existing Tsunami Warning Centers - present status, results of work, plans for future development; Survey of some existing seismic data processing systems and future projects; Methods for fast evaluation of Tsunami potential and perspectives of their implementation; Tsunami data bases; Tsunami instrumentation and observations; Tsunami preparedness; and finally, a general discussion and adoption of recommendations. The Workshop presentations not only addressed the conceptual improvements that have been made, but focused on the inner workings of the Tsunami Warning System, as well, including computer applications, on-line processing and numerical modelling. 
Furthermore, presentations reported on progress has been made in the last few years on data telemetry, instrumentation and communications. Emphasis was placed on new concepts and their application into operational techniques that can result in improvements in data collection, rapid processing of the data, in analysis and prediction. A Summary Report on the Second International Tsunami Workshop, containing abstracted and annotated proceedings has been published as a separate report. The present Report is a Supplement to the Summary Report and contains the full text of the papers presented at this Workshop. Refs, figs and tabs 20. Earthquake Scenario-Based Tsunami Wave Heights in the Eastern Mediterranean and Connected Seas Science.gov (United States) Necmioglu, Ocal; Özel, Nurcan Meral 2015-12-01 We identified a set of tsunami scenario input parameters in a 0.5° × 0.5° uniformly gridded area in the Eastern Mediterranean, Aegean (both for shallow- and intermediate-depth earthquakes) and Black Seas (only shallow earthquakes) and calculated tsunami scenarios using the SWAN-Joint Research Centre (SWAN-JRC) code ( Mader 2004; Annunziato 2007) with 2-arcmin resolution bathymetry data for the range of 6.5—Mwmax with an Mw increment of 0.1 at each grid in order to realize a comprehensive analysis of tsunami wave heights from earthquakes originating in the region. We defined characteristic earthquake source parameters from a compiled set of sources such as existing moment tensor catalogues and various reference studies, together with the Mwmax assigned in the literature, where possible. Results from 2,415 scenarios show that in the Eastern Mediterranean and its connected seas (Aegean and Black Sea), shallow earthquakes with Mw ≥ 6.5 may result in coastal wave heights of 0.5 m, whereas the same wave height would be expected only from intermediate-depth earthquakes with Mw ≥ 7.0 . The distribution of maximum wave heights calculated indicate that tsunami wave heights up to 1 m could be expected in the northern Aegean, whereas in the Black Sea, Cyprus, Levantine coasts, northern Libya, eastern Sicily, southern Italy, and western Greece, up to 3-m wave height could be possible. Crete, the southern Aegean, and the area between northeast Libya and Alexandria (Egypt) is prone to maximum tsunami wave heights of >3 m. Considering that calculations are performed at a minimum bathymetry depth of 20 m, these wave heights may, according to Green's Law, be amplified by a factor of 2 at the coastline. The study can provide a basis for detailed tsunami hazard studies in the region. 1. Correlation Equation of Fault Size, Moment Magnitude, and Height of Tsunami Case Study: Historical Tsunami Database in Sulawesi Science.gov (United States) 2018-03-01 Sulawesi, one of the biggest island in Indonesia, located on the convergence of two macro plate that is Eurasia and Pacific. NOAA and Novosibirsk Tsunami Laboratory show more than 20 tsunami data recorded in Sulawesi since 1820. Based on this data, determination of correlation between tsunami and earthquake parameter need to be done to proved all event in the past. Complete data of magnitudes, fault sizes and tsunami heights on this study sourced from NOAA and Novosibirsk Tsunami database, completed with Pacific Tsunami Warning Center (PTWC) catalog. This study aims to find correlation between moment magnitude, fault size and tsunami height by simple regression. The step of this research are data collecting, processing, and regression analysis. 
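The regression step just described for the Sulawesi catalog can be sketched in a few lines; a magnitude-height trend of this kind is usually fitted as a straight line in log height. The numbers below are illustrative only, not the NOAA/Novosibirsk records used in the study:

```python
import numpy as np

# Hypothetical (Mw, maximum tsunami height in m) pairs standing in for a catalog.
mw     = np.array([6.6, 6.9, 7.1, 7.4, 7.6, 7.9, 8.1])
height = np.array([0.4, 0.8, 1.1, 2.0, 2.8, 5.5, 7.0])

# Regress log10(height) on Mw, reducing a roughly exponential trend to a line.
slope, intercept = np.polyfit(mw, np.log10(height), 1)
predicted = 10 ** (slope * mw + intercept)
r = np.corrcoef(mw, np.log10(height))[0, 1]

print(f"log10(H) = {slope:.2f}*Mw + {intercept:.2f},  r = {r:.3f}")
print("predicted heights (m):", np.round(predicted, 2))
```

The correlation coefficient r is the quantity that would support (or undermine) the "strongly correlated" conclusion reported next.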
Result shows moment magnitude, fault size and tsunami heights strongly correlated. This analysis is enough to proved the accuracy of historical tsunami database in Sulawesi on NOAA, Novosibirsk Tsunami Laboratory and PTWC. 2. Quantification of tsunami hazard on Canada's Pacific Coast; implications for risk assessment Science.gov (United States) Evans, Stephen G.; Delaney, Keith B. 2015-04-01 Our assessment of tsunami hazard on Canada's Pacific Coast (i.e., the coast of British Columbia) begins with a review of the 1964 tsunami generated by The Great Alaska Earthquake (M9.2) that resulted in significant damage to coastal communities and infrastructure. In particular, the tsunami waves swept up inlets on the west coast of Vancouver Island and damaged several communities; Port Alberni suffered upwards of 5M worth of damage. At Port Alberni, the maximum tsunami wave height was estimated at 8.2 m above mean sea level and was recorded on the stream gauge on the Somass River located at about 7 m a.s.l, 6 km upstream from its mouth. The highest wave (9.75 m above tidal datum) was reported from Shields Bay, Graham Island, Queen Charlotte Islands (Haida Gwaii). In addition, the 1964 tsunami was recorded on tide gauges at a number of locations on the BC coast. The 1964 signal and the magnitude and frequency of traces of other historical Pacific tsunamis (both far-field and local) are analysed in the Tofino tide gauge records and compared to tsunami traces in other tide gauges in the Pacific Basin (e.g., Miyako, Japan). Together with a review of the geological evidence for tsunami occurrence along Vancouver Island's west coast, we use this tide gauge data to develop a quantitative framework for tsunami hazard on Canada's Pacific coast. In larger time scales, tsunamis are a major component of the hazard from Cascadia megathrust events. From sedimentological evidence and seismological considerations, the recurrence interval of megathrust events on the Cascadia Subduction Zone has been estimated by others at roughly 500 years. We assume that the hazard associated with a high-magnitude destructive tsunami thus has an annual frequency of roughly 1/500. Compared to other major natural hazards in western Canada this represents a very high annual probability of potentially destructive hazard that, in some coastal communities, translates into high levels of local risk 3. Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal Science.gov (United States) Wronna, M.; Omira, R.; Baptista, M. A. 2015-11-01 In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). 
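The aggregation over scenarios and the static tide correction described in this entry can be illustrated with a small sketch: take the cell-by-cell maximum of the per-scenario wave-height grids and shift the result by a static offset for each tidal stage. The tiny grids and tide offsets below are invented; the actual study uses nested NSWING grids:

```python
import numpy as np

# Maximum wave-height grids (m above MSL) from individual scenarios on a common grid.
scenario_grids = [
    np.array([[1.2, 2.0, 0.8], [0.5, 3.1, 1.7], [0.0, 1.1, 0.9]]),
    np.array([[0.9, 2.4, 1.5], [1.0, 2.2, 2.6], [0.3, 0.8, 1.2]]),
    np.array([[2.1, 1.3, 0.7], [0.8, 2.9, 2.0], [0.2, 1.5, 0.6]]),
]

# Aggregate scenario: cell-by-cell maximum over all individual scenarios.
aggregate = np.maximum.reduce(scenario_grids)

# Static tide correction: shift the reference level for each tidal stage (values invented).
tide_offsets = {"MLLW": -1.0, "MSL": 0.0, "MHHW": +1.0}
for stage, dz in tide_offsets.items():
    print(stage, "aggregate wave-height grid (m):")
    print(np.round(aggregate + dz, 2))
```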
For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60 % of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km2. 4. Tsunami risk assessments in Messina, Sicily - Italy Science.gov (United States) Grezio, A.; Gasparini, P.; Marzocchi, W.; Patera, A.; Tinti, S. 2012-01-01 We present a first detailed tsunami risk assessment for the city of Messina where one of the most destructive tsunami inundations of the last centuries occurred in 1908. In the tsunami hazard evaluation, probabilities are calculated through a new general modular Bayesian tool for Probability Tsunami Hazard Assessment. The estimation of losses of persons and buildings takes into account data collected directly or supplied by: (i) the Italian National Institute of Statistics that provides information on the population, on buildings and on many relevant social aspects; (ii) the Italian National Territory Agency that provides updated economic values of the buildings on the basis of their typology (residential, commercial, industrial) and location (streets); and (iii) the Train and Port Authorities. For human beings, a factor of time exposition is introduced and calculated in terms of hours per day in different places (private and public) and in terms of seasons, considering that some factors like the number of tourists can vary by one order of magnitude from January to August. Since the tsunami risk is a function of the run-up levels along the coast, a variable tsunami risk zone is defined as the area along the Messina coast where tsunami inundations may occur. 5. Variability of tsunami inundation footprints considering stochastic scenarios based on a single rupture model: Application to the 2011 Tohoku earthquake KAUST Repository Goda, Katsuichiro 2015-06-30 The sensitivity and variability of spatial tsunami inundation footprints in coastal cities and towns due to a megathrust subduction earthquake in the Tohoku region of Japan are investigated by considering different fault geometry and slip distributions. Stochastic tsunami scenarios are generated based on the spectral analysis and synthesis method with regards to an inverted source model. To assess spatial inundation processes accurately, tsunami modeling is conducted using bathymetry and elevation data with 50 m grid resolutions. Using the developed methodology for assessing variability of tsunami hazard estimates, stochastic inundation depth maps can be generated for local coastal communities. These maps are important for improving disaster preparedness by understanding the consequences of different situations/conditions, and by communicating uncertainty associated with hazard predictions. 
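The "spectral analysis and synthesis" generation of stochastic slip referred to in this entry can be caricatured as filtering random phases through a wavenumber-dependent amplitude decay. The sketch below is a toy stand-in under that assumption, not the method of Goda et al.; the grid size, decay law and target slip are all invented:

```python
import numpy as np

def stochastic_slip(nx, ny, dx, corr_len, mean_slip, seed=0):
    """One random slip realization from a simple isotropic wavenumber-amplitude
    decay with random phases (a toy spectral analysis-and-synthesis)."""
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    k = np.sqrt(KX**2 + KY**2)
    amp = 1.0 / (1.0 + (k * corr_len) ** 2)          # amplitude decays with wavenumber
    phase = rng.uniform(0, 2 * np.pi, size=k.shape)  # random phases
    field = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    field -= field.min()                             # keep slip non-negative
    return field * (mean_slip / field.mean())        # rescale to the target mean slip

slip = stochastic_slip(nx=64, ny=32, dx=5.0, corr_len=40.0, mean_slip=10.0)
print("mean slip (m):", round(float(slip.mean()), 2), " max slip (m):", round(float(slip.max()), 2))
```

Each realization would then be fed to the tsunami solver, and the spread of the resulting inundation footprints gives the variability discussed next.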
The analysis indicates that the sensitivity of inundation areas to the geometrical parameters (i.e., top-edge depth, strike, and dip) depends on the tsunami source characteristics and the site location, and is therefore complex and highly nonlinear. The variability assessment of inundation footprints indicates significant influence of slip distributions. In particular, topographical features of the region, such as ria coast and near-shore plain, have major influence on the tsunami inundation footprints. 6. Numerical experiment on tsunami deposit distribution process by using tsunami sediment transport model in historical tsunami event of megathrust Nankai trough earthquake Science.gov (United States) Imai, K.; Sugawara, D.; Takahashi, T. 2017-12-01 A large flow caused by tsunami transports sediments from beach and forms tsunami deposits in land and coastal lakes. A tsunami deposit has been found in their undisturbed on coastal lakes especially. Okamura & Matsuoka (2012) found some tsunami deposits in the field survey of coastal lakes facing to the Nankai trough, and tsunami deposits due to the past eight Nankai Trough megathrust earthquakes they identified. The environment in coastal lakes is stably calm and suitable for tsunami deposits preservation compared to other topographical conditions such as plains. Therefore, there is a possibility that the recurrence interval of megathrust earthquakes and tsunamis will be discussed with high resolution. In addition, it has been pointed out that small events that cannot be detected in plains could be separated finely (Sawai, 2012). Various aspects of past tsunami is expected to be elucidated, in consideration of topographical conditions of coastal lakes by using the relationship between the erosion-and-sedimentation process of the lake bottom and the external force of tsunami. In this research, numerical examination based on tsunami sediment transport model (Takahashi et al., 1999) was carried out on the site Ryujin-ike pond of Ohita, Japan where tsunami deposit was identified, and deposit migration analysis was conducted on the tsunami deposit distribution process of historical Nankai Trough earthquakes. Furthermore, examination of tsunami source conditions is possibly investigated by comparison studies of the observed data and the computation of tsunami deposit distribution. It is difficult to clarify details of tsunami source from indistinct information of paleogeographical conditions. However, this result shows that it can be used as a constraint condition of the tsunami source scale by combining tsunami deposit distribution in lakes with computation data. 7. Plasmon tsunamis on metallic nanoclusters. Science.gov (United States) Lucas, A A; Sunjic, M 2012-03-14 A model is constructed to describe inelastic scattering events accompanying electron capture by a highly charged ion flying by a metallic nanosphere. The electronic energy liberated by an electron leaving the Fermi level of the metal and dropping into a deep Rydberg state of the ion is used to increase the ion kinetic energy and, simultaneously, to excite multiple surface plasmons around the positively charged hole left behind on the metal sphere. This tsunami-like phenomenon manifests itself as periodic oscillations in the kinetic energy gain spectrum of the ion. The theory developed here extends our previous treatment (Lucas et al 2011 New J. Phys. 13 013034) of the Ar(q+)/C(60) charge exchange system. 
We provide an analysis of how the individual multipolar surface plasmons of the metallic sphere contribute to the formation of the oscillatory gain spectrum. Gain spectra showing characteristic, tsunami-like oscillations are simulated for Ar(15+) ions capturing one electron in distant collisions with Al and Na nanoclusters. 8. Role of Compressibility on Tsunami Propagation Science.gov (United States) Abdolali, Ali; Kirby, James T. 2017-12-01 In the present paper, we aim to reduce the discrepancies between tsunami arrival times evaluated from tsunami models and real measurements considering the role of ocean compressibility. We perform qualitative studies to reveal the phase speed reduction rate via a modified version of the Mild Slope Equation for Weakly Compressible fluid (MSEWC) proposed by Sammarco et al. (2013). The model is validated against a 3-D computational model. Physical properties of surface gravity waves are studied and compared with those for waves evaluated from an incompressible flow solver over realistic geometry for the 2011 Tohoku-oki event, revealing a reduction in phase speed. Plain Language Summary: Submarine earthquakes and submarine mass failures (SMFs) can generate long gravitational waves (or tsunamis) that propagate at the free surface. Tsunami waves can travel long distances and are known for their dramatic effects on coastal areas. Nowadays, numerical models are used to reconstruct tsunamigenic events for many scientific and socioeconomic purposes, e.g. Tsunami Early Warning Systems, inundation mapping, risk and hazard analysis, etc. A number of typically neglected parameters in these models cause discrepancies between model outputs and observations. Most tsunami models predict tsunami arrival times at distant stations slightly early in comparison to observations. In this study, we show how ocean compressibility would affect the tsunami wave propagation speed. In this framework, an efficient two-dimensional model equation for the weakly compressible ocean has been developed, validated and tested for simplified and real cases against three-dimensional and incompressible solvers. Taking into account the effect of compressibility, the phase speed of surface gravity waves is reduced compared to that of an incompressible fluid. Then, we used the model for the case of the devastating 2011 Tohoku-oki tsunami event, improving the model accuracy. This study sheds light on future model development. 9. Tsunami deposits at MIS Stages 5e and 9 on Oahu, Hawaii: implications for sea level at interglacial stages Science.gov (United States) McMurtry, G. M.; Campbell, J. F.; Fryer, G. J.; Tappin, D. R.; Fietzke, J. 2010-12-01 Sandy, basalt-coral conglomerates associated with both beachrock and coral reefs are found at high elevations on Oahu, Hawaii. They have been attributed to either brief sea level high-stands or storms. The Kahe Point conglomerates are at 12.5 m elevation, whereas the main stage MIS-5e reef at this location has a maximum elevation of 8.2 m. They are loosely consolidated and poorly cemented, graded, poorly sorted, and with varying amounts of basalt and coral clasts ranging from cobble to boulder size. Coral in these deposits has been U-series dated by us at between 120-125 ka (n=5). Four distinct beds, with a gently seaward tilt, are recognized in a road cut section, with each bed composed of a few cm-thick topset bed of fine-grained, shelly, calcareous sand to silt.
Similar high elevation conglomerates and 5e reefs are also described at Mokapu and Kaena Points on Oahu, indicating an island-wide deposit. Older coral clasts, dated at 130 to 142 ka (n=6; oldest by alpha spectrometry) found in association with the stage 5e corals suggest reworking and incorporation of older low-stand reef material. The coarse grain size of the conglomerates indicates deposition from a high-energy event; thus a high-stand source is ruled out. We also consider the overall lithology and up-to-0.5-m bed thickness not to be the result of storms; a series of high frequency storm events is considered unlikely. The weight of the evidence in our opinion clearly indicates deposition by a series of tsunami waves. If correct, this has implications for "probabilistic" models of sea level peaks at least 6.6 m higher than present at stage 5e that use such data in their models (e.g., Kopp et al., 2009), at least for Oahu. Within about 2 km of the Kahe deposit, in a road cut at Ko Olina, there is another markedly similar high-energy, sandy basalt-bearing coral conglomerate sequence at 21 to 25 m elevation. There are at least two distinct beds about one meter in thickness, both gently seaward 10. Can undersea voltage measurements detect tsunamis? Digital Repository Service at National Institute of Oceanography (India) Manoj, C.; Kuvshinov, A.; Neetu, S.; Harinarayana, T. the temporal variations of these electric fields? To answer these questions, we use a barotropic tsunami model and a state-of-the-art 3-D EM induction code to simulate the electric and magnetic fields generated by the Indian Ocean Tsunami. We will first... The solution allows for simulating the electromagnetic (EM) field in spherical models of the Earth with a three-dimensional (3-D) distribution of electrical conductivity. These models consist of a number... 11. Geological evidence of tsunamis and earthquakes at the Eastern Hellenic Arc: correlation with historical seismicity in the eastern Mediterranean Sea Directory of Open Access Journals (Sweden) 2012-12-01 Full Text Available Sedimentary stratigraphy determined by trenching in Dalaman, south-western Turkey, revealed three sand layers at a distance of approximately 240 m from the shoreline and at elevations of +0.30, +0.55 and +0.90 m. Storm surge action does not explain the features of these deposits, which show instead typical characteristics of tsunami deposition. The sand layers correlate with historical tsunamis generated by large earthquakes which ruptured the eastern Hellenic Arc and Trench in 1303, 1481 and 1741. Accelerator mass spectrometry 14C dating of a wood sample from layer II indicated deposition in AD 1473±46, which fits the 1481 event. From an estimated average alluvium deposition rate of approximately 0.13 cm/year, layers I and III were dated at 1322 and 1724, which may represent the large 1303 and 1741 tsunamis. The geological record of the 1303 key event is very poor; therefore, sand layer I perhaps represents an important geological signature of the 1303 tsunami. However, the strong tsunami reported to have been generated by the 1609 earthquake is missing from the Dalaman stratigraphy: this underlines the sensitivity of tsunami geological signatures to various local factors. The 1303 earthquake ruptured the trench between the islands of Crete and Rhodes.
For the earthquakes of 1481, 1609 and 1741 we suggested that they were very likely generated in the Rhodes Abyssal Plain where sea depths of up to approximately 4200 m, together with the thrust component of seismotectonics, favor tsunami generation. Sand dykes directed upwards from layer I to layer II indicated that the 1481 earthquake triggered liquefaction of sand layer I. The results substantially widen our knowledge about the historical earthquake and tsunami activity in the eastern Mediterranean basin. 12. Modelling tsunami inundation for risk analysis at the Andaman Sea Coast of Thailand Science.gov (United States) Kaiser, G.; Kortenhaus, A. 2009-04-01 -wide available. However, to model tsunami-induced inundation for risk analysis and management purposes the accuracy of these data is not sufficient as the processes in the near-shore zone cannot be modelled accurately enough and the spatial resolution of the topography is weak. Moreover, the SRTM data provide a digital surface model which includes vegetation and buildings in the surface description. To improve the data basis additional bathymetric data were used in the near shore zone of the Phang Nga and Phuket coastlines and various remote sensing techniques as well as additional GPS measurements were applied to derive a high resolution topography from satellite and airborne data. Land use classifications and filter methods were developed to correct the digital surface models to digital elevation models. Simulations were then performed with a non-linear shallow water model to model the 2004 Asian Tsunami and to simulate possible future ones. Results of water elevation near the coast were compared with field measurements and observations, and the influence of the resolution of the topography on inundation patterns like water depth, velocity, dispersion and duration of the flood were analysed. The inundation simulation provides detailed hazard maps and is considered a reliable basis for risk assessment and risk zone mapping. Results are regarded vital for estimation of tsunami induced damages and evacuation planning. Results of the aforementioned simulations will be discussed during the conference. Differences of the numerical results using topographic data of different scales and modified by different post processing techniques will be analysed and explained. Further use of the results with respect to tsunami risk analysis and management will also be demonstrated. 13. Numerical modelling and evacuation strategies for tsunami awareness: lessons from the 2012 Haida Gwaii Tsunami Directory of Open Access Journals (Sweden) Angela Santos 2016-07-01 Full Text Available On October 28, 2012, an earthquake occurred offshore Canada, with a magnitude Mw of 7.8, triggering a tsunami that propagated through the Pacific Ocean. The tsunami numerical model results show it would not be expected to generate widespread inundation on Hawaii. Yet, two hours after the earthquake, the Pacific Tsunami Warning Centre (PTWC issued a tsunami warning to the state of Hawaii. Since the state was hit by several tsunamis in the past, regular siren exercises, tsunami hazard maps and other prevention measures are available for public use, revealing that residents are well prepared regarding tsunami evacuation procedures. Nevertheless, residents and tourists evacuated mostly by car, and because of that, heavy traffic was reported, showing that it was a non-viable option for evacuation. 
The tsunami caused minor damages on the coastline, and several car accidents were reported, with one fatality. In recent years, there has been a remarkable interest in tsunami impacts. However, if risk planners seem to be very knowledgeable about how to avoid or mitigate their potential harmful effects, they seem to disregard its integration with other sectors of human activity and other social factors. 14. Development of Tsunami Numerical Model Considering the Disaster Debris such as Cars, Ships and Collapsed Buildings Science.gov (United States) Kozono, Y.; Takahashi, T.; Sakuraba, M.; Nojima, K. 2016-12-01 A lot of debris by tsunami, such as cars, ships and collapsed buildings were generated in the 2011 Tohoku tsunami. It is useful for rescue and recovery after tsunami disaster to predict the amount and final position of disaster debris. The transport form of disaster debris varies as drifting, rolling and sliding. These transport forms need to be considered comprehensively in tsunami simulation. In this study, we focused on the following three points. Firstly, the numerical model considering various transport forms of disaster debris was developed. The proposed numerical model was compared with the hydraulic experiment by Okubo et al. (2004) in order to verify transport on the bottom surface such as rolling and sliding. Secondly, a numerical experiment considering transporting on the bottom surface and drifting was studied. Finally, the numerical model was applied for Kesennuma city where serious damage occurred by the 2011 Tohoku tsunami. In this model, the influence of disaster debris was considered as tsunami flow energy loss. The hydraulic experiments conducted in a water tank which was 10 m long by 30 cm wide. The gate confined water in a storage tank, and acted as a wave generator. A slope was set at downstream section. The initial position of a block (width: 3.2 cm, density: 1.55 g/cm3) assuming the disaster debris was placed in front of the slope. The proposed numerical model simulated well the maximum transport distance and the final stop position of the block. In the second numerical experiment, the conditions were the same as the hydraulic experiment, except for the density of the block. The density was set to various values (from 0.30 to 4.20 g/cm3). This model was able to estimate various transport forms including drifting and sliding. In the numerical simulation of the 2011 Tohoku tsunami, the condition of buildings was modeled as follows: (i)the resistance on the bottom using Manning roughness coefficient (conventional method), and (ii)structure of 15. Tsunami-induced boulder transport - combining physical experiments and numerical modelling Science.gov (United States) Oetjen, Jan; Engel, Max; May, Simon Matthias; Schüttrumpf, Holger; Brueckner, Helmut; Prasad Pudasaini, Shiva 2016-04-01 Coasts are crucial areas for living, economy, recreation, transportation, and various sectors of industry. Many of them are exposed to high-energy wave events. With regard to the ongoing population growth in low-elevation coastal areas, the urgent need for developing suitable management measures, especially for hazards like tsunamis, becomes obvious. These measures require supporting tools which allow an exact estimation of impact parameters like inundation height, inundation area, and wave energy. Focussing on tsunamis, geological archives can provide essential information on frequency and magnitude on a longer time scale in order to support coastal hazard management. 
While fine-grained deposits may quickly be altered after deposition, multi-ton coarse clasts (boulders) may represent an information source on past tsunami events with a much higher preservation potential. Applying numerical hydrodynamic coupled boulder transport models (BTM) is a commonly used approach to analyse characteristics (e.g. wave height, flow velocity) of the corresponding tsunami. Correct computations of tsunamis and the induced boulder transport can provide essential event-specific information, including wave heights, runup and direction. Although several valuable numerical models for tsunami-induced boulder transport exist (e. g. Goto et al., 2007; Imamura et al., 2008), some important basic aspects of both tsunami hydrodynamics and corresponding boulder transport have not yet been entirely understood. Therefore, our project aims at these questions in four crucial aspects of boulder transport by a tsunami: (i) influence of sediment load, (ii) influence of complex boulder shapes other than idealized rectangular shapes, (iii) momentum transfers between multiple boulders, and (iv) influence of non-uniform bathymetries and topographies both on tsunami and boulder. The investigation of these aspects in physical experiments and the correct implementation of an advanced model is an urgent need 16. 2011 Tohoku Tsunami Runup Distribution and Damages around Yamada Bay, Iwate Science.gov (United States) Okayasu, A.; Shimozono, T.; Sato, S.; Tajima, Y.; Liu, H.; Takagawa, T.; Fritz, H. M. 2011-12-01 17. Maximum flood hazard assessment for OPG's deep geologic repository for low and intermediate level waste International Nuclear Information System (INIS) Nimmrichter, P.; McClintock, J.; Peng, J.; Leung, H. 2011-01-01 Ontario Power Generation (OPG) has entered a process to seek Environmental Assessment and licensing approvals to construct a Deep Geologic Repository (DGR) for Low and Intermediate Level Radioactive Waste (L&ILW) near the existing Western Waste Management Facility (WWMF) at the Bruce nuclear site in the Municipality of Kincardine, Ontario. In support of the design of the proposed DGR project, maximum flood stages were estimated for potential flood hazard risks associated with coastal, riverine and direct precipitation flooding. The estimation of lake/coastal flooding for the Bruce nuclear site considered potential extreme water levels in Lake Huron, storm surge and seiche, wind waves, and tsunamis. The riverine flood hazard assessment considered the Probable Maximum Flood (PMF) within the local watersheds, and within local drainage areas that will be directly impacted by the site development. A series of hydraulic models were developed, based on DGR project site grading and ditching, to assess the impact of a Probable Maximum Precipitation (PMP) occurring directly at the DGR site. Overall, this flood assessment concluded there is no potential for lake or riverine based flooding and the DGR area is not affected by tsunamis. However, it was also concluded from the results of this analysis that the PMF in proximity to the critical DGR operational areas and infrastructure would be higher than the proposed elevation of the entrance to the underground works. This paper provides an overview of the assessment of potential flood hazard risks associated with coastal, riverine and direct precipitation flooding that was completed for the DGR development. (author) 18. 
Development of a Probabilistic Tsunami Hazard Analysis in Japan International Nuclear Information System (INIS) Toshiaki Sakai; Tomoyoshi Takeda; Hiroshi Soraoka; Ken Yanagisawa; Tadashi Annaka 2006-01-01 It is meaningful for tsunami assessment to evaluate phenomena beyond the design basis as well as seismic design. Because once we set the design basis tsunami height, we still have possibilities tsunami height may exceeds the determined design tsunami height due to uncertainties regarding the tsunami phenomena. Probabilistic tsunami risk assessment consists of estimating for tsunami hazard and fragility of structures and executing system analysis. In this report, we apply a method for probabilistic tsunami hazard analysis (PTHA). We introduce a logic tree approach to estimate tsunami hazard curves (relationships between tsunami height and probability of excess) and present an example for Japan. Examples of tsunami hazard curves are illustrated, and uncertainty in the tsunami hazard is displayed by 5-, 16-, 50-, 84- and 95-percentile and mean hazard curves. The result of PTHA will be used for quantitative assessment of the tsunami risk for important facilities located on coastal area. Tsunami hazard curves are the reasonable input data for structures and system analysis. However the evaluation method for estimating fragility of structures and the procedure of system analysis is now being developed. (authors) 19. Great Earthquakes, Gigantic Landslides, and the Continuing Enigma of the April Fool's Tsunami of 1946 Science.gov (United States) Fryer, G. J.; Tryon, M. D. 2005-12-01 Paleotsunami studies can extend the record of great earthquakes back into prehistory, but what if the historical record itself is ambiguous? There is growing controversy about whether great earthquakes really occur along the Shumagin and Unimak segments of the Alaska-Aleutian system. The last great tsunami there was April 1, 1946, initiated by an earthquake whose magnitude has variously been reported from 7.1 to 8.5. Okal et al (BSSA, 2003) surveyed the near-field runup and concluded there were two sources: a magnitude 8.5 earthquake, which generated a Pacific-wide tsunami but which produced near-field runups no more than 18 m, and an earthquake-triggered slump whose tsunami reached 42 m at Scotch Cap Light near the western end of Unimak Island, but with runup rapidly decaying eastwards. An M8.5 earthquake, however, is incompatible with GPS strain measurements, which indicate that the maximum earthquake size off Unimak is M7.5. We have long contended that near- and far-field tsunamis were the result of a single earthquake-triggered debris avalanche down the Aleutian slope. In 2004 we were part of an expedition to map and explore the landslide, whose location seemed to be very tightly constrained by the known tsunami travel time to Scotch Cap Light. We found that neither our giant landslide nor Okal et al's smaller slump exist within 100 km of the presumed location. The explanation is obvious in retrospect: the tsunami was so large that it crossed the shallow Aleutian shelf as a bore travelling faster than the theoretical long-wave speed (which we had used to fix the location). Any landslide could only have occurred in an unsurveyed area farther east, off Unimak Bight, the central coast of Unimak Island. That location, however, conflicts with Okal et al's measurements of smaller runup along the Bight. We are now convinced that Okal et al confused the 1946 debris line with the lower line left by the 1957 tsunami. 
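Returning to the probabilistic tsunami hazard analysis of entry 18 above, the combination of logic-tree branches into mean and fractile hazard curves can be illustrated with a small sketch. The branch exceedance curves and weights below are invented, and the percentile definition is one simple choice among several:

```python
import numpy as np

heights = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])      # tsunami height levels (m)

# Annual probabilities of exceedance for three logic-tree branches (values invented).
branch_curves = np.array([
    [1e-2, 5e-3, 1e-3, 2e-4, 3e-5, 2e-6],
    [2e-2, 8e-3, 2e-3, 5e-4, 8e-5, 6e-6],
    [5e-3, 2e-3, 6e-4, 1e-4, 1e-5, 8e-7],
])
weights = np.array([0.5, 0.3, 0.2])                        # branch weights, sum to 1

mean_curve = weights @ branch_curves                       # weighted-mean hazard curve

def weighted_fractile(values, w, q):
    """Smallest branch value whose cumulative weight reaches quantile q."""
    order = np.argsort(values)
    cum = np.cumsum(w[order])
    idx = min(np.searchsorted(cum, q), len(values) - 1)
    return values[order][idx]

fmt = {"float_kind": lambda x: f"{x:.1e}"}
print("mean hazard curve  :", np.array2string(mean_curve, formatter=fmt))
for q in (0.05, 0.16, 0.50, 0.84, 0.95):
    frac = np.array([weighted_fractile(branch_curves[:, i], weights, q)
                     for i in range(len(heights))])
    print(f"{int(q*100):>2}th percentile curve:", np.array2string(frac, formatter=fmt))
```

Reading one column of the output gives the probability of exceeding that height level, which is how a design-basis exceedance question would be posed against the hazard curves.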
They were apparently unaware that the 1946 tsunami 20. Tsunami Induced Scour Around Monopile Foundations DEFF Research Database (Denmark) Fuhrman, David R.; Eltard-Larsen, Bjarke; Baykal, Cüneyt While the run-up, inundation, and destructive potential of tsunami events has received considerable attention in the literature, the associated interaction with the sea bed i.e. boundary layer dynamics, induced sediment transport, and resultant sea bed morphology, has received relatively little...... specific attention. The present paper aims to further the understanding of tsunami-induced scour, by numerically investigating tsunami-induced flow and scour processes around a monopile structure, representative of those commonly utilized as offshore wind turbine foundations. The simulations are based...... a monopile at model (laboratory) spatial and temporal scales. Therefore, prior to conducting such numerical simulations involving tsunami-induced scour, it is necessary to first establish a methodology for maintaining similarity of model and full field scales. To achieve hydrodynamic similarity we... 1. Tsunami Induced Scour Around Monopile Foundations DEFF Research Database (Denmark) Eltard-Larsen, Bjarke; Fuhrman, David R.; Baykal, Cüneyt 2017-01-01 A fully-coupled (hydrodynamic and morphologic) numerical model is presented, and utilized for the simulation of tsunami-induced scour around a monopile structure, representative of those commonly utilized as offshore wind turbine foundations at moderate depths i.e. for depths less than 30 m...... a steady current, where a generally excellent match with experimentally-based results is found. A methodology for maintaining and assessing hydrodynamic and morphologic similarity between field and (laboratory) model-scale tsunami events is then presented, combining diameter-based Froude number similarity...... with that based on the dimensionless wave boundary layer thickness-to-monopile diameter ratio. This methodology is utilized directly in the selection of governing tsunami wave parameters (i.e. velocity magnitude and period) used for subsequent simulation within the numerical model, with the tsunami-induced flow... 2. Hydrophysical manifestations of the Indian ocean tsunami Digital Repository Service at National Institute of Oceanography (India) Sadhuram, Y.; Murthy, T.V.R.; Rao, B.P. described in detail by several authors. This chapter summarises the results of our investigations on the hydrophysical manifestations (salinity and temperature, coastal currents, internal waves, etc.) of the tsunami on the coastal environments in India... 3. The Mauritius and Indian Tsunami Case Study African Journals Online (AJOL) Nafiisah such unforeseen disasters in order to alleviate sufferings and to reduce loss of lives. Nowadays .... up an Indian Ocean Tsunami Warning and Mitigation System (I.O.T.W.S). ... and other natural disasters like floods, typhoons, hurricanes, and. 4. Standardized procedure for tsunami PRA by AESJ International Nuclear Information System (INIS) Kirimoto, Yukihiro; Yamaguchi, Akira; Ebisawa, Katsumi 2013-01-01 After Fukushima Accident (March 11, 2011), the Atomic Energy Society of Japan (AESJ) started to develop the standard of Tsunami Probabilistic Risk Assessment (PRA) for nuclear power plants in May 2011. As Japan is one of the countries with frequent earthquakes, a great deal of efforts has been made in the field of seismic research since the early stage. 
To our regret, the PRA procedures guide for tsunami has not yet been developed although the importance is held in mind of the PRA community. Accordingly, AESJ established a standard to specify the standardized procedure for tsunami PRA considering the results of investigation into the concept, the requirements that should have and the concrete methods regarding tsunami PRA referring the opinions of experts in the associated fields in December 2011 (AESJ-SC-RK004:2011). (author) 5. Tsunamis and Hurricanes A Mathematical Approach CERN Document Server Cap, Ferdinand 2006-01-01 Tsunamis and hurricanes have had a devastating impact on the population living near the coast during the year 2005. The calculation of the power and intensity of tsunamis and hurricanes are of great importance not only for engineers and meteorologists but also for governments and insurance companies. This book presents new research on the mathematical description of tsunamis and hurricanes. A combination of old and new approaches allows to derive a nonlinear partial differential equation of fifth order describing the steepening up and the propagation of tsunamis. The description includes dissipative terms and does not contain singularities or two valued functions. The equivalence principle of solutions of nonlinear large gas dynamics waves and of solutions of water wave equations will be used. An extension of the continuity equation by a source term due to evaporation rates of salt seawater will help to understand hurricanes. Detailed formula, tables and results of the calculations are given. 6. Annotated Tsunami bibliography: 1962-1976 International Nuclear Information System (INIS) Pararas-Carayannis, G.; Dong, B.; Farmer, R. 1982-08-01 This compilation contains annotated citations to nearly 3000 tsunami-related publications from 1962 to 1976 in English and several other languages. The foreign-language citations have English titles and abstracts 7. A rapid estimation of near field tsunami run-up Science.gov (United States) Riqueime, Sebastian; Fuentes, Mauricio; Hayes, Gavin; Campos, Jamie 2015-01-01 Many efforts have been made to quickly estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunami a priori.However, such models are generally based on uniform slip distributions and thus oversimplify the knowledge of the earthquake source. Here, we show how to predict tsunami run-up from any seismic source model using an analytic solution, that was specifically designed for subduction zones with a well defined geometry, i.e., Chile, Japan, Nicaragua, Alaska. The main idea of this work is to provide a tool for emergency response, trading off accuracy for speed. The solutions we present for large earthquakes appear promising. Here, run-up models are computed for: The 1992 Mw 7.7 Nicaragua Earthquake, the 2001 Mw 8.4 Perú Earthquake, the 2003Mw 8.3 Hokkaido Earthquake, the 2007 Mw 8.1 Perú Earthquake, the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake and the recent 2014 Mw 8.2 Iquique Earthquake. The maximum run-up estimations are consistent with measurements made inland after each event, with a peak of 9 m for Nicaragua, 8 m for Perú (2001), 32 m for Maule, 41 m for Tohoku, and 4.1 m for Iquique. 
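As a rough illustration of the magnitude-to-run-up trend implied by the peak values quoted just above, one can fit a simple empirical line in log run-up. This is emphatically not the analytic run-up solution used by the authors, only a five-point curve fit to the numbers listed in the abstract, and the scatter (e.g. the 1992 Nicaragua tsunami earthquake) is large:

```python
import numpy as np

# (Mw, computed peak run-up in m) pairs quoted above; events without a quoted
# peak (2003 Hokkaido, 2007 Peru) are omitted.
mw    = np.array([7.7, 8.4, 8.8, 9.0, 8.2])
runup = np.array([9.0, 8.0, 32.0, 41.0, 4.1])

# Crude empirical trend log10(R) = a*Mw + b fitted to these five points only.
a, b = np.polyfit(mw, np.log10(runup), 1)
for m in (7.5, 8.0, 8.5, 9.0):
    print(f"Mw {m:.1f}: fitted peak run-up ~ {10**(a*m + b):.1f} m")
```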
Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first minutes after the occurrence of similar events. Thus, such calculations will provide faster run-up information than is available from existing uniform-slip seismic source databases or past events of pre-modeled seismic sources. 8. How soon is too soon? When to cancel a warning after a damaging tsunami Science.gov (United States) Fryer, G. J.; Becker, N. C.; Wang, D.; Weinstein, S.; Richards, K. 2012-12-01 Following an earthquake a tsunami warning center (TWC) must determine if a coastal evacuation is necessary and must do so fast enough for the warning to be useful to affected coastlines. Once a damaging tsunami has arrived, the TWC must decide when to cancel its warning, a task often more challenging than the initial hazard assessment. Here we demonstrate the difficulties by investigating the impact of the Tohoku tsunami of 11 March 2011 on the State of Hawaii, which relies on the Pacific Tsunami Warning Center (PTWC) for tsunami hazard guidance. PTWC issued a Tsunami Watch for Hawaii at 10 March 1956 HST (10 minutes after the earthquake) and upgraded to a Tsunami Warning at 2131 HST. The tsunami arrived in Hawaii just before 0300 HST the next day, reached a maximum runup of over 5 m, and did roughly \$50 million in damage throughout the state. PTWC downgraded the Warning to an Advisory at 0730 HST, and canceled the Advisory at 1140 HST. The timing of the downgrade was appropriate—by then it was safe for coastal residents to re-enter the evacuation zone but not to enter the water—but in retrospect PTWC cancelled its Advisory too early. By late morning tide gauges throughout the state had all registered maximum wave heights of 30 cm or less for a couple of hours, so PTWC cancelled. The Center was unaware, however, of ocean behavior at locations without instruments. At Ma'alaea Harbor on the Island of Maui, for example, sea level oscillations exposed the harbor bottom every 20 minutes for several hours after the cancellation. At Waikiki on Oahu, lifeguards rescued 25 swimmers (who had either ignored or were unaware of the cancellation message's caution about hazardous currents) in the hours after the cancellation and performed CPR on one near-drowning victim. Fortunately, there were no deaths. Because of dangerous surges, ocean safety officials closed Hanauma Bay, a popular snorkeling spot on Oahu, for a full day after the tsunami hit. They reassessed the bay the 9. Generation of deterministic tsunami hazard maps in the Bay of Cadiz, south-west Spain Science.gov (United States) Álvarez-Gómez, J. A.; Otero, L.; Olabarrieta, M.; González, M.; Carreño, E.; Baptista, M. A.; Miranda, J. M.; Medina, R.; Lima, V. 2009-04-01 free surface elevation, maximum water depth, maximum current speed, maximum Froude number and maximum impact forces (hydrostatic and dynamic forces). The fault rupture and sea bottom displacement has been computed by means of the Okada equations. As result, a set of more than 100 deterministic thematic maps have been created in a GIS environment incorporating geographical data and high resolution orthorectified satellite images. These thematic maps form an atlas of inundation maps that will be distributed to different government authorities and civil protection and emergency agencies. 
The authors gratefully acknowledge the financial support provided by the EU under the frame of the European Project TRANSFER (Tsunami Risk And Strategies For the European Region), 6th Framework Programme. 10. Tsunami Source Identification on the 1867 Tsunami Event Based on the Impact Intensity Science.gov (United States) Wu, T. R. 2014-12-01 The 1867 Keelung tsunami event has drawn significant attention from people in Taiwan. Not only because the location was very close to the 3 nuclear power plants which are only about 20km away from the Taipei city but also because of the ambiguous on the tsunami sources. This event is unique in terms of many aspects. First, it was documented on many literatures with many languages and with similar descriptions. Second, the tsunami deposit was discovered recently. Based on the literatures, earthquake, 7-meter tsunami height, volcanic smoke, and oceanic smoke were observed. Previous studies concluded that this tsunami was generated by an earthquake with a magnitude around Mw7.0 along the Shanchiao Fault. However, numerical results showed that even a Mw 8.0 earthquake was not able to generate a 7-meter tsunami. Considering the steep bathymetry and intense volcanic activities along the Keelung coast, one reasonable hypothesis is that different types of tsunami sources were existed, such as the submarine landslide or volcanic eruption. In order to confirm this scenario, last year we proposed the Tsunami Reverse Tracing Method (TRTM) to find the possible locations of the tsunami sources. This method helped us ruling out the impossible far-field tsunami sources. However, the near-field sources are still remain unclear. This year, we further developed a new method named 'Impact Intensity Analysis' (IIA). In the IIA method, the study area is divided into a sequence of tsunami sources, and the numerical simulations of each source is conducted by COMCOT (Cornell Multi-grid Coupled Tsunami Model) tsunami model. After that, the resulting wave height from each source to the study site is collected and plotted. This method successfully helped us to identify the impact factor from the near-field potential sources. The IIA result (Fig. 1) shows that the 1867 tsunami event was a multi-source event. A mild tsunami was trigged by a Mw7.0 earthquake, and then followed by the submarine 11. Mega Tsunamis of the World Ocean and Their Implication for the Tsunami Hazard Assessment Science.gov (United States) Gusiakov, V. K. 2014-12-01 Mega tsunamis are the strongest tsunamigenic events of tectonic origin that are characterized by run-up heights up to 40-50 m measured along a considerable part of the coastline (up to 1000 km). One of the most important features of mega-tsunamis is their ability to cross the entire oceanic basin and to cause an essential damage to its opposite coast. Another important feature is their ability to penetrate into the marginal seas (like the Sea of Okhotsk, the Bering Sea) and cause dangerous water level oscillations along the parts of the coast, which are largely protected by island arcs against the impact of the strongest regional tsunamis. Among all known historical tsunamis (nearly 2250 events during the last 4000 years) they represent only a small fraction (less than 1%) however they are responsible for more than half the total tsunami fatalities and a considerable part of the overall tsunami damage. 
The source of all known mega tsunamis is subduction submarine earthquakes with magnitude 9.0 or higher having a return period from 200-300 years to 1000-1200 years. The paper presents a list of 15 mega tsunami events identified so far in historical catalogs with their basic source parameters, near-field and far-field impact effects and their generation and propagation features. The far-field impact of mega tsunamis is largely controlled by location and orientation of their earthquake source as well as by deep ocean bathymetry features. We also discuss the problem of the long-term tsunami hazard assessment when the occurrence of mega tsunamis is taken into account. 12. Our fingerprint in tsunami deposits - anthropogenic markers as a new tsunami identification tool Science.gov (United States) Bellanova, P.; Schwarzbauer, J.; Reicherter, K. R.; Jaffe, B. E.; Szczucinski, W. 2016-12-01 Several recent geochemical studies have focused on the use of inorganic indicators to evaluate a tsunami origin of sediment in the geologic record. However, tsunami transport not only particulate sedimentary material from marine to terrestrial areas (and vice versa), but also associated organic material. Thus, tsunami deposits may be characterized by organic-geochemical parameters. Recently increased attention has been given to the use of natural organic substances (biomarkers) to identify tsunami deposits. To date no studies have been made investigating anthropogenic organic indicators in recent tsunami deposits. Anthropogenic organic markers are more sensitive and reliable markers compared to other tracers due to their specific molecular structural properties and higher source specificity. In this study we evaluate whether anthropogenic substances are useful indicators for determining whether an area has been inundated by a tsunami. We chose the Sendai Plain and Sanemoura and Oppa Bays, Japan, as study sites because the destruction of infrastructure by flooding released environmental pollutants (e.g., fuels, fats, tarmac, plastics, heavy metals, etc.) contaminating large areas of the coastal zone during the 2011 Tohoku-oki tsunami. Organic compounds from the tsunami deposits are extracted from tsunami sediment and compared with the organic signature of unaffected pre-tsunami samples using gas chromatography-mass spectrometry (GS/MS) based analyses. For the anthropogenic markers, compounds such as soil derived pesticides (DDT), source specific PAHs, halogenated aromatics from industrial sources were detected and used to observe the inland extent and the impact of the Tohoku-oki tsunami on the coastal region around Sendai. 13. Correlation of Fault Size, Moment Magnitude, and Tsunami Height to Proved Paleo-tsunami Data in Sulawesi Indonesia Science.gov (United States) 2016-02-01 Sulawesi (Indonesia) island is located in the meeting of three large plates i.e. Indo-Australia, Pacific, and Eurasia. This configuration surely make high risk on tsunami by earthquake and by sea floor landslide. NOAA and Russia Tsunami Laboratory show more than 20 tsunami data recorded in Sulawesi since 1820. Based on this data, determine of correlation between all tsunami parameter need to be done to proved all event in the past. Complete data of magnitudes, fault sizes and tsunami heights in this study sourced from NOAA and Russia Tsunami database and completed with Pacific Tsunami Warning Center (PTWC) catalog. 
This study aims to find the correlations between fault area, moment magnitude, and tsunami height in Sulawesi by simple regression. The steps of this research are data collection, processing, and regression analysis. The results show very good correlations: moment magnitude, tsunami height, and the fault parameters, i.e. length, width, and slip, are linearly correlated. As the fault area increases, the tsunami height and moment magnitude also increase; as the moment magnitude increases, the tsunami height also increases. This analysis is sufficient to confirm that the Sulawesi tsunami parameter catalogs of NOAA, the Russian Tsunami Laboratory and the PTWC are consistent. Keywords: tsunami, magnitude, height, fault 14. Modeling tsunamis induced by retrogressive submarine landslides Science.gov (United States) Løvholt, F.; Kim, J.; Harbitz, C. B. 2015-12-01 Enormous submarine landslides, with volumes up to thousands of km3 and long run-out, may cause tsunamis with widespread effects. Clay-rich landslides, such as Trænadjupet and Storegga offshore Norway, commonly involve retrogressive mass and momentum release mechanisms that affect the tsunami generation. Therefore, such landslides may involve a large number of smaller blocks. As a consequence, the failure mechanisms and release rate of the individual blocks are important for the tsunami generation. Previous attempts to model the tsunami generation due to retrogressive landslides are few, and limited to idealized conditions. Here, we review the basic effects of retrogression on tsunamigenesis in simple geometries. To this end, two different methods are employed for the landslide motion: a series of blocks with prescribed time lags and kinematics, and a dynamic retrogressive model where the inter-block time lag is determined by the model. The effects of parameters such as the time lag on wave height, wavelength, and dispersion are discussed (a schematic block-superposition sketch is given below). Finally, we discuss how the retrogressive effects may have influenced the tsunamis due to large landslides such as the Storegga slide. The research leading to these results has received funding from the Research Council of Norway under grant number 231252 (Project TsunamiLand) and the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement 603839 (Project ASTARTE). 15. Tsunami simulation method initiated from waveforms observed by ocean bottom pressure sensors for real-time tsunami forecast; Applied for 2011 Tohoku Tsunami Science.gov (United States) Tanioka, Yuichiro 2017-04-01 After the tsunami disaster caused by the 2011 Tohoku-oki great earthquake, improvement of tsunami forecasting has been an urgent issue in Japan. The National Institute of Disaster Prevention is installing a cabled earthquake and tsunami observation network (S-NET) on the ocean bottom along the Japan and Kurile trenches. This cable system includes 125 pressure sensors (tsunami meters) separated by about 30 km. Along the Nankai trough, JAMSTEC has already installed and operates a cabled network of seismometers and pressure sensors (DONET and DONET2). These are the densest observation networks located on top of the source areas of great underthrust earthquakes in the world. Real-time tsunami forecasting has traditionally depended on the estimation of earthquake parameters, such as the epicenter, depth, and magnitude. Recently, a tsunami forecast method has been developed that estimates the tsunami source from the tsunami waveforms observed at the ocean bottom pressure sensors.
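The following sketch is a schematic illustration of the block-superposition idea in the retrogressive-landslide entry above (Løvholt et al.): the total source signal is treated as a train of block releases separated by a fixed time lag. The number of blocks, the lag, the pulse shape and the amplitudes are all invented placeholders, not the models actually used in that work.

```python
# Schematic only: superpose the contributions of landslide blocks released one
# after another with a constant time lag. All numbers are placeholders.
import numpy as np

t = np.linspace(0.0, 600.0, 1201)          # time axis, s
n_blocks = 8                                # number of retrogressive blocks
lag = 40.0                                  # time lag between block releases, s
amp = np.linspace(1.0, 0.5, n_blocks)       # assumed decaying block amplitudes

def block_pulse(t, t0, amplitude, tau=30.0):
    """Smooth placeholder pulse for a single block release starting at t0."""
    s = (t - t0) / tau
    return amplitude * np.exp(-s**2) * (t >= t0)

total = sum(block_pulse(t, i * lag, a) for i, a in enumerate(amp))
# A shorter lag lets the pulses overlap and the peak grow; a longer lag keeps
# the individual block signals separate, which is the effect discussed above.
print(f"peak of superposed signal: {total.max():.2f} (arbitrary units) "
      f"at t = {t[total.argmax()]:.0f} s")
```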
However, when many pressure sensors separated by 30 km are available on top of the source area, we do not need to estimate the tsunami source or the earthquake source to compute the tsunami. Instead, we can initiate a tsunami simulation directly from those dense tsunami observations. The differences between tsunami heights observed over a time interval at the ocean bottom pressure sensors separated by 30 km are used to estimate the tsunami height distribution at a particular time. In our new method, the tsunami numerical simulation is initiated from that estimated tsunami height distribution. In this paper, the above method is improved and applied to the tsunami generated by the 2011 Tohoku-oki great earthquake. The tsunami source model of the 2011 Tohoku-oki great earthquake estimated by Gusman et al. (2012) from observed tsunami waveforms and coseismic deformation observed by GPS and ocean bottom sensors is used in this study. The ocean surface deformation is computed from the source model and used as an initial condition of tsunami 16. A culture of tsunami preparedness and applying knowledge from recent tsunamis affecting California Science.gov (United States) Miller, K. M.; Wilson, R. I. 2012-12-01 It is the mission of the California Tsunami Program to ensure public safety by protecting lives and property before, during, and after a potentially destructive or damaging tsunami. In order to achieve this goal, the state has sought first to use finite funding resources to identify and quantify the tsunami hazard using the best available scientific expertise, modeling, data, mapping, and methods at its disposal. Secondly, it has been vital to accurately inform the emergency response community of the nature of the threat by defining inundation zones prior to a tsunami event and by leveraging technical expertise during ongoing tsunami alert notifications (specifically incoming wave heights, arrival times, and the dangers of strong currents). State scientists and emergency managers have been able to learn and apply both scientific and emergency response lessons from recent, distant-source tsunamis affecting coastal California (from Samoa in 2009, Chile in 2010, and Japan in 2011). Emergency managers must understand and plan in advance for specific actions and protocols for each alert notification level provided by the NOAA/NWS West Coast/Alaska Tsunami Warning Center. Finally, the state program has provided education and outreach information via a multitude of delivery methods, activities, and end products while keeping the message simple, consistent, and focused. The goal is a culture of preparedness and understanding of what to do in the face of a tsunami by residents, visitors, and responsible government officials. We provide an update of results and findings made by the state program with the support of the National Tsunami Hazard Mitigation Program through important collaboration with other U.S. States, Territories and agencies. In 2009 the California Emergency Management Agency (CalEMA) and the California Geological Survey (CGS) completed tsunami inundation modeling and mapping for all low-lying, populated coastal areas of California to assist local jurisdictions on 17. The Components of Community Awareness and Preparedness; its Effects on the Reduction of Tsunami Vulnerability and Risk Science.gov (United States) Tufekci, Duygu; Lutfi Suzen, Mehmet; Cevdet Yalciner, Ahmet 2017-04-01
Furthermore, the components of the awareness and preparedness parameter n are widely investigated in global and local practice by using a categorization method to determine different levels for coastal metropolitan areas with different cultures and different hazard perceptions. Moreover, the consistency between the theoretical maximum and practical applications of parameter n is estimated, discussed and presented. In the applications, mainly the Bakirkoy district of Istanbul is analyzed and the results are presented. Acknowledgements: Partial support by the 603839 ASTARTE Project of the EU, the UDAPC-12-14 project of AFAD (Turkey), the 213M534 project of TUBITAK, the Japan-Turkey Joint Research Project by JICA on earthquakes and tsunamis in the Marmara Region (JICA SATREPS - MarDiM Project), and the Istanbul Metropolitan Municipality is acknowledged. 18. Tsunamis Wind waves are deep-water waves because they are normally found in ... shallow water waves observed over the open sea is much weaker. For linear waves, it ..... processes of reflection, refraction, and trapping that the tsunamis reached the ... 19. Tsunamis Science.gov (United States) 20. The raising of tsunami-wall based on tsunami evaluation at Onagawa nuclear power plant International Nuclear Information System (INIS) Takahashi, Jun; Hirata, Kazuo 2017-01-01 Onagawa nuclear power station (Onagawa NPS) is located on the Pacific coast of the Tohoku district, where several massive tsunamis have struck in the past. Because of this natural setting, tsunami safety measures have been planned and carried out since the planning of unit 1. For example, we set an appropriate site height to protect important facilities from tsunamis. As a result, during the massive tsunami caused by the 2011 off the Pacific Coast of Tohoku Earthquake (the 3.11 earthquake) on March 11, 2011, all units of Onagawa NPS achieved cold shutdown. After the 3.11 earthquake, we re-evaluated the tsunami hazard considering the latest knowledge. In the tsunami re-evaluation, we carried out a document investigation of all tsunami source factors and set the standard fault models thought to be appropriate as tsunami wave sources. As a result, the highest water level at the site front is evaluated as 23.1 m. Based on this examination result, we decided to raise the existing seawall (approximately 17 m) to 29 m in consideration of a margin and other factors (the implied margin is worked out in the short calculation below). Because the space on the site was limited, we planned a combination of a steel-pipe type vertical wall (L = 680 m) and an embankment (L = 120 m) made of cement-improved soil. (author) 1. Geological evidence and sediment transport modelling for the 1946 and 1960 tsunamis in Shinmachi, Hilo, Hawaii Science.gov (United States) Chagué, Catherine; Sugawara, Daisuke; Goto, Kazuhisa; Goff, James; Dudley, Walter; Gadd, Patricia 2018-02-01 The Japanese community of Shinmachi, established on low-lying land between downtown Hilo and Waiakea, Hawaii, was obliterated by the 1946 Aleutian tsunami but was rebuilt, only to be destroyed again by the 1960 Chilean tsunami. The aim of this study was to find out if any geological evidence of these well documented events had been preserved in the sedimentary record in Wailoa River State Park, which replaced Shinmachi after the 1960 tsunami.
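The design margin implied by the Onagawa figures quoted above can be checked with a one-line calculation; the 23.1 m evaluated water level and the 29 m wall crest are the values given in that abstract, and nothing else is assumed.

```python
# Freeboard implied by the Onagawa re-evaluation quoted above.
evaluated_max_water_level_m = 23.1   # highest evaluated water level at the site front
raised_seawall_crest_m = 29.0        # crest height of the raised seawall
margin_m = raised_seawall_crest_m - evaluated_max_water_level_m
print(f"margin above the evaluated water level: {margin_m:.1f} m")  # -> 5.9 m
```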
This was achieved by collecting cores in the park and performing sedimentological, chronological and geochemical analyses, the latter also processed by principal component analysis. Sediment transport modelling was carried out for both tsunamis, to infer the source of the sediment and areas of deposition on land. The field survey revealed two distinct units within peat and soil, a thin lower unit composed of weathered basalt fragments within mud (Unit 1) and an upper unit dominated by fine volcanic sand within fine silt exhibiting subtle upward fining and coarsening (Unit 2, consisting of Unit 2A and Unit 2B), although these two anomalous units only occur on the western shore of Waiakea Mill Pond. Analysis with an ITRAX core scanner shows that Unit 1 is characterised by high Mn, Fe, Rb, La and Ce counts, combined with elevated magnetic susceptibility. Based on its chemical and sedimentological characteristics, Unit 1 is attributed to a flood event in Wailoa River that occurred around 1520-1660 CE, most probably as a result of a tropical storm. The sharp lower contact of Unit 2 coincides with the appearance of arsenic, contemporaneous with an increase in Ca, Sr, Si, Ti, K, Zr, Mn, Fe, La and Ce. In this study, As is used as a chronological and source material marker, as it is known to have been released into Wailoa River Estuary and Waiakea Mill Pond by the Canec factory between 1932 and 1963. Thus, not only the chemical and sedimentological evidence but also sediment transport modelling, corroborating the historical record 2. Field Survey of the 17 June 2017 Landslide and Tsunami in Karrat Fjord, Greenland Science.gov (United States) Fritz, H. M.; Giachetti, T.; Anderson, S.; Gauthier, D. 2017-12-01 On 17 June 2017 a massive landslide generated tsunami impacted Karrat Fjord and the Uummannaq fjord system located some 280 km north of Ilulissat in western Greenland. The eastern of two easily recognized landslides detached completely and fell approximately 1 km to sea level, before plunging into the Karrat Fjord and generating a tsunami within the fjord system. The landslide generated tsunami washed 4 victims and several houses into the fjord at Nuugaatsiaq, about 30 km west of the landslide. Eyewitnesses at Nuugaatsiaq and Illorsuit recorded the tsunami inundation on videos. The active western landslide features a back scarp and large cracks, and therefore remains a threat in Karrat Fjord. The villages of Nuugaatsiaq and Illorsuit remain evacuated. The Geotechnical Extreme Events Reconnaissance (GEER) survey team deployed to Greenland from July 6 to 9, 2017. The reconnaissance on July 8 involved approximately 800 km of helicopter flight and landings in several key locations. The survey focused on the landslides and coastlines within 30 km of the landslide in either fjord direction. The aerial reconnaissance collected high quality oblique aerial photogrammetry (OAP) of the landslide, scarp, and debris avalanche track. The 3D model of the landslide provides the ability to study the morphology of the slope on July 8, it provides a baseline model for future surveys, and it can be used to compare to earlier imagery to estimate what happened on June 17. Change detection using prior satellite imagery indicates an approximate 55 million m3 total landslide volume of which 45 million m3 plunged into the fjord from elevations up to 1200 m above the water surface. 
The ground based tsunami survey documented flow depths, runup heights, inundation distances, sediment deposition, damage patterns at various scales, performance of the man-made infrastructure, and impact on the natural and glacial environment. Perishable high-water marks include changes in vegetation and damage to 3. Introduction to "Tsunamis in the Pacific Ocean: 2011-2012" Science.gov (United States) Rabinovich, Alexander B.; Borrero, Jose C.; Fritz, Hermann M. 2014-12-01 With this volume of the Pure and Applied Geophysics (PAGEOPH) topical issue "Tsunamis in the Pacific Ocean: 2011-2012", we are pleased to present 21 new papers discussing tsunami events occurring in this two-year span. Owing to the profound impact resulting from the unique crossover of a natural and nuclear disaster, research into the 11 March 2011 Tohoku, Japan earthquake and tsunami continues; here we present 12 papers related to this event. Three papers report on detailed field survey results and updated analyses of the wave dynamics based on these surveys. Two papers explore the effects of the Tohoku tsunami on the coast of Russia. Three papers discuss the tsunami source mechanism, and four papers deal with tsunami hydrodynamics in the far field or over the wider Pacific basin. In addition, a series of five papers presents studies of four new tsunami and earthquake events occurring over this time period. This includes tsunamis in El Salvador, the Philippines, Japan and the west coast of British Columbia, Canada. Finally, we present four new papers on tsunami science, including discussions on tsunami event duration, tsunami wave amplitude, tsunami energy and tsunami recurrence. 4. Developing an Event-Tree Probabilistic Tsunami Inundation Model for NE Atlantic Coasts: Application to a Case Study Science.gov (United States) Omira, R.; Matias, L.; Baptista, M. A. 2016-12-01 This study constitutes a preliminary assessment of probabilistic tsunami inundation in the NE Atlantic region. We developed an event-tree approach to calculate the likelihood of tsunami flood occurrence and exceedance of a specific near-shore wave height for a given exposure time. Only tsunamis of tectonic origin are considered here, taking into account local, regional, and far-field sources. The approach used here consists of an event-tree method that gathers probability models for seismic sources, tsunami numerical modeling, and statistical methods. It also includes a treatment of aleatoric uncertainties related to source location and tidal stage. Epistemic uncertainties are not addressed in this study. The methodology is applied to the coastal test-site of Sines located in the NE Atlantic coast of Portugal. We derive probabilistic high-resolution maximum wave amplitudes and flood distributions for the study test-site considering 100- and 500-year exposure times. We find that the probability that maximum wave amplitude exceeds 1 m somewhere along the Sines coasts reaches about 60 % for an exposure time of 100 years and is up to 97 % for an exposure time of 500 years. The probability of inundation occurrence (flow depth >0 m) varies between 10 % and 57 %, and from 20 % up to 95 % for 100- and 500-year exposure times, respectively. No validation has been performed here with historical tsunamis. This paper illustrates a methodology through a case study, which is not an operational assessment. 5. The tsunami probabilistic risk assessment of nuclear power plant (3). 
Outline of tsunami fragility analysis International Nuclear Information System (INIS) Mihara, Yoshinori 2012-01-01 The Tsunami Probabilistic Risk Assessment (PRA) standard was issued in February 2012 by the Standard Committee of the Atomic Energy Society of Japan (AESJ). This article details the tsunami fragility analysis, which calculates the damage probability of buildings and structures contributing to core damage and consists of five evaluation steps: (1) selection of the evaluated element and damage mode, (2) selection of the evaluation procedure, (3) evaluation of actual stiffness, (4) evaluation of actual response and (5) evaluation of fragility (damage probability and related measures). As an application example of the standard, calculation results of a tsunami fragility analysis carried out by the tsunami PRA subcommittee of the AESJ are shown, reflecting the latest knowledge of the damage states caused by wave force and other tsunami actions during the 2011 off the Pacific Coast of Tohoku Earthquake. (T. Tanaka) 6. Preliminary assessment of the impacts and effects of the South Pacific tsunami of September 2009 in Samoa Science.gov (United States) Dominey-Howes, D. 2009-12-01 The September 2009 tsunami was a regional South Pacific event of enormous significance. Our UNESCO-IOC ITST Samoa survey used a simplified version of a ‘coupled human-environment systems framework’ (Turner et al., 2003) to investigate the impacts and effects of the tsunami in Samoa. Further, the framework allowed us to identify those factors that affected the vulnerability and resilience of the human-environment system before, during and after the tsunami - a global first. Key findings (unprocessed) include: maximum run-up exceeded 14 metres above sea level; maximum inundation (at right angles to the shore) was approximately 400 metres; maximum inundation with the wave running parallel with the shore (but inland) exceeded 700 metres; buildings sustained varying degrees of damage; damage was correlated with the depth of tsunami flow, velocity, condition of foundations, quality of building materials used, quality of workmanship, adherence to the building code and so on; buildings raised even one metre above the surrounding land surface suffered much less damage; plants, trees and mangroves reduced flow velocity and flow depth, leading to greater chances of human survival and lower levels of building damage; the tsunami has left a clear and distinguishable geological record in terms of sediments deposited in the coastal landscape; the clear sediment layer associated with this tsunami suggests that older (and prehistoric) tsunamis can be identified, helping to answer questions about the frequency and magnitude of tsunamis; the tsunami caused widespread erosion of the coastal and beach zones, but this damage will repair itself naturally and quickly; the tsunami has had clear impacts on ecosystems and these are highly variable; ecosystems will repair themselves naturally and are unlikely to preserve long-term impacts; it is clear that some plant (tree) species are highly resilient and provided immediate places of safety during the tsunami and resources post-tsunami; the people of Samoa are 7. Pemetaan Risiko Tsunami terhadap Bangunan secara Kuantitatif Directory of Open Access Journals (Sweden) Totok Wahyu Wibowo 2017-12-01 Full Text Available ABSTRACT Tsunami is a natural disaster whose occurrences are mostly triggered by submarine earthquakes.
The impact of tsunamis on the coastal environment includes damage to properties, building structures, and infrastructure, as well as economic disruption. Compared to other disasters, tsunamis are deemed unique because they have a very small occurrence probability but a very high threat. The paradigm of Disaster Risk Reduction (DRR) that has developed in the last few years stresses risk as the primary factor in determining disaster strategies. Ploso Sub-district, an area in Pacitan Regency, is potentially affected by tsunamis. The risk mapping of the buildings in this sub-district was created using a quantitative method, composed of a vulnerability map and a building value map. The Papathoma Tsunami Vulnerability 3 (PTVA-3) model was adopted for the vulnerability mapping, and building value data were obtained from a combination of fieldwork and Geographic Information System (GIS) analysis. The risk mapping results show that the Barehan neighbourhood carries the highest risk of loss among all neighbourhoods in Ploso Sub-district, and they can serve as a reference for determining disaster risk reduction strategies there. 8. Application of a Tsunami Warning Message Metric to refine NOAA NWS Tsunami Warning Messages Science.gov (United States) Gregg, C. E.; Johnston, D.; Sorensen, J.; Whitmore, P. 2013-12-01 In 2010, the U.S. National Weather Service (NWS) funded a three-year project to integrate social science into its Tsunami Program. One of three primary requirements of the grant was to make improvements to the tsunami warning messages of the NWS' two Tsunami Warning Centers: the West Coast/Alaska Tsunami Warning Center (WCATWC) in Palmer, Alaska and the Pacific Tsunami Warning Center (PTWC) in Ewa Beach, Hawaii. We conducted focus group meetings with a purposive sample of local, state and Federal stakeholders and emergency managers in six states (AK, WA, OR, CA, HI and NC) and two US Territories (US Virgin Islands and American Samoa) to qualitatively assess information needs in tsunami warning messages, using WCATWC tsunami messages for the March 2011 Tohoku earthquake and tsunami event. We also reviewed the research literature on behavioral response to warnings to develop a tsunami warning message metric that could be used to guide revisions to the tsunami warning messages of both warning centers. The message metric is divided into the categories of Message Content, Style, Order and Formatting, and Receiver Characteristics. A message is evaluated by cross-referencing the message with the operational definitions of the metric factors. Findings are then used to guide revisions of the message until the characteristics of each factor are met. Using findings from this project and findings from a parallel NWS Warning Tiger Team study led by T.
Nicolini, the WCATWC implemented the first of two phases of revisions to their warning messages in November 2012. A second phase of additional changes, which will fully implement the redesign of messages based on the metric, is in progress. The resulting messages will reflect current state-of-the-art knowledge on warning message effectiveness. Here we present the message metric; the evidence-based rationale for the message factors; and examples of previous, existing and proposed messages. 9. THE SAMOA TSUNAMI OF 29 SEPTEMBER 2009 Early Warning and Inundation Assessment Directory of Open Access Journals (Sweden) Giovanni Franchello 2012-01-01 Full Text Available On 29 September 2009 at 17:48:11 UTC, a large earthquake of magnitude 8 struck offshore of the Samoa Islands and generated a large tsunami that destroyed several villages and caused more than 160 fatalities. This report first presents the characteristics of the earthquake and discusses the best estimations for the fault parameters, which are the necessary input data for the hydrodynamic tsunami calculations. Then, the assessment of the near-real-time systems invoked by the Global Disasters Alert and Coordination System (GDACS) and the post-event calculations are performed, making comparisons with the observed tidal measurements and the post-event survey. It was found that the most severely damaged locations are the southern section of the Western Samoa Islands, Tutuila Island in American Samoa and Niuatoputapu Island in Tonga. This is in agreement with the locations indicated by the Red Cross as the most affected and with the results of the post-tsunami surveys. Furthermore, an attempt was made to map the inundation events using more detailed digital elevation models (DEMs) and hydrodynamic modelling, with good results. The satellite images and post-tsunami surveys available for the flooded areas confirm that the hydrodynamic model identified the inundated areas correctly. Indications are given on the DEM grid size needed for the different simulations. 10. The exposure of Sydney (Australia) to earthquake-generated tsunamis, storms and sea level rise: a probabilistic multi-hazard approach. Science.gov (United States) Dall'Osso, F; Dominey-Howes, D; Moore, C; Summerhayes, S; Withycombe, G 2014-12-10 Approximately 85% of Australia's population live along the coastal fringe, an area with high exposure to extreme inundations such as tsunamis. However, to date, no Probabilistic Tsunami Hazard Assessments (PTHA) that include inundation have been published for Australia. This limits the development of appropriate risk reduction measures by decision and policy makers. We describe our PTHA undertaken for the Sydney metropolitan area. Using the NOAA NCTR model MOST (Method for Splitting Tsunamis), we simulate 36 earthquake-generated tsunamis with annual probabilities of 1:100, 1:1,000 and 1:10,000, occurring under present and future predicted sea level conditions. For each tsunami scenario we generate a high-resolution inundation map of the maximum water level and flow velocity, and we calculate the exposure of buildings and critical infrastructure. Results indicate that exposure to earthquake-generated tsunamis is relatively low for present events, but increases significantly with higher sea level conditions. The probabilistic approach allowed us to undertake a comparison with an existing storm surge hazard assessment.
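The annual probabilities used in the Sydney assessment above (1:100, 1:1,000 and 1:10,000) can be related to the exceedance probabilities over an exposure time quoted in the NE Atlantic event-tree study earlier in this list. The conversion below assumes independent years, which is a standard simplification for illustration and not a claim about either study's exact procedure.

```python
# Probability of at least one exceedance during an exposure time of T years,
# given an annual exceedance probability p and assuming independent years.
annual_probabilities = [1 / 100, 1 / 1_000, 1 / 10_000]
exposure_times = [50, 100, 500]  # years

for p in annual_probabilities:
    for T in exposure_times:
        p_exposure = 1.0 - (1.0 - p) ** T
        print(f"annual 1:{round(1 / p):>6} over {T:>3} yr -> {p_exposure:6.1%}")
```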
Interestingly, the exposure to all the simulated tsunamis is significantly lower than that for the 1:100 storm surge scenarios, under the same initial sea level conditions. The results have significant implications for multi-risk and emergency management in Sydney. 11. Introduction to "Global Tsunami Science: Past and Future, Volume II" Science.gov (United States) Rabinovich, Alexander B.; Fritz, Hermann M.; Tanioka, Yuichiro; Geist, Eric L. 2017-08-01 Twenty-two papers on the study of tsunamis are included in Volume II of the PAGEOPH topical issue "Global Tsunami Science: Past and Future". Volume I of this topical issue was published as PAGEOPH, vol. 173, No. 12, 2016 (Eds., E. L. Geist, H. M. Fritz, A. B. Rabinovich, and Y. Tanioka). Three papers in Volume II focus on details of the 2011 and 2016 tsunami-generating earthquakes offshore of Tohoku, Japan. The next six papers describe important case studies and observations of recent and historical events. Four papers related to tsunami hazard assessment are followed by three papers on tsunami hydrodynamics and numerical modelling. Three papers discuss problems of tsunami warning and real-time forecasting. The final set of three papers importantly investigates tsunamis generated by non-seismic sources: volcanic explosions, landslides, and meteorological disturbances. Collectively, this volume highlights contemporary trends in global tsunami research, both fundamental and applied toward hazard assessment and mitigation. 12. Morehead City, North Carolina Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Morehead City, North Carolina Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 13. Nawiliwili, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Nawiliwili, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 14. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea KAUST Repository Sawlan, Zaid A 2012-01-01 parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction while 15. Neah Bay, Washington Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Neah Bay, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 16. Bar Harbor, ME Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Bar Harbor, Maine Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 17. Sitka, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Sitka, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is... 18. 
Christiansted, Virgin Islands Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Christiansted, Virgin Islands Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 19. Arena Cove, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Arena Cove, California Forecast Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 20. Atlantic City, New Jersey Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Atlantic City, New Jersey Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 1. Crescent City, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Crescent City, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 2. Newport, Oregon Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Newport, Oregon Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 3. Wake Island Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Wake Island Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 4. Garibaldi, Oregon Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Garibaldi, Oregon Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 5. Keauhou, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Keauhou, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 6. Charlotte Amalie, Virgin Islands Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Charlotte Amalie, Virgin Islands Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami... 7. Tsunamis: stochastic models of occurrence and generation mechanisms Science.gov (United States) Geist, Eric L.; Oglesby, David D. 2014-01-01 The devastating consequences of the 2004 Indian Ocean and 2011 Japan tsunamis have led to increased research into many different aspects of the tsunami phenomenon. In this entry, we review research related to the observed complexity and uncertainty associated with tsunami generation, propagation, and occurrence described and analyzed using a variety of stochastic methods. In each case, seismogenic tsunamis are primarily considered. 
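A toy example of the kind of stochastic treatment this entry surveys: sampling earthquake magnitudes from a truncated Gutenberg-Richter distribution and counting how often an assumed magnitude threshold for tsunamigenesis is exceeded. The b-value, magnitude bounds and threshold are illustrative choices, and reducing tsunamigenesis to a single magnitude cutoff is a deliberate simplification.

```python
# Toy stochastic catalogue: draw magnitudes from a doubly truncated
# Gutenberg-Richter distribution by inverse-CDF sampling. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(42)
b = 1.0                  # assumed Gutenberg-Richter b-value
m_min, m_max = 6.0, 9.0  # magnitude bounds of the synthetic catalogue
n_events = 10_000        # number of sampled earthquakes
m_tsunami = 7.5          # assumed threshold for a damaging tsunami

beta = b * np.log(10.0)
u = rng.random(n_events)
# Invert F(m) = (1 - exp(-beta (m - m_min))) / (1 - exp(-beta (m_max - m_min)))
trunc = 1.0 - np.exp(-beta * (m_max - m_min))
m = m_min - np.log(1.0 - u * trunc) / beta

frac = np.mean(m >= m_tsunami)
print(f"fraction of sampled events with Mw >= {m_tsunami}: {frac:.3%}")
```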
Stochastic models are developed from the physical theories that govern tsunami evolution combined with empirical models fitted to seismic and tsunami observations, as well as tsunami catalogs. These stochastic methods are key to providing probabilistic forecasts and hazard assessments for tsunamis. The stochastic methods described here are similar to those described for earthquakes (Vere-Jones 2013) and volcanoes (Bebbington 2013) in this encyclopedia. 8. Evaluation of tsunami risk in the Lesser Antilles Directory of Open Access Journals (Sweden) N. Zahibo 2001-01-01 Full Text Available The main goal of this study is to give preliminary estimates of the tsunami risk for the Lesser Antilles. We investigated the available data on tsunamis in the French West Indies using the historical data and the catalogue of tsunamis in the Lesser Antilles. In total, twenty-four (24) tsunamis were recorded in this area over the last 400 years: sixteen (16) events of seismic origin, five (5) events of volcanic origin and three (3) events of unknown source. Most of the tsunamigenic earthquakes (13) occurred in the Caribbean, and three tsunamis were generated by distant earthquakes (near the coasts of Portugal and Costa Rica). The estimates of tsunami risk are based on a preliminary analysis of the seismicity of the Caribbean area and the historical tsunami data. In particular, we investigate the historical extreme tsunami runup data for Guadeloupe, and these data are revised after a survey in Guadeloupe. 9. Westport, Washington Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Westport, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 10. Pago Pago, American Samoa Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Pago Pago, American Samoa Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 11. Daytona Beach, Florida Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Daytona Beach, Florida Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 12. Lahaina, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Lahaina, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 13. Deep-ocean Assessment and Reporting of Tsunamis (DART) Stations Data.gov (United States) Department of Homeland Security — As part of the U.S. National Tsunami Hazard Mitigation Program (NTHMP), the Deep Ocean Assessment and Reporting of Tsunamis (DART(R)) Project is an ongoing effort to... 14. Myrtle Beach, South Carolina Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Myrtle Beach, South Carolina Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 15.
Fajardo, Puerto Rico Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Fajardo, Puerto Rico Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 16. Florence, Oregon Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Florence, Oregon Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 17. Ponce, Puerto Rico Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Ponce, Puerto Rico Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 18. Shemya, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Shemya, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is... 19. Key West, Florida Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Key West, Florida Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 20. Los Angeles, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Los Angeles, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 1. CO-OPS 1-minute Raw Tsunami Water Level Data Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — CO-OPS has been involved with tsunami warning and mitigation since the Coast and Geodetic Survey started the Tsunami Warning System in 1948 to provide warnings to... 2. Montauk, New York Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Montauk, New York Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 3. Port Angeles, Washington Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Port Angeles, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 4. The 15 August 2007 Peru tsunami runup observations and modeling Science.gov (United States) Fritz, Hermann M.; Kalligeris, Nikos; Borrero, Jose C.; Broncano, Pablo; Ortega, Erick 2008-05-01 On 15 August 2007 an earthquake with a moment magnitude (Mw) of 8.0, centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites.
Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights were measured. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. The coast of Peru has experienced numerous deadly and destructive tsunamis throughout history, which highlights the importance of ongoing tsunami awareness and education efforts to ensure successful self-evacuation. 5. Kodiak, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Kodiak, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is... 6. Virginia Beach Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Virginia Beach, Virginia Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 7. Sand Point, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Sand Point, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 8. Ocean City, Maryland Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Ocean City, Maryland Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 9. Cordova, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Cordova, Alaska Forecast Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 10. Kahului, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Kahului, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 11. Nantucket, Massachusetts Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Nantucket, Massachusetts Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Unalaska, Alaska Forecast Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 13. Port Orford, Oregon Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Port Orford, Oregon Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 14. 
Kailua-Kona, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Kailua-Kona, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 15. Seward, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Seward, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is... 16. Seaside, Oregon Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Seaside, Oregon Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 17. Apra Harbor, Guam Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Apra Harbor, Guam Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 18. Kihei, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Kihei, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is... Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Adak, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 20. Arecibo, Puerto Rico Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Arecibo, Puerto Rico Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 1. Santa Barbara, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Santa Barbara, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 2. San Juan, Puerto Rico Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The San Juan, Puerto Rico Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 3. Point Reyes, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Point Reyes, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 4. Port San Luis, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Port San Luis, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 5. 
Pearl Harbor, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Pearl Harbor, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 6. Eureka, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Eureka, California Forecast Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 7. Palm Beach, Florida Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Palm Beach, Florida Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 8. Cape Hatteras, North Carolina Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Cape Hatteras, North Carolina Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 9. Toke Point, Washington Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Toke Point, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 10. Hanalei, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Hanalei, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 11. Homer, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Homer, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is... 12. Projected inundations on the South African coast by tsunami waves African Journals Online (AJOL) Hayley.Cawthra wind waves and swells, and because of its relatively short period, .... Inundation modelling attempts to recreate the tsunami generation in deep or ... The preservation of tsunami deposits in the coastal geological record is a function of the. 13. Nikolski, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Nikolski, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 14. Monterey, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Monterey, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 15. 
Port Alexander, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Port Alexander, Alaska Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 16. San Francisco, California Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The San Francisco, California Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 17. La Push, Washington Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The La Push, Washington Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model.... 18. Elfin Cove, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Elfin Cove, Alaska Forecast Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 19. Haleiwa, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Haleiwa, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 20. British Columbia, Canada Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The British Columbia, Canada Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST)... 1. Hilo, Hawaii Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Hilo, Hawaii Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 2. Savannah, Georgia Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Savannah, Georgia Forecast Model Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST... 3. Chignik, Alaska Tsunami Forecast Grids for MOST Model Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — The Chignik, Alaska Forecast Grids provides bathymetric data strictly for tsunami inundation modeling with the Method of Splitting Tsunami (MOST) model. MOST is a... 4. 
Site-specific seismic probabilistic tsunami hazard analysis: performances and potential applications Science.gov (United States) Tonini, Roberto; Volpe, Manuela; Lorito, Stefano; Selva, Jacopo; Orefice, Simone; Graziani, Laura; Brizuela, Beatriz; Smedile, Alessandra; Romano, Fabrizio; De Martini, Paolo Marco; Maramai, Alessandra; Piatanesi, Alessio; Pantosti, Daniela 2017-04-01 Seismic Probabilistic Tsunami Hazard Analysis (SPTHA) provides the probabilities of exceeding different thresholds of tsunami hazard intensity, at a specific site or region and in a given time span, for tsunamis caused by seismic sources. Results obtained by SPTHA (i.e., probabilistic hazard curves and inundation maps) represent a very important input to risk analyses and land use planning. However, the large variability of source parameters implies the definition of a huge number of potential tsunami scenarios, whose omission could lead to a biased analysis. Moreover, tsunami propagation from source to target requires the use of very expensive numerical simulations. At regional scale, the computational cost can be reduced using assumptions on the tsunami modeling (i.e., neglecting non-linear effects, using coarse topo-bathymetric meshes, empirically extrapolating maximum wave heights on the coast). On the other hand, moving to local scale, a much higher resolution is required and such assumptions drop out, since detailed inundation maps require significantly greater computational resources. In this work we apply a multi-step method to perform a site-specific SPTHA, which can be summarized in the following steps: i) perform a regional hazard assessment to account for both the aleatory and epistemic uncertainties of the seismic source, by combining the use of an event tree and an ensemble modeling technique; ii) apply a filtering procedure which uses cluster analysis to define a significantly reduced number of representative scenarios contributing to the hazard at a specific target site (a generic sketch of this scenario-reduction step is given below); iii) perform high-resolution numerical simulations only for these representative scenarios and for a subset of near-field sources placed in very shallow waters and/or whose coseismic displacements induce ground uplift or subsidence at the target. The method is applied to three target areas in the Mediterranean located around the cities of Milazzo (Italy), Thessaloniki (Greece) and 5. Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas Science.gov (United States) Necmioglu, Ocal; Meral Ozel, Nurcan 2015-04-01 Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. The complex tectonic setting makes accurate a priori assumptions about earthquake source parameters difficult, and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and the lack of offshore sea-level measurements. In addition, the scientific community has had to abandon the paradigm of a “maximum earthquake” predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010), and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as more frequent smaller-sized earthquakes (Satake, 2011).
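Step (ii) of the multi-step procedure in the site-specific SPTHA entry above (Tonini et al.) relies on cluster analysis to reduce the scenario set. The sketch below shows one generic way such a reduction can be done, clustering scenarios by their offshore wave-height footprints with k-means and keeping the member closest to each centroid, weighted by the summed probability of its cluster. The algorithm choice, the features and all numbers are assumptions for illustration, not the authors' actual implementation.

```python
# Generic scenario-reduction sketch: cluster scenarios by an offshore wave-height
# "footprint" and keep one representative per cluster. Illustrative values only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_scenarios, n_offshore_points = 500, 20
footprints = rng.lognormal(mean=0.0, sigma=0.5, size=(n_scenarios, n_offshore_points))
probabilities = rng.dirichlet(np.ones(n_scenarios))   # scenario probabilities, sum to 1

k = 15                                                # number of representatives kept
km = KMeans(n_clusters=k, n_init=10, random_state=1).fit(footprints)

representatives = []
for c in range(k):
    members = np.where(km.labels_ == c)[0]
    if members.size == 0:
        continue
    # the member closest to the cluster centroid stands in for the whole cluster
    d = np.linalg.norm(footprints[members] - km.cluster_centers_[c], axis=1)
    representatives.append((members[np.argmin(d)], probabilities[members].sum()))

print(f"kept {len(representatives)} representative scenarios out of {n_scenarios}")
print(f"total probability carried: {sum(w for _, w in representatives):.3f}")
```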
We have initiated an extensive modeling study to perform a deterministic Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) at each 0.5° x 0.5° size bin for 0-40 km depth (total of 310 bins) and for 40-100 km depth (total of 92 bins) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input parameters for the deterministic tsunami hazard modeling. Nested tsunami simulations of 6 h duration with a coarse (2 arc-min) grid resolution have been performed at EC-JRC premises for the Black Sea and the Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each source defined, using the shallow water finite-difference SWAN code (Mader, 2004), for the magnitude range of 6.5 - Mwmax defined for that bin with a Mw increment of 0.1. Results show that not only the earthquakes resembling the 6. A rapid estimation of tsunami run-up based on finite fault models Science.gov (United States) Campos, J.; Fuentes, M. A.; Hayes, G. P.; Barrientos, S. E.; Riquelme, S. 2014-12-01 Many efforts have been made to estimate the maximum run-up height of tsunamis associated with large earthquakes. This is a difficult task, because of the time it takes to construct a tsunami model using real time data from the source. It is possible to construct a database of potential seismic sources and their corresponding tsunami a priori. However, such models are generally based on uniform slip distributions and thus oversimplify our knowledge of the earthquake source. Instead, we can use finite fault models of earthquakes to give a more accurate prediction of the tsunami run-up. Here we show how to accurately predict tsunami run-up from any seismic source model using an analytic solution found by Fuentes et al. (2013) that was especially calculated for zones with a very well defined strike, e.g., Chile, Japan, Alaska, etc. The main idea of this work is to produce a tool for emergency response, trading off accuracy for quickness. Our solutions for three large earthquakes are promising. Here we compute models of the run-up for the 2010 Mw 8.8 Maule Earthquake, the 2011 Mw 9.0 Tohoku Earthquake, and the recent 2014 Mw 8.2 Iquique Earthquake. Our maximum run-up predictions are consistent with measurements made inland after each event, with a peak of 15 to 20 m for Maule, 40 m for Tohoku, and 2.1 m for the Iquique earthquake. Considering recent advances made in the analysis of real time GPS data and the ability to rapidly resolve the finiteness of a large earthquake close to existing GPS networks, it will be possible in the near future to perform these calculations within the first five minutes after the occurrence of any such event. Such calculations will thus provide more accurate run-up information than is otherwise available from existing uniform-slip seismic source databases. 7. The Euro-Mediterranean Tsunami Catalogue Directory of Open Access Journals (Sweden) Alessandra Maramai 2014-08-01 Full Text Available A unified catalogue containing 290 tsunamis generated in the European and Mediterranean seas from 6150 B.C. to the present day is presented. It is the result of a systematic and detailed review of all the regional catalogues available in literature covering the study area, each of them having their own format and level of accuracy.
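The deterministic Eastern Mediterranean scan described above runs every source bin through scenario magnitudes from 6.5 up to the bin's Mwmax in increments of 0.1. A minimal sketch of that bookkeeping with a hypothetical bin table (the real study also assigns strike, dip, rake and depth to each bin):

import numpy as np

# Hypothetical subset of source bins: bin identifier -> Mwmax assigned to it.
source_bins = {"bin_001": 7.4, "bin_002": 8.0, "bin_003": 6.8}

def scenario_magnitudes(mw_max, mw_min=6.5, step=0.1):
    """Magnitudes from mw_min to mw_max inclusive, in steps of 0.1."""
    n = int(round((mw_max - mw_min) / step)) + 1
    return np.round(mw_min + step * np.arange(n), 1)

total = 0
for bin_id, mw_max in source_bins.items():
    mags = scenario_magnitudes(mw_max)
    total += mags.size
    print(bin_id, mags)
print("total scenarios for these bins:", total)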
The realization of a single catalogue covering such a wide area and involving several countries was a complex task that posed a series of challenges, the most demanding being the standardization and quality of the data. A “reliability” value was used to rate the quality of the data for each event consistently, and this parameter was assigned based on the trustworthiness of the information related to the generating cause, the accuracy of the tsunami description and the availability of coeval bibliographical sources. Following these criteria we included in the catalogue events whose reliability ranges from 0 (“very improbable tsunami”) to 4 (“definite tsunami”). About 900 documentary sources, including historical documents, books, scientific reports, newspapers and previous catalogues, support the tsunami data and descriptions gathered in this catalogue. As a result, in the present paper a list of the 290 tsunamis with their main parameters is reported. The online version of the catalogue, available at http://roma2.rm.ingv.it/en/facilities/data_bases/52/catalogue_of_the_euro-mediterranean_tsunamis, provides additional information such as detailed descriptions, pictures, etc. and the complete list of bibliographical sources. Most of the included events have a high reliability value (3 = “probable” and 4 = “definite”), which makes the Euro-Mediterranean Tsunami Catalogue an essential tool for the implementation of tsunami hazard and risk assessment. 8. Synthetic tsunami waveform catalogs with kinematic constraints Science.gov (United States) Baptista, Maria Ana; Miranda, Jorge Miguel; Matias, Luis; Omira, Rachid 2017-07-01 In this study we present a comprehensive methodology to produce a synthetic tsunami waveform catalogue in the northeast Atlantic, east of the Azores islands. The method uses a synthetic earthquake catalogue compatible with plate kinematic constraints of the area. We use it to assess the tsunami hazard from the transcurrent boundary located between Iberia and the Azores, whose western part is known as the Gloria Fault. This study focuses only on earthquake-generated tsunamis. Moreover, we assume that the time and space distribution of the seismic events is known. To do this, we compute a synthetic earthquake catalogue including all fault parameters needed to characterize the seafloor deformation, covering a time span of 20 000 years, which we consider long enough to ensure the representativeness of earthquake generation on this segment of the plate boundary. The computed time and space rupture distributions are made compatible with global kinematic plate models. We use tsunami empirical Green's functions to efficiently compute the synthetic tsunami waveforms for the dataset of coastal locations, thus providing the basis for tsunami impact characterization. We present the results in the form of offshore wave heights for all coastal points in the dataset. Our results focus on the northeast Atlantic basin, showing that earthquake-induced tsunamis in the transcurrent segment of the Azores-Gibraltar plate boundary pose a minor threat to coastal areas north of Portugal and beyond the Strait of Gibraltar. However, in Morocco, the Azores, and the Madeira islands, we can expect wave heights between 0.6 and 0.8 m, leading to precautionary evacuation of coastal areas.
The Pacific Tsunami Warning Center's Response to the Tohoku Earthquake and Tsunami Science.gov (United States) Weinstein, S. A.; Becker, N. C.; Shiro, B.; Koyanagi, K. K.; Sardina, V.; Walsh, D.; Wang, D.; McCreery, C. S.; Fryer, G. J.; Cessaro, R. K.; Hirshorn, B. F.; Hsu, V. 2011-12-01 The largest Pacific basin earthquake in 47 years, and also the largest magnitude earthquake since the Sumatra 2004 earthquake, struck off of the east coast of the Tohoku region of Honshu, Japan at 5:46 UTC on 11 March 2011. The Tohoku earthquake (Mw 9.0) generated a massive tsunami with runups of up to 40m along the Tohoku coast. The tsunami waves crossed the Pacific Ocean causing significant damage as far away as Hawaii, California, and Chile, thereby becoming the largest, most destructive tsunami in the Pacific Basin since 1960. Triggers on the seismic stations at Erimo, Hokkaido (ERM) and Matsushiro, Honshu (MAJO), alerted Pacific Tsunami Warning Center (PTWC) scientists 90 seconds after the earthquake began. Four minutes after its origin, and about one minute after the earthquake's rupture ended, PTWC issued an observatory message reporting a preliminary magnitude of 7.5. Eight minutes after origin time, the Japan Meteorological Agency (JMA) issued its first international tsunami message in its capacity as the Northwest Pacific Tsunami Advisory Center. In accordance with international tsunami warning system protocols, PTWC then followed with its first international tsunami warning message using JMA's earthquake parameters, including an Mw of 7.8. Additional Mwp, mantle wave, and W-phase magnitude estimations based on the analysis of later-arriving seismic data at PTWC revealed that the earthquake magnitude reached at least 8.8, and that a destructive tsunami would likely be crossing the Pacific Ocean. The earthquake damaged the nearest coastal sea-level station located 90 km from the epicenter in Ofunato, Japan. The NOAA DART sensor situated 600 km off the coast of Sendai, Japan, at a depth of 5.6 km recorded a tsunami wave amplitude of nearly two meters, making it by far the largest tsunami wave ever recorded by a DART sensor. Thirty minutes later, a coastal sea-level station at Hanasaki, Japan, 600 km from the epicenter, recorded a tsunami wave amplitude of 10. The Solomon Islands Tsunami of 6 February 2013 in the Santa Cruz Islands: Field Survey and Modeling Science.gov (United States) Fritz, Hermann M.; Papantoniou, Antonios; Biukoto, Litea; Albert, Gilly; Wei, Yong 2014-05-01 On February 6, 2013 at 01:12:27 UTC (local time: UTC+11), a magnitude Mw 8.0 earthquake occurred 70 km to the west of Ndendo Island (Santa Cruz Island) in the Solomon Islands. The under-thrusting earthquake near a 90° bend, where the Australian plate subducts beneath the Pacific plate generated a locally focused tsunami in the Coral Sea and the South Pacific Ocean. The tsunami claimed the lives of 10 people and injured 15, destroyed 588 houses and partially damaged 478 houses, affecting 4,509 people in 1,066 households corresponding to an estimated 37% of the population of Santa Cruz Island. A multi-disciplinary international tsunami survey team (ITST) was deployed within days of the event to document flow depths, runup heights, inundation distances, sediment and coral boulder depositions, land level changes, damage patterns at various scales, performance of the man-made infrastructure and impact on the natural environment. 
The 19 to 23 February 2013 ITST covered 30 locations on 4 Islands: Ndendo (Santa Cruz), Tomotu Noi (Lord Howe), Nea Tomotu (Trevanion, Malo) and Tinakula. The reconnaissance completely circling Ndendo and Tinakula logged 240 km by small boat and additionally covered 20 km of Ndendo's hard hit western coastline by vehicle. The collected survey data includes more than 80 tsunami runup and flow depth measurements. The tsunami impact peaked at Manoputi on Ndendo's densely populated west coast with maximum tsunami height exceeding 11 m and local flow depths above ground exceeding 7 m. A fast tide-like positive amplitude of 1 m was recorded at Lata wharf inside Graciosa Bay on Ndendo Island and misleadingly reported in the media as representative tsunami height. The stark contrast between the field observations on exposed coastlines and the Lata tide gauge recording highlights the importance of rapid tsunami reconnaissance surveys. Inundation distance and damage more than 500 m inland were recorded at Lata airport on Ndendo Island. Landslides were 11. Hydraulic experiment on formation mechanism of tsunami deposit and verification of sediment transport model for tsunamis Science.gov (United States) Yamamoto, A.; Takahashi, T.; Harada, K.; Sakuraba, M.; Nojima, K. 2017-12-01 An underestimation of the 2011 Tohoku tsunami caused serious damage in coastal area. Reconsideration for tsunami estimation needs knowledge of paleo tsunamis. The historical records of giant tsunamis are limited, because they had occurred infrequently. Tsunami deposits may include many of tsunami records and are expected to analyze paleo tsunamis. However, present research on tsunami deposits are not able to estimate the tsunami source and its magnitude. Furthermore, numerical models of tsunami and its sediment transport are also important. Takahashi et al. (1999) proposed a model of movable bed condition due to tsunamis, although it has some issues. Improvement of the model needs basic data on sediment transport and deposition. This study investigated the formation mechanism of tsunami deposit by hydraulic experiment using a two-dimensional water channel with slope. In a fixed bed condition experiment, velocity, water level and suspended load concentration were measured at many points. In a movable bed condition, effects of sand grains and bore wave on the deposit were examined. Yamamoto et al. (2016) showed deposition range varied with sand grain sizes. In addition, it is revealed that the range fluctuated by number of waves and wave period. The measurements of velocity and water level showed that flow was clearly different near shoreline and in run-up area. Large velocity by return flow was affected the amount of sand deposit near shoreline. When a cutoff wall was installed on the slope, the amount of sand deposit repeatedly increased and decreased. Especially, sand deposit increased where velocity decreased. Takahashi et al. (1999) adapted the proposed model into Kesennuma bay when the 1960 Chilean tsunami arrived, although the amount of sand transportation was underestimated. The cause of the underestimation is inferred that the velocity of this model was underestimated. A relationship between velocity and sediment transport has to be studied in detail, but 12. 
Investigation on potential landslide sources along the Hyblaean-Malta escarpment for the 1693 tsunami in Eastern Sicily (Southern Italy) Science.gov (United States) Zaniboni, Filippo; Pagnoni, Gianluca; Armigliato, Alberto; Tinti, Stefano 2015-04-01 The study of the source of 1693 tsunami in eastern Sicily (South Italy) is still debated in the scientific community. Macroseismic analyses provide inland location for the epicenter of the earthquake, while historical reports describing 1-2 m waves hitting the coast suggest the existence of at least an offshore extension of the fault. Furthermore, an anomalous water elevation was described in Augusta (between Siracusa and Catania), that was interpreted as the manifestation of a local submarine landslide. The presence of the steep Hyblaean-Malta escarpment, that runs parallel to the eastern coast of Sicily at a short distance from the shoreline and is cut by several canyons and scars, corroborates the hypothesis of a landslide occurrence, though no clear evidence has been found yet. This research, realized in the frame of the project ASTARTE (Assessment, Strategy And Risk Reduction for Tsunamis in Europe - FP7-ENV2013 6.4-3, Grant 603839), aims at assessing the effect of landslide-generated tsunamis on the coastal stretch around Augusta considering different scenarios of collapsing masses along the Hyblaean-Malta escarpment. The slide dynamics is computed by means of the numerical code UBO-BLOCK1 (developed by the University of Bologna Tsunami Research Team), and the corresponding tsunami is simulated via the code UBO-TSUFD. The sliding bodies are placed in different positions in order to assess which of them could produce significant effects on the town of Augusta, providing then clues on the possible source area for the hypothesized slide related to the 1693 tsunami. The sensitivity analysis shows the spatial dependence of the coastal tsunami height on the source volume, position, distance from the coast, and on other parameters. 13. Research for developing precise tsunami evaluation methods. Probabilistic tsunami hazard analysis/numerical simulation method with dispersion and wave breaking International Nuclear Information System (INIS) 2007-01-01 The present report introduces main results of investigations on precise tsunami evaluation methods, which were carried out from the viewpoint of safety evaluation for nuclear power facilities and deliberated by the Tsunami Evaluation Subcommittee. A framework for the probabilistic tsunami hazard analysis (PTHA) based on logic tree is proposed and calculation on the Pacific side of northeastern Japan is performed as a case study. Tsunami motions with dispersion and wave breaking were investigated both experimentally and numerically. The numerical simulation method is verified for its practicability by applying to a historical tsunami. Tsunami force is also investigated and formulae of tsunami pressure acting on breakwaters and on building due to inundating tsunami are proposed. (author) 14. Coastal Impacts of the March 11th Tohoku, Japan Tsunami in the Galapagos Islands Science.gov (United States) Lynett, Patrick; Weiss, Robert; Renteria, Willington; De La Torre Morales, Giorgio; Son, Sangyoung; Arcos, Maria Elizabeth Martin; MacInnes, Breanyn Tiel 2013-06-01 On March 11, 2011 at 5:46:23 UTC (March 10 11:46:23 PM Galapagos Local Time), the Mw 9.0 Great East Japan Earthquake occurred near the Tohoku region off the east coast of Japan, spawning a Pacific-wide tsunami. 
Approximately 12,000 km away, the Galapagos Islands experienced moderate tsunami impacts, including flooding, structural damage, and strong currents. In this paper, we present observations and measurements of the tsunami effects in the Galapagos, focusing on the four largest islands in the archipelago: (from west to east) Isabela, Santiago, Santa Cruz, and San Cristobal. Access to the tsunami-affected areas was one of the largest challenges of the field survey. Aside from approximately ten sandy beaches open to tourists, all other shoreline locations are restricted to anyone without a research permit; open cooperation with the Galapagos National Park provided the survey team complete access to the Islands' coastlines. Survey locations were guided by numerical simulations of the tsunami performed prior to the field work. This numerical guidance accurately predicted the regions of highest impact, as well as regions of relatively low impact. Tide-corrected maximum tsunami heights were generally in the range of 3-4 m, with the highest runup of 6 m measured in a small pocket beach on Isla Isabela. Puerto Ayora, on Santa Cruz Island, the largest harbor in the Galapagos, experienced significant flooding and damage to structures located at the shoreline. A current meter moored inside the harbor recorded relatively weak tsunami currents of less than 0.3 m/s (0.6 knot) during the event. Comparisons with detailed numerical simulations suggest that these low current speed observations are most likely the result of data averaging at 20-min intervals and that maximum instantaneous current speeds were considerably larger. Currents in the Canal de Itabaca, a natural waterway between Santa Cruz Island and a smaller island offshore, were strong enough to displace multiple 5 15. A Tsunami PSA for Nuclear Power Plants in Korea International Nuclear Information System (INIS) Kim, Min Kyu; Choi, In Kil; Park, Jin Hee; Seo, Kyung Suk; Seo, Jeong Moon; Yang, Joon Eon 2010-06-01 To evaluate the safety of nuclear power plants (NPPs) against tsunami events, a probabilistic safety assessment (PSA) method was applied in this study. First, an empirical tsunami hazard analysis was performed to evaluate the tsunami return period. A procedure for a tsunami fragility methodology was established, and target equipment and structures for the tsunami hazard assessment were selected. Several fragility calculations were performed for equipment in the nuclear power plant, and an accident scenario for a tsunami event at an NPP was presented. Finally, a system analysis was performed for the tsunami event to evaluate the CDF of the Ulchin 5 and 6 NPP site. As a result, in the case of a tsunami event, functional failure mostly governs the total failure probability of the facilities at the NPP site 16. Identification and characterization of tsunami deposits off southeast ... Institute of Environmental Geosciences, Department of Earth and Environmental Sciences, Pukyong National University ... challenging topic to be developed in studies on tsunami hazard assessment. Two core ...
A tsunami is one of the most terrifying natural hazards .... identify tsunami deposits in a beach environment. 17. Mathematical Modelling of Tsunami Propagation | Eze | Journal of ... African Journals Online (AJOL) The generation of tsunamis with the help of a simple dislocation model of an earthquake and their propagation in the basin are discussed. In this study, we examined the formation of a tsunami wave from an initial sea surface displacement similar to those obtained from earthquakes that have generated tsunami waves and ... 18. Open-Ocean and Coastal Properties of Recent Major Tsunamis Science.gov (United States) Rabinovich, A.; Thomson, R.; Zaytsev, O. 2017-12-01 The properties of six major tsunamis during the period 2009-2015 (2009 Samoa; 2010 Chile; 2011 Tohoku; 2012 Haida Gwaii; 2014 and 2015 Chile) were thoroughly examined using coastal data from British Columbia, the U.S. West Coast and Mexico, and offshore open-ocean DART and NEPTUNE stations. Based on joint spectral analyses of the tsunamis and background noise, we have developed a method to suppress the influence of local topography and to use coastal observations to determine the underlying spectra of tsunami waves in the deep ocean. The "reconstructed" open-ocean tsunami spectra were found to be in close agreement with the actual tsunami spectra evaluated from the analysis of directly measured open-ocean tsunami records. We have further used the spectral estimates to parameterize tsunamis based on their integral open-ocean spectral characteristics. Three key parameters are introduced to describe individual tsunami events: (1) Integral open-ocean energy; (2) Amplification factor (increase of the mean coastal tsunami variance relative to the open-ocean variance); and (3) Tsunami colour, the frequency composition of the open-ocean tsunami waves. In particular, we found that the strongest tsunamis, associated with large source areas (the 2010 Chile and 2011 Tohoku) are "reddish" (indicating the dominance of low-frequency motions), while small-source events (the 2009 Samoa and 2012 Haida Gwaii) are "bluish" (indicating strong prevalence of high-frequency motions).
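The "Open-Ocean and Coastal Properties of Recent Major Tsunamis" entry above parameterizes events by their spectra and by an amplification factor defined as the coastal tsunami variance relative to the open-ocean variance. A hedged sketch of those two computations on synthetic, de-tided sea-level records, assuming numpy and scipy are available (this is not the authors' processing chain, which also corrects for local topography and background noise):

import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
dt = 60.0                              # 1-minute sampling interval (s)
t = np.arange(0, 24 * 3600, dt)

# Synthetic de-tided records: a 30-minute-period wave train plus noise,
# amplified at the coast relative to the open ocean.
open_ocean = 0.05 * np.sin(2 * np.pi * t / 1800) + 0.01 * rng.standard_normal(t.size)
coastal    = 0.30 * np.sin(2 * np.pi * t / 1800) + 0.05 * rng.standard_normal(t.size)

# Tsunami spectra (power spectral density) via Welch's method.
f, p_ocean = welch(open_ocean, fs=1.0 / dt, nperseg=512)
f, p_coast = welch(coastal, fs=1.0 / dt, nperseg=512)

# Amplification factor: coastal tsunami variance over open-ocean variance.
amplification = np.var(coastal) / np.var(open_ocean)
print(f"amplification factor ~ {amplification:.1f}")

# A crude "colour" proxy: fraction of open-ocean spectral power at periods
# longer than 10 minutes (larger values correspond to "reddish" events).
low_freq = f < 1.0 / 600.0
print("low-frequency power fraction:", p_ocean[low_freq].sum() / p_ocean.sum())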
19. Tsunami Ionospheric warning and Ionospheric seismology Science.gov (United States) Lognonne, Philippe; Rolland, Lucie; Rakoto, Virgile; Coisson, Pierdavide; Occhipinti, Giovanni; Larmat, Carene; Walwer, Damien; Astafyeva, Elvira; Hebert, Helene; Okal, Emile; Makela, Jonathan 2014-05-01 The last decade demonstrated that seismic waves and tsunamis are coupled to the ionosphere. Observations of Total Electron Content (TEC) and airglow perturbations of unique quality and amplitude were made during the giant 2011 Tohoku, Japan earthquake, and observations of much smaller tsunamis, down to a few cm of sea uplift, are now routinely made, including for the Kuril 2006, Samoa 2009, Chile 2010 and Haida Gwaii 2012 tsunamis. This new branch of seismology is now mature enough to tackle the new challenge associated with the inversion of these data, with the goal of providing, from these data, maps or profiles of the Earth's surface vertical displacement (and therefore crucial information for tsunami warning systems), or of inverting, with ground and ionospheric data sets, the various parameters (atmospheric sound speed, viscosity, collision frequencies) controlling the coupling between the surface, the lower atmosphere and the ionosphere. We first present the state of the art in the modeling of the tsunami-atmospheric coupling, including in terms of slight perturbations in the tsunami phase and group velocity and the dependence of the coupling strength on local time, ocean depth and season. We then show the confrontation of modelled signals with observations. For tsunamis, this is done with the different types of measurements that have proven ionospheric tsunami detection over the last 5 years (ground and space GPS, airglow), while we focus on GPS and GOCE observations for seismic waves. These observation systems allowed us to track the propagation of the signal from the ground (with GPS and seismometers) to the neutral atmosphere (with infrasound sensors and GOCE drag measurements) to the ionosphere (with GPS TEC and airglow among other ionospheric sounding techniques). Modelling with different techniques (normal modes, spectral element methods, finite differences) is used and shown. While the fits of the waveforms are generally very good, we analyse the differences and draw directions for future 20. Mapping of Earthquake Hazard and Tsunami Potential in Bali Based on Seismicity Values [Pemetaan Bahaya Gempa Bumi dan Potensi Tsunami di Bali Berdasarkan Nilai Seismisitas] Directory of Open Access Journals (Sweden) 2017-02-01 Full Text Available Bali is one of the areas prone to earthquakes and tsunamis because it lies at the junction of two plates: the Eurasian plate and the Indo-Australian plate, which meet to the south of Bali, with back-arc thrust zones located to the north of Bali. Research is needed on the potential dangers of earthquakes and tsunamis in Bali based on seismicity, as interpreted through the b and a values. This study uses earthquake data within the coordinates 6°-11° S and 114°-116° E, comprising 339 events processed using ZMAP, yielding a b value of 1.57 ± 0.008, an a value of 10.6 and a maximum magnitude of 7.1 Mw. From the mapping of the b and a values, the areas with the highest values lie in the sea to the south of Bali and from Karangasem and Buleleng towards the northern region of Bali. Furthermore, tsunami mapping for Bali using the TOAST application identified Kuta Beach, East Buleleng and Karangasem as tsunami-prone areas.
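The Bali seismicity-mapping entry above derives b and a values from an earthquake catalogue processed with ZMAP. A common way to estimate the b value is the Aki (1965) maximum-likelihood estimator; the sketch below applies it to a synthetic catalogue and is only an illustration of that estimator, not the ZMAP workflow used in the study:

import numpy as np

def b_value_aki(magnitudes, mc):
    """Aki (1965) maximum-likelihood b value for events with M >= mc.

    For catalogues binned in magnitude steps dm, mc is often replaced by
    mc - dm/2 (Utsu's correction); that refinement is omitted here.
    """
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= mc]
    b = np.log10(np.e) / (m.mean() - mc)
    return b, b / np.sqrt(m.size), m.size   # estimate, crude std. error, count

# Synthetic Gutenberg-Richter catalogue with a true b value of 1.0: magnitudes
# above the completeness level follow an exponential law with rate b*ln(10).
rng = np.random.default_rng(2)
mags = 4.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=1000)

b, sigma, n = b_value_aki(mags, mc=4.0)
print(f"b = {b:.2f} +/- {sigma:.2f} from {n} events")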
1. Tsunamis caused by submarine slope failures along western Great Bahama Bank. Science.gov (United States) Schnyder, Jara S D; Eberli, Gregor P; Kirby, James T; Shi, Fengyan; Tehranirad, Babak; Mulder, Thierry; Ducassou, Emmanuelle; Hebbeln, Dierk; Wintersteller, Paul 2016-11-04 Submarine slope failures are a likely cause for tsunami generation along the East Coast of the United States. Among potential source areas for such tsunamis are submarine landslides and margin collapses of Bahamian platforms. Numerical models of past events, which have been identified using high-resolution multibeam bathymetric data, reveal possible tsunami impact on Bimini, the Florida Keys, and northern Cuba. Tsunamis caused by slope failures with a terminal landslide velocity of 20 m s-1 will either dissipate while traveling through the Straits of Florida, or generate a maximum wave of 1.5 m at the Florida coast. Modeling a worst-case scenario with a calculated terminal landslide velocity generates a wave of 4.5 m height. The modeled margin collapse in southwestern Great Bahama Bank potentially has a high impact on northern Cuba, with wave heights between 3.3 and 9.5 m depending on the collapse velocity. The short distance and travel time from the source areas to densely populated coastal areas would make the Florida Keys and Miami vulnerable to such low-probability but high-impact events. 2. Can Asteroid Airbursts Cause Dangerous Tsunami? Energy Technology Data Exchange (ETDEWEB) Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)] 2015-10-01 I have performed a series of high-resolution hydrocode simulations to generate “source functions” for tsunami simulations as part of a proof-of-principle effort to determine whether or not the downward momentum from an asteroid airburst can couple energy into a dangerous tsunami in deep water. My new CTH simulations show enhanced momentum multiplication relative to a nuclear explosion of the same yield. Extensive sensitivity and convergence analyses demonstrate that results are robust and repeatable for simulations with sufficiently high resolution using adaptive mesh refinement. I have provided surface overpressure and wind velocity fields to tsunami modelers to use as time-dependent boundary conditions and to test the hypothesis that this mechanism can enhance the strength of the resulting shallow-water wave. The enhanced momentum result suggests that coupling from an over-water plume-forming airburst could be a more efficient tsunami source mechanism than a collapsing impact cavity or direct air blast alone, but not necessarily due to the originally-proposed mechanism. This result has significant implications for asteroid impact risk assessment, and airburst-generated tsunami will be the focus of a NASA-sponsored workshop at the Ames Research Center next summer, with follow-on funding expected. 3. Tsunami-tendenko and morality in disasters. Science.gov (United States) Kodama, Satoshi 2015-05-01 Disaster planning challenges our morality. Everyday rules of action may need to be suspended during large-scale disasters in favour of maxims that may make prudential or practical sense and may even be morally preferable but emotionally hard to accept, such as tsunami-tendenko. This maxim dictates that the individual not stay and help others but run and preserve his or her life instead. Tsunami-tendenko became well known after the great East Japan earthquake on 11 March 2011, when almost all the elementary and junior high school students in one city survived the tsunami because they acted on this maxim that had been taught for several years. While tsunami-tendenko has been praised, two criticisms of it merit careful consideration: one, that the maxim is selfish and immoral; and two, that it goes against the natural tendency to try to save others in dire need. In this paper, I will explain the concept of tsunami-tendenko and then respond to these criticisms. Such ethical analysis is essential for dispelling confusion and doubts about evacuation policies in a disaster. 4. Using Multi-Scenario Tsunami Modelling Results combined with Probabilistic Analyses to provide Hazard Information for the South-West Coast of Indonesia Science.gov (United States) Zosseder, K.; Post, J.; Steinmetz, T.; Wegscheider, S.; Strunz, G. 2009-04-01 Indonesia is located at one of the most active geological subduction zones in the world.
Following the most recent seaquakes and their subsequent tsunamis in December 2004 and July 2006 it is expected that also in the near future tsunamis are likely to occur due to increased tectonic tensions leading to abrupt vertical seafloor alterations after a century of relative tectonic silence. To face this devastating threat tsunami hazard maps are very important as base for evacuation planning and mitigation strategies. In terms of a tsunami impact the hazard assessment is mostly covered by numerical modelling because the model results normally offer the most precise database for a hazard analysis as they include spatially distributed data and their influence to the hydraulic dynamics. Generally a model result gives a probability for the intensity distribution of a tsunami at the coast (or run up) and the spatial distribution of the maximum inundation area depending on the location and magnitude of the tsunami source used. The boundary condition of the source used for the model is mostly chosen by a worst case approach. Hence the location and magnitude which are likely to occur and which are assumed to generate the worst impact are used to predict the impact at a specific area. But for a tsunami hazard assessment covering a large coastal area, as it is demanded in the GITEWS (German Indonesian Tsunami Early Warning System) project in which the present work is embedded, this approach is not practicable because a lot of tsunami sources can cause an impact at the coast and must be considered. Thus a multi-scenario tsunami model approach is developed to provide a reliable hazard assessment covering large areas. For the Indonesian Early Warning System many tsunami scenarios were modelled by the Alfred Wegener Institute (AWI) at different probable tsunami sources and with different magnitudes along the Sunda Trench. Every modelled scenario delivers the spatial distribution of 5. Public Perceptions of Tsunamis and the NOAA TsunamiReady Program in Los Angeles Science.gov (United States) Rosati, A. 2010-12-01 After the devastating December 2004 Indian Ocean Tsunami, California and other coastal states began installing "Tsunami Warning Zone" and "Evacuation Route" signs at beaches and major access roads. The geography of the Los Angeles area may not be conducive to signage alone for communication of the tsunami risk and safety precautions. Over a year after installation, most people surveyed did not know about or recognize the tsunami signs. More alarming is that many did not believe a tsunami could occur in the area even though earthquake generated waves have reached nearby beaches as recently as September 2009! UPDATE: FEB. 2010. Fifty two percent of the 147 people surveyed did not believe they would survive a natural disaster in Los Angeles. Given the unique geography of Los Angeles, how can the city and county improve the mental health of its citizens before and after a natural disaster? This poster begins to address the issues of community self-efficacy and resiliency in the face of tsunamis. Of note for future research, the data from this survey showed that most people believed climate change would increase the occurrence of tsunamis. Also, the public understanding of water inundation was disturbingly low. As scientists, it is important to understand the big picture of our research - how it is ultimately communicated, understood, and used by the public. 6. 
NUMERICAL MODELING OF THE GLOBAL TSUNAMI: Indonesian Tsunami of 26 December 2004 Directory of Open Access Journals (Sweden) Zygmunt Kowalik 2005-01-01 Full Text Available A new model for the global tsunami computation is constructed. It includes a high order of approximation for the spatial derivatives. The boundary condition at the shoreline is controlled by the total depth and can be set either to runup or to zero normal velocity. This model, with a spatial resolution of one minute, is applied to the tsunami of 26 December 2004 in the World Ocean from 80°S to 69°N. Because the computational domain includes close to 200 million grid points, a parallel version of the code was developed and run on a supercomputer. The high spatial resolution of one minute produces very small numerical dispersion even when tsunami waves travel over large distances. Model results for the Indonesian tsunami show that the tsunami traveled to every location of the World Ocean. In the Indian Ocean the tsunami properties are related to the source function, i.e., to the magnitude of the bottom displacement and directional properties of the source. In the Southern Ocean surrounding Antarctica, in the Pacific, and especially in the Atlantic, tsunami waves propagate over large distances by energy ducting over oceanic ridges. Tsunami energy is concentrated by long wave trapping over the oceanic ridges. Our computations show the Coriolis force plays a noticeable but secondary role in the trapping. Travel times obtained from computations as arrival of the first significant wave show a clear and consistent pattern only in the region of high amplitude and in simply connected domains. The tsunami traveled from Indonesia, around New Zealand, and into the Pacific Ocean. The path through the deep ocean to North America carried minuscule energy, while the stronger signal traveled a much longer distance via South Pacific ridges. The time difference between the first signal and later signals strong enough to be recorded at North Pacific locations was several hours. 7. Tsunami forecast by joint inversion of real-time tsunami waveforms and seismic or GPS data: application to the Tohoku 2011 tsunami Science.gov (United States) Yong, Wei; Newman, Andrew V.; Hayes, Gavin P.; Titov, Vasily V.; Tang, Liujuan 2014-01-01 Correctly characterizing tsunami source generation is the most critical component of modern tsunami forecasting. Although difficult to quantify directly, a tsunami source can be modeled via different methods using a variety of measurements from deep-ocean tsunameters, seismometers, GPS, and other advanced instruments, some of which are available in or near real time. Here we assess the performance of different source models for the destructive 11 March 2011 Japan tsunami using model–data comparison for the generation, propagation, and inundation in the near field of Japan. This comparative study of tsunami source models addresses the advantages and limitations of different real-time measurements with potential use in early tsunami warning in the near and far field. The study highlights the critical role of deep-ocean tsunami measurements and rapid validation of the approximate tsunami source for high-quality forecasting. We show that these tsunami measurements are compatible with other real-time geodetic data, and may provide more insightful understanding of tsunami generation from earthquakes, as well as from nonseismic processes such as submarine landslide failures.
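The joint-inversion entry above ("Tsunami forecast by joint inversion of real-time tsunami waveforms and seismic or GPS data") rests on expressing an observed record as a linear combination of precomputed unit-source waveforms and solving for the source weights. A hedged sketch of that linear step alone, with synthetic Green's functions, no seismic or GPS terms, and no positivity or smoothing constraints:

import numpy as np

rng = np.random.default_rng(3)
n_samples, n_sources = 720, 6          # e.g. 1 h of 5 s samples, 6 unit sources

# Hypothetical precomputed unit-source waveforms at one deep-ocean gauge:
# damped wave packets with staggered arrival times, one column per source.
t = np.arange(n_samples) * 5.0
G = np.column_stack([
    np.exp(-((t - 900.0 - 300.0 * k) / 600.0) ** 2) * np.sin(2 * np.pi * t / 900.0)
    for k in range(n_sources)
])

# "True" source weights and a noisy synthetic observation.
w_true = np.array([0.0, 2.5, 1.0, 0.0, 0.3, 0.0])
d_obs = G @ w_true + 0.02 * rng.standard_normal(n_samples)

# Ordinary least squares for the source weights.
w_est, *_ = np.linalg.lstsq(G, d_obs, rcond=None)
print("estimated weights:", np.round(w_est, 2))
# A forecast would then propagate the weighted sum of the unit-source
# solutions to the coast; operational systems add regularisation and
# non-negativity constraints, which are omitted here.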
8. Tsunami Evidence in South Coast Java, Case Study: Tsunami Deposit along South Coast of Cilacap Science.gov (United States) Rizal, Yan; Aswan; Zaim, Yahdi; Dwijo Santoso, Wahyu; Rochim, Nur; Daryono; Dewi Anugrah, Suci; Wijayanto; Gunawan, Indra; Yatimantoro, Tatok; Hidayanti; Herdiyani Rahayu, Resti; Priyobudi 2017-06-01 The Cilacap area is situated on the coast of southern Java and was directly affected by the tsunami hazard in 2006. That event was triggered by active subduction in the Java Trench, which has been active for a long time. To detect tsunamis and active tectonics in southern Java, a paleo-tsunami study was performed targeting paleo-tsunami deposits older than fifty years. During 2011-2016, 16 suspected paleo-tsunami locations were visited and test pits were dug to obtain the characteristics and stratigraphy of the paleo-tsunami layers. Paleo-tsunami layers were identified by the presence of light sand in the upper part of a paleosol, liquefied fine-grained sandstone, and many rip-up clasts of mudstone. Systematic samples were taken and analysed (micro-fauna, grain-size and dating analyses). The micro-fauna results show that the paleo-tsunami layers consist of benthonic foraminifera assemblages from different bathymetries mixed in one layer. Moreover, the grain-size data show a random grain distribution, characteristic of a turbulent, strong-wave deposit. Paleo-tsunami layers in the Cilacap area are correlated using paleosols as markers. There are three paleo-tsunami layers, and their distribution can be identified as PS-A, PS-B and PS-C. The samples taken in the Glempang Pasir layer were dated using the Pb - Zn (Lead-Zinc) method. The results of the Pb - Zn (Lead-Zinc) dating show that PS-A was deposited 139 years ago, PS-B 21 years ago, and PS-C 10 years ago. This indicates that PS-A corresponds to the 1883 earthquake activity, while PS-B formed in the 1982 earthquake and PS-C was formed by the 2006 earthquake. In ongoing research, older paleo-tsunami layers were examined in Gua Nagaraja, close to the Selok location, and six suspected paleo-tsunami layers were found showing characteristics similar to the layers from the other locations. The three deeper layers are presumably older than those at the other locations in Cilacap. 9. Holocene Tsunami Deposits From Large Tsunamis Along the Kuril Subduction Zone, Northeast Japan Science.gov (United States) Nanayama, F.; Furukawa, R.; Satake, K.; Soeda, Y.; Shigeno, K. 2003-12-01 Holocene tsunami deposits in eastern Hokkaido between Nemuro and Tokachi show that the Kuril subduction zone repeatedly produced earthquakes and tsunamis larger than those recorded in this region since AD 1804 (Nanayama et al., Nature, 424, 660-663, 2003). Twenty-two postulated tsunami sand layers from the past 9500 years are preserved on the lake bottom near Kushiro City, and about ten postulated tsunami sand layers from the past 3000 years are preserved in peat layers on the coastal marsh of Kiritappu. We dated these ten tsunami deposits (named Ts1 to Ts10 from shallower to deeper) in peat layers by radiocarbon and tephrochronology, correlated them with historical earthquakes and tsunamis, and surveyed their spatial distribution to estimate the tsunamis' inland inundation limits. Ts10 and Ts9 are under regional tephra Ta-c2 (ca. 2.5 ka) and represent prehistorical events. Ts8 to Ts5 are between two regional tephra layers Ta-c2 and B-Tm (ca. 9th century). In particular, Ts5 is found just below B-Tm, so it is dated 9th century (Heian era).
Ts4 is dated ca. 13th century (Kamakura era), while Ts3, found just below Us-b and Ta-b (AD 1663-1667), is dated 17th century (Edo era). Ts2 is dated 19th century (Edo era) and may correspond to the AD 1843 Tempo Tokachi-oki earthquake (Mt 8.0) recorded in a historical document, Nikkanki, of Kokutai-ji temple at Akkeshi. Ts1 is inferred 20th century and may correspond to the tsunami from the AD 1960 Chilean earthquake (M 9.5) or the AD 1952 Tokachi-oki earthquake (Mt 8.2). Our detailed surveys indicate that Ts3 and Ts4 can be traced more than 3 km from the present coastline in Kiritappu marsh, much longer than the limits (< 1 km) of recent deposits Ts1 and Ts2 or the documented inundation of the 19th and 20th century tsunamis. The recurrence intervals of great tsunami inundation are about 400 to 500 years, longer than that of typical interplate earthquakes along the Kuril subduction zone. The longer interval and the apparent large tsunami 10. Changes in Tsunami Risk Perception in Northern Chile After the April 1 2014 Tsunami Science.gov (United States) Carvalho, L.; Lagos, M. 2016-12-01 11. New Perspective of Tsunami Deposit Investigations: Insight from the 1755 Lisbon Tsunami in Martinique, Lesser Antilles. Science.gov (United States) Roger, J.; Clouard, V.; Moizan, E. 2014-12-01 The recent devastating tsunamis that have occurred during the last decades have highlighted the essential necessity to deploy operational warning systems and educate coastal populations. This could not be prepared correctly without a minimum knowledge of the tsunami history. That is the case of the Lesser Antilles islands, where a few handfuls of tsunamis have been reported over the past 5 centuries, some of them leading to notable destruction and inundation. But the lack of accurate details for most of the historical tsunamis, and the limited period during which written information can be found, represents an important problem for tsunami hazard assessment in this region. Thus, it is of major necessity to try to find other evidence of past tsunamis by looking for sedimentary deposits. Unfortunately, tropical island environments do not seem to be the best places to keep such deposits buried. In fact, heavy rainfall, storms, and all other phenomena leading to coastal erosion, together with human activities such as intensive sugarcane cultivation in coastal flat lands, could cause the loss of potential tsunami deposits. Many places have been carefully investigated within the Lesser Antilles (from Saint Lucia to the British Virgin Islands) over the last 3 years and nothing convincing has been found. That is why, when archaeological investigations excavated an 8-cm-thick sandy and shelly layer in downtown Fort-de-France (Martinique), wedged between two well-identified layers of human origin (Fig. 1), we found new hope: this sandy layer has been quickly attributed without any doubt to the 1755 tsunami, using on the one hand the information provided by historical reports of the construction sites, and on the other hand numerical modeling of the tsunami (wave heights, velocity fields, etc.) showing the ability of this transoceanic tsunami to wrap around the island after ~7 hours of propagation, enter Fort-de-France's Bay with enough energy to carry sediments, and 12. 
Tsunami Speed Variations in Density-stratified Compressible Global Oceans Science.gov (United States) 2013-12-01 Recent tsunami observations in the deep ocean have accumulated unequivocal evidence that tsunami traveltime delays compared with the linear long-wave tsunami simulations occur during tsunami propagation in the deep ocean. The delay is up to 2% of the tsunami traveltime. Watada et al. [2013] investigated the cause of the delay using the normal mode theory of tsunamis and attributed the delay to the compressibility of seawater, the elasticity of the solid earth, and the gravitational potential change associated with mass motion during the passage of tsunamis. Tsunami speed variations in the deep ocean caused by seawater density stratification is investigated using a newly developed propagator matrix method that is applicable to seawater with depth-variable sound speeds and density gradients. For a 4-km deep ocean, the total tsunami speed reduction is 0.45% compared with incompressible homogeneous seawater; two thirds of the reduction is due to elastic energy stored in the water and one third is due to water density stratification mainly by hydrostatic compression. Tsunami speeds are computed for global ocean density and sound speed profiles and characteristic structures are discussed. Tsunami speed reductions are proportional to ocean depth with small variations, except for in warm Mediterranean seas. The impacts of seawater compressibility and the elasticity effect of the solid earth on tsunami traveltime should be included for precise modeling of trans-oceanic tsunamis. Data locations where a vertical ocean profile deeper than 2500 m is available in World Ocean Atlas 2009. The dark gray area indicates the Pacific Ocean defined in WOA09. a) Tsunami speed variations. Red, gray and black bars represent global, Pacific, and Mediterranean Sea, respectively. b) Regression lines of the tsunami velocity reduction for all oceans. c)Vertical ocean profiles at grid points indicated by the stars in Figure 1. 13. Will oscillating wave surge converters survive tsunamis? Directory of Open Access Journals (Sweden) L. O’Brien 2015-07-01 Full Text Available With an increasing emphasis on renewable energy resources, wave power technology is becoming one of the realistic solutions. However, the 2011 tsunami in Japan was a harsh reminder of the ferocity of the ocean. It is known that tsunamis are nearly undetectable in the open ocean but as the wave approaches the shore its energy is compressed, creating large destructive waves. The question posed here is whether an oscillating wave surge converter (OWSC could withstand the force of an incoming tsunami. Several tools are used to provide an answer: an analytical 3D model developed within the framework of linear theory, a numerical model based on the non-linear shallow water equations and empirical formulas. Numerical results show that run-up and draw-down can be amplified under some circumstances, leading to an OWSC lying on dry ground! 14. Tsunamis: bridging science, engineering and society. Science.gov (United States) Kânoğlu, U; Titov, V; Bernard, E; Synolakis, C 2015-10-28 Tsunamis are high-impact, long-duration disasters that in most cases allow for only minutes of warning before impact. Since the 2004 Boxing Day tsunami, there have been significant advancements in warning methodology, pre-disaster preparedness and basic understanding of related phenomena. 
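To put rough numbers on the "Tsunami Speed Variations in Density-stratified Compressible Global Oceans" entry above: in the shallow-water limit the long-wave speed is c = sqrt(g*H), and a fractional speed reduction r accumulates a traveltime delay of about t*r/(1-r). The path length below is illustrative; 0.45% is the water-column effect quoted for a 4-km-deep ocean, and 2% is the upper bound on the total delay when solid-earth elasticity and gravitational effects are included.

import math

g, depth = 9.81, 4000.0                 # m/s^2, ocean depth in m
c = math.sqrt(g * depth)                # linear long-wave speed, ~198 m/s

path_km = 10_000.0                      # illustrative trans-oceanic path
t_h = path_km * 1e3 / c / 3600.0        # undelayed traveltime in hours

for r in (0.0045, 0.02):                # water-column effect; total upper bound
    delay_min = t_h * 60.0 * r / (1.0 - r)
    print(f"speed reduction {100 * r:.2f}% -> delay ~ {delay_min:.1f} min "
          f"over {path_km:.0f} km ({t_h:.1f} h traveltime)")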
Yet, the trail of destruction of the 2011 Japan tsunami, broadcast live to a stunned world audience, underscored the difficulties of implementing advances in applied hazard mitigation. We describe state-of-the-art methodologies and standards for warnings, summarize recent advances in basic understanding, and identify cross-disciplinary challenges. The stage is set to bridge science, engineering and society to help build up coastal resilience and reduce losses. © 2015 The Author(s). 15. Damages in American Samoa due to the 29 September 2009 Samoa Islands Region Earthquake Tsunami Science.gov (United States) Okumura, Y.; Takahashi, T.; Suzuki, S. 2009-12-01 A large earthquake of Mw 8.0 occurred in the Samoa Islands Region in the early morning of 29 September 2009 (local time). A large tsunami generated by the earthquake hit Samoa, American Samoa and Tonga. In total, 192 people were dead or missing in these three countries (as of 22 October 2009). The authors surveyed Tutuila Island, American Samoa, from 6 to 8 October 2009 with the aim of documenting damage in the disaster. In American Samoa, the death and missing toll was 35. The main findings are as follows. First, casualties were low despite tsunami run-up heights of about 4 to 6 meters and a tsunami arrival time of about 20 minutes; we can suppose that residents evacuated quickly after feeling the shaking. Secondly, houses were severely damaged in some low-elevation coastal villages such as Amanave, Leone, Pago Pago and Tula. Third, a power plant and an airport, which are important infrastructure in the relief and recovery phase, were also severely damaged. The inundation depth at the power plant was 2.31 meters. A daytime blackout was still in effect when we surveyed. On the other hand, the airport was already usable at that time, although it had been closed on the first day of the disaster because of debris carried onto the runway by the tsunami. The inundation depth at the airport fence was measured at 0.7 to 0.8 meters. Other countries in the south-western Pacific region may have power plants or airports at similar risk, so these should be assessed against future tsunami disasters. [Figure captions: Inundated thermal power plant in Pago Pago; debris on the runway at Tafuna Airport (provided by Mr. Chris Soti, DPA).] 16. The elusive AD 1826 tsunami, South Westland, New Zealand International Nuclear Information System (INIS) Goff, J.R.; Wells, A.; Chague-Goff, C.; Nichol, S.L.; Devoy, R.J.N. 2004-01-01 In AD 1826 sealers reported earthquake and tsunami activity in Fiordland, although contemporary or near-contemporary accounts of tsunami inundation at the time are elusive. A detailed analysis of recent sediments from Okarito Lagoon builds on contextual evidence provided by earlier research concerning past tsunami inundation. Sedimentological, geochemical, micropalaeontological and geochronological data are used to determine palaeoenvironments before, during and after what was most probably tsunami inundation in AD 1826. The most compelling chronological control is provided by a young cohort of trees growing on a raised shoreline bench stranded by a drop in the lagoon water level following tsunami inundation. (author). 42 refs., 9 figs., 1 tab 17. Field survey of the 16 September 2015 Chile tsunami Science.gov (United States) Lagos, Marcelo; Fritz, Hermann M. 2016-04-01 On the evening of 16 September 2015, a magnitude Mw 8.3 earthquake occurred off the coast of central Chile's Coquimbo region.
The ensuing tsunami caused significant inundation and damage in the Coquimbo or 4th region and mostly minor effects in the neighbouring 3rd and 5th regions. Fortunately, ancestral knowledge from the past 1922 and 1943 tsunamis in the region, along with the catastrophic 2010 Maule and recent 2014 tsunamis, as well as tsunami education and evacuation exercises, prompted most coastal residents to spontaneously evacuate to high ground after the earthquake. There were a few tsunami victims, while a handful of fatalities were associated with earthquake-induced building collapses and the physical stress of tsunami evacuation. International scientists joined the local effort from September 20 to 26, 2015. The international tsunami survey team (ITST) interviewed numerous eyewitnesses and documented flow depths, runup heights, inundation distances, sediment deposition, damage patterns, performance of the navigation infrastructure and impact on the natural environment. The ITST covered a 500 km stretch of coastline from Caleta Chañaral de Aceituno (28.8° S) south of Huasco down to Llolleo near San Antonio (33.6° S). We surveyed more than 40 locations and recorded more than 100 tsunami and runup heights with differential GPS and integrated laser range finders. The tsunami impact peaked at Caleta Totoral near Punta Aldea, with both tsunami and runup heights exceeding 10 m as surveyed on September 22 and broadcast nationwide that evening. Runup exceeded 10 m at a second uninhabited location some 15 km south of Caleta Totoral. A significant variation in tsunami impact was observed along the coastlines of central Chile at local and regional scales. The tsunami occurred in the evening hours, limiting the availability of eyewitness video footage. Observations from the 2015 Chile tsunami are compared against the 1922, 1943, 2010 and 2014 Chile tsunamis. The 18. Assessment of the safety of Ulchin nuclear power plant in the event of tsunami using parametric study International Nuclear Information System (INIS) Kim, Ji Young; Kang, Keum Seok 2011-01-01 Previous evaluations of the safety of the Ulchin Nuclear Power Plant in the event of a tsunami have the shortcoming of uncertainty in the tsunami sources. To address this uncertainty, maximum and minimum wave heights at the intake of the Ulchin NPP have been estimated through a parametric study, and an assessment of the safety margin for the intake has then been carried out. From the simulation results for the Ulchin NPP site, it can be seen that the coefficient of eddy viscosity considerably affects the wave height inside the breakwater. In addition, the assessment of the safety margin shows that almost all of the intake water pumps have a safety margin of over 2 m, and the Ulchin NPP site seems to be safe in the event of a tsunami according to this parametric study, although some of the CWPs barely have a margin for the minimum wave height 19. Numerical tsunami hazard assessment of the submarine volcano Kick 'em Jenny in high resolution areas Science.gov (United States) Dondin, Frédéric; Dorville, Jean-Francois Marc; Robertson, Richard E. A. 2016-04-01 Landslide-generated tsunamis are infrequent phenomena that can be highly hazardous for populations located in the near-field domain of the source. The Lesser Antilles volcanic arc is a curved 800 km chain of volcanic islands. At least 53 flank collapse episodes have been recognized along the arc. Several of these collapses have been associated with voluminous underwater deposits (volume > 1 km3).
Due to their momentum, these events were likely capable of generating regional tsunamis. However, no clear field evidence of tsunamis associated with these voluminous events has been reported, but the occurrence of such an episode nowadays would certainly have catastrophic consequences. Kick 'em Jenny (KeJ) is the only active submarine volcano of the Lesser Antilles Arc (LAA), with a current edifice volume estimated at 1.5 km3. It is the southernmost edifice of the LAA with recognized associated volcanic landslide deposits. The volcano appears to have undergone three episodes of flank failure. Numerical simulations of one of these episodes, associated with a collapse volume of ca. 4.4 km3 and considering a single-pulse collapse, revealed that this episode would have produced a regional tsunami with an amplitude of 30 m. In the present study we applied a detailed hazard assessment to the KeJ submarine volcano, from its collapse to its wave impact on high-resolution coastal areas of selected islands of the LAA, in order to highlight needs for improved alert systems and risk mitigation. We present the assessment process of tsunami hazard related to shoreline surface elevation (i.e. run-up) and flood dynamics (i.e. duration, height, speed...) at the coasts of LAA islands in the case of a potential flank collapse scenario at KeJ. After quantification of potential initial volumes of collapse material using relative slope instability analysis (RSIA, VolcanoFit 2.0 & SSAP 4.5) based on seven geomechanical models, the tsunami source has been simulated with a St-Venant equations-based code 20. Deterministic tsunami hazard assessment of Sines - Portugal OpenAIRE Wronna, Martin 2015-01-01 Master's thesis in Geographical Sciences, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2015. This work presents a deterministic tsunami hazard approach considering multiple sources for the coastal city of Sines, Portugal. Tsunamis are extreme events, highly energetic but infrequent. They are normally generated by the displacement of a large volume of water, whether by volcanic eruptions, caldera collapse, landsl... 1. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami Science.gov (United States) Horton, B.; Rubin, C. M.; Sieh, K.; Jessica, P.; Daly, P.; Ismail, N.; Parnell, A. C. 2017-12-01 The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here, we identify coastal caves as a new depositional environment for reconstructing tsunami records and present a 5,000 year record of continuous tsunami deposits from a coastal cave in Sumatra, Indonesia, which shows the irregular recurrence of 11 tsunamis between 7,400 and 2,900 years BP. The data demonstrate that the 2004 tsunami was just the latest in a sequence of devastating tsunamis stretching back to at least the early Holocene and suggest a high likelihood for future tsunamis in the Indian Ocean. The sedimentary record in the cave shows that ruptures of the Sunda megathrust vary between large (which generated the 2004 Indian Ocean tsunami) and smaller slip failures.
The chronology of events suggests the recurrence of multiple smaller tsunamis within relatively short time periods, interrupted by long periods of strain accumulation followed by giant tsunamis. The average time period between tsunamis is about 450 years, with intervals ranging from a long, dormant period of over 2,000 years to multiple tsunamis within the span of a century. The very long dormant period suggests that the Sunda megathrust is capable of accumulating large slip deficits between earthquakes. Such a high-slip rupture would produce a substantially larger earthquake than the 2004 event. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami. The remarkable variability of recurrence suggests that regional hazard mitigation plans should be based upon the high likelihood of future destructive tsunami demonstrated by 2. ASSIMILATION OF REAL-TIME DEEP SEA BUOY DATA FOR TSUNAMI FORECASTING ALONG THAILAND’S ANDAMAN COASTLINE Directory of Open Access Journals (Sweden) Seree Supharatid 2008-01-01 Full Text Available The occurrence of the 2004 Indian Ocean tsunami underscored the need for a tsunami early warning system for countries bordering the Indian Ocean, including Thailand. This paper describes the assimilation of real-time deep sea buoy data for tsunami forecasting along Thailand’s Andaman coastline. Firstly, the numerical simulation (by the linear and non-linear shallow water equations) was carried out for hypothetical cases of tsunamigenic earthquakes with epicenters located in the Andaman micro plate. Outputs of the numerical model are the tsunami arrival times and the maximum wave heights that can be expected at 58 selected communities along Thailand's Andaman coastline and at two locations of DART buoys in the Indian Ocean. Secondly, a “neural” network model (GRNN) was developed to access the data from the numerical computations for subsequent construction of a tsunami database that can be displayed on a web-based system. This database can be updated with the integration from two DART buoys and from several GRNN models. 3. Elders recall an earlier tsunami on Indian Ocean shores Science.gov (United States) Kakar, Din Mohammad; Naeem, Ghazala; Usman, Abdullah; Hasan, Haider; Lohdi, Hira; Srinivasalu, Seshachalam; Andrade, Vanessa; Rajendran, C.P.; Naderi Beni, Abdolmajid; Hamzeh, Mohammad Ali; Hoffmann, Goesta; Al Balushi, Noora; Gale, Nora; Kodijat, Ardito; Fritz, Hermann M.; Atwater, Brian F. 2014-01-01 Ten years on, the Indian Ocean tsunami of 26 December 2004 still looms large in efforts to reduce coastal risk. The disaster has spurred worldwide advances in tsunami detection and warning, tsunami-risk assessment, and tsunami awareness [Satake, 2014]. Nearly a lifetime has passed since the northwestern Indian Ocean last produced a devastating tsunami. Documentation of this tsunami, in November 1945, was hindered by international instability in the wake of the Second World War and, in British India, by the approach of independence and partition. The parent earthquake, of magnitude 8.1, was widely recorded, and the tsunami registered on tide gauges, but intelligence reports and newspaper articles say little about inundation limits while permitting a broad range of catalogued death tolls.
What has been established about the 1945 tsunami falls short of what's needed today for ground-truthing inundation models, estimating risk to enlarged populations, and anchoring awareness campaigns in local facts. Recent efforts to reduce coastal risk around the Arabian Sea include a project in which eyewitnesses to the 1945 tsunami were found and interviewed (Fig. 1), and related archives were gathered. Results are being made available through UNESCO's Indian Ocean Tsunami Information Center in hopes of increasing scientific understanding and public awareness of the region's tsunami hazards. 4. Tsunami Early Warning via a Physics-Based Simulation Pipeline Science.gov (United States) Wilson, J. M.; Rundle, J. B.; Donnellan, A.; Ward, S. N.; Komjathy, A. 2017-12-01 Through independent efforts, physics-based simulations of earthquakes, tsunamis, and atmospheric signatures of these phenomenon have been developed. With the goal of producing tsunami forecasts and early warning tools for at-risk regions, we join these three spheres to create a simulation pipeline. The Virtual Quake simulator can produce thousands of years of synthetic seismicity on large, complex fault geometries, as well as the expected surface displacement in tsunamigenic regions. These displacements are used as initial conditions for tsunami simulators, such as Tsunami Squares, to produce catalogs of potential tsunami scenarios with probabilities. Finally, these tsunami scenarios can act as input for simulations of associated ionospheric total electron content, signals which can be detected by GNSS satellites for purposes of early warning in the event of a real tsunami. We present the most recent developments in this project. 5. Tsunamis detection, monitoring, and early-warning technologies CERN Document Server Joseph, Antony 2011-01-01 The devastating impacts of tsunamis have received increased focus since the Indian Ocean tsunami of 2004, the most devastating tsunami in over 400 years of recorded history. This professional reference is the first of its kind: it provides a globally inclusive review of the current state of tsunami detection technology and will be a much-needed resource for oceanographers and marine engineers working to upgrade and integrate their tsunami warning systems. It focuses on the two main tsunami warning systems (TWS): International and Regional. Featured are comparative assessments of detection, monitoring, and real-time reporting technologies. The challenges of detection through remote measuring stations are also addressed, as well as the historical and scientific aspects of tsunamis. 6. Tsunamis generated by long and thin granular landslides in a large flume Science.gov (United States) Miller, Garrett S.; Andy Take, W.; Mulligan, Ryan P.; McDougall, Scott 2017-01-01 In this experimental study, granular material is released down slope to investigate landslide-generated waves. Starting with a known volume and initial position of the landslide source, detailed data are obtained on the velocity and thickness of the granular flow, the shape and location of the submarine landslide deposit, the amplitude and shape of the near-field wave, the far-field wave evolution, and the wave runup elevation on a smooth impermeable slope. The experiments are performed on a 6.7 m long 30° slope on which gravity accelerates the landslides into a 2.1 m wide and 33.0 m long wave flume that terminates with a 27° runup ramp. 
For a fixed landslide volume of 0.34 m3, tests are conducted in a range of still water depths from 0.05 to 0.50 m. Observations from high-speed cameras and measurements from wave probes indicate that the granular landslide moves as a long and thin train of material, and that only a portion of the landslide (termed the "effective mass") is engaged in activating the leading wave. The wave behavior is highly dependent on the water depth relative to the size of the landslide. In deeper water, the near-field wave behaves as a stable solitary-like wave, while in shallower water, the wave behaves as a breaking dissipative bore. Overall, the physical model observations are in good agreement with the results of existing empirical equations when the effective mass is used to predict the maximum near-field wave amplitude, the far-field amplitude, and the runup of tsunamis generated by granular landslides. 7. Approximate maximum parsimony and ancestral maximum likelihood. Science.gov (United States) Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat 2010-01-01 We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP. 8. The tsunami probabilistic risk assessment (PRA). Example of accident sequence analysis of tsunami PRA according to the standard for procedure of tsunami PRA for nuclear power plants International Nuclear Information System (INIS) Ohara, Norihiro; Hasegawa, Keiko; Kuroiwa, Katsuya 2013-01-01 After the Fukushima Daiichi nuclear power plant (NPP) accident, standard for procedure of tsunami PRA for NPP had been established by the Standardization Committee of AESJ. Industry group had been conducting analysis of Tsunami PRA for PWR based on the standard under the cooperation with electric utilities. This article introduced overview of the standard and examples of accident sequence analysis of Tsunami PRA studied by the industry group according to the standard. The standard consisted of (1) investigation of NPP's composition, characteristics and site information, (2) selection of relevant components for Tsunami PRA and initiating events and identification of accident sequence, (3) evaluation of Tsunami hazards, (4) fragility evaluation of building and components and (5) evaluation of accident sequence. Based on the evaluation, countermeasures for further improvement of safety against Tsunami could be identified by the sensitivity analysis. (T. Tanaka) 9. Developing an event-tree probabilistic tsunami inundation model for NE Atlantic coasts: Application to case studies Science.gov (United States) Omira, Rachid; Baptista, Maria Ana; Matias, Luis 2015-04-01 This study constitutes the first assessment of probabilistic tsunami inundation in the NE Atlantic region, using an event-tree approach. It aims to develop a probabilistic tsunami inundation approach for the NE Atlantic coast with an application to two test sites of ASTARTE project, Tangier-Morocco and Sines-Portugal. Only tsunamis of tectonic origin are considered here, taking into account near-, regional- and far-filed sources. 
The multidisciplinary approach, proposed here, consists of an event-tree method that gathers seismic hazard assessment, tsunami numerical modelling, and statistical methods. It also presents a treatment of uncertainties related to source location and tidal stage in order to derive the likelihood of tsunami flood occurrence and the exceedance of a specific near-shore wave height during a given return period. We derive high-resolution probabilistic maximum wave heights and flood distributions for both test sites, Tangier and Sines, considering 100-, 500-, and 1000-year return periods. We find that the probability that a maximum wave height exceeds 1 m somewhere along the Sines coasts reaches about 55% for the 100-year return period, and is up to 100% for the 1000-year return period. Along the Tangier coast, the probability of inundation occurrence (flow depth > 0 m) is up to 45% for the 100-year return period and reaches 96% in some near-shore coastal locations for the 500-year return period. Acknowledgements: This work is funded by project ASTARTE - Assessment, STrategy And Risk Reduction for Tsunamis in Europe. Grant 603839, 7th FP (ENV.2013.6.4-3). 10. Geological effects and implications of the 2010 tsunami along the central coast of Chile Science.gov (United States) Morton, R.A.; Gelfenbaum, G.; Buckley, M.L.; Richmond, B.M. 2011-01-01 Geological effects of the 2010 Chilean tsunami were quantified at five near-field sites along a 200 km segment of coast located between the two zones of predominant fault slip. Field measurements, including topography, flow depths, flow directions, scour depths, and deposit thicknesses, provide insights into the processes and morphological changes associated with tsunami inundation and return flow. The superposition of downed trees recorded multiple strong onshore and alongshore flows that arrived at different times and from different directions. The most likely explanation for the diverse directions and timing of coastal inundation combines (1) variable fault rupture and asymmetrical slip displacement of the seafloor away from the epicenter with (2) resonant amplification of coastal edge waves. Other possible contributing factors include local interaction of incoming flow and return flow and delayed wave reflection by the southern coast of Peru. Coastal embayments amplified the maximum inundation distances at two sites (2.4 and 2.6 km, respectively). Tsunami vertical erosion included scour and planation of the land surface, inundation scour around the bases of trees, and channel incision from return flow. Sheets and wedges of sand and gravel were deposited at all of the sites. Locally derived boulders up to 1 m in diameter were transported as much as 400 m inland and deposited as fields of dispersed clasts. The presence of lobate bedforms at one site indicates that at least some of the late-stage sediment transport was as bed load and not as suspended load. Most of the tsunami deposits were less than 25 cm thick. Exceptions were thick deposits near open-ocean river mouths where sediment supply was abundant. Human alterations of the land surface at most of the sites provided opportunities to examine some tsunami effects that otherwise would not have been possible, including flow histories, boulder dispersion, and vegetation controls on deposit thickness. 11. Geological impacts and implications of the 2010 tsunami along the central coast of Chile Science.gov (United States) Morton, Robert A.; Gelfenbaum, Guy; Buckley, Mark L.; Richmond, Bruce M.
2011-01-01 Geological effects of the 2010 Chilean tsunami were quantified at five near-field sites along a 200 km segment of coast located between the two zones of predominant fault slip. Field measurements, including topography, flow depths, flow directions, scour depths, and deposit thicknesses, provide insights into the processes and morphological changes associated with tsunami inundation and return flow. The superposition of downed trees recorded multiple strong onshore and alongshore flows that arrived at different times and from different directions. The most likely explanation for the diverse directions and timing of coastal inundation combines (1) variable fault rupture and asymmetrical slip displacement of the seafloor away from the epicenter with (2) resonant amplification of coastal edge waves. Other possible contributing factors include local interaction of incoming flow and return flow and delayed wave reflection by the southern coast of Peru. Coastal embayments amplified the maximum inundation distances at two sites (2.4 and 2.6 km, respectively). Tsunami vertical erosion included scour and planation of the land surface, inundation scour around the bases of trees, and channel incision from return flow. Sheets and wedges of sand and gravel were deposited at all of the sites. Locally derived boulders up to 1 m in diameter were transported as much as 400 m inland and deposited as fields of dispersed clasts. The presence of lobate bedforms at one site indicates that at least some of the late-stage sediment transport was as bed load and not as suspended load. Most of the tsunami deposits were less than 25 cm thick. Exceptions were thick deposits near open-ocean river mouths where sediment supply was abundant. Human alterations of the land surface at most of the sites provided opportunities to examine some tsunami effects that otherwise would not have been possible, including flow histories, boulder dispersion, and vegetation controls on deposit thickness. 12. Maximum permissible dose International Nuclear Information System (INIS) Anon. 1979-01-01 This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed 13. Calculation of Tsunami Damage and preparation of Inundation Maps by 2D and 3D numerical modeling in Göcek, Turkey Science.gov (United States) Ozer Sozdinler, C.; Arikawa, T.; Necmioglu, O.; Ozel, N. M. 2016-12-01 The Aegean and its surroundings form the most active part of the Africa-Eurasia collision zone responsible for the high level of seismicity in this region. It constitutes more than 60% of the expected seismicity in Europe up to Mw=8.2 (Moratto et al., 2007; Papazachos, 1990). Shaw and Jackson (2010) argued that the existing system of Hellenic Arc subduction-zone is capable of allowing very large but rare earthquakes on splay faults, such as the one occurred in 365, together with the contribution of small earthquakes. Based on an extensive earthquake generated tsunami scenario database, Necmioğlu and Özel (2015) showed that maximum wave heights in the Eastern Mediterranean for shallow earthquakes defined is >3 m in locations in, around and orthogonal to the Hellenic Arc. 
Considering the seismicity and the tsunami potential in Eastern Mediterranean, the investigation and monitoring of earthquake and tsunami hazard, and the preparation of mitigation strategies and national resilience plans would become inevitable in Turkey. Gocek town, as one of the Tsunami Forecast Points having a unique geography with many small bays and islands and a very popular touristic destination especially for yachtsmen, is selected in this study for the tsunami modeling by using high resolution bathymetric and topographic data with less than 4m grid size. The tsunami analyses are performed by the numerical codes NAMIDANCE (NAMIDANCE,2011) for 2D modeling and STOC-CADMAS (Arikawa,2014) for 3D modeling for the calculations of tsunami hydrodynamic parameters. Froude numbers, as one of the most important indicators for tsunami damage (Ozer, 2012) and the directions of current velocities inside marinas are solved by NAMIDANCE while STOC-CADMAS determines the tsunami pressure and force exerted onto the sea and land structures with 3D and non-hydrostatic approaches. The results are then used to determine the tsunami inundation and structural resilience and establish the tsunami preparedness and 14. Design for tsunami barrier wall based on numerical analyses of tsunami inundation at Shimane Nuclear Power Plant International Nuclear Information System (INIS) Kiyoshige, Naoya; Yoshitsugu, Shinich; Kawahara, Kazufumi; Ookubo, Yoshimi; Nishihata, Takeshi; Ino, Hitoshi; Kotoura, Tsuyoshi 2014-01-01 The conventional tsunami assessment of the active fault beneath the Japan sea in front of the Shimane nuclear power plant and the earthquake feared to happen at the eastern margin of the Japan sea does not expect a huge tsunami as to be assumed on the Pacific sea coast. Hence, the huge tsunami observed at the power plant located near the source of the Tohoku Pacific sea earthquake tsunami whose run-up height reached TP+15m is regarded as the level 2 tsunami for the Shimane nuclear power plant and planned to construct the tsunami barrier walls to endure the supposed level 2 tsunami. In this study, the setting of the Level 2 tsunami by using the numerical analysis based on the non-linear shallow water theory and evaluation for the design tsunami wave pressure exerted on the counter measures by using CADMAS-SURF/3D are discussed. The designed tsunami barrier walls which are suitable to the power plant feasibility and decided from the design tsunami wave pressure distribution based on Tanimoto's formulae and standard earthquake ground motion Ss are also addressed. (author) 15. Highly variable recurrence of tsunamis in the 7,400 years before the 2004 Indian Ocean tsunami. Science.gov (United States) Rubin, Charles M; Horton, Benjamin P; Sieh, Kerry; Pilarczyk, Jessica E; Daly, Patrick; Ismail, Nazli; Parnell, Andrew C 2017-07-19 The devastating 2004 Indian Ocean tsunami caught millions of coastal residents and the scientific community off-guard. Subsequent research in the Indian Ocean basin has identified prehistoric tsunamis, but the timing and recurrence intervals of such events are uncertain. Here we present an extraordinary 7,400 year stratigraphic sequence of prehistoric tsunami deposits from a coastal cave in Aceh, Indonesia. This record demonstrates that at least 11 prehistoric tsunamis struck the Aceh coast between 7,400 and 2,900 years ago. 
The average time period between tsunamis is about 450 years with intervals ranging from a long, dormant period of over 2,000 years, to multiple tsunamis within the span of a century. Although there is evidence that the likelihood of another tsunamigenic earthquake in Aceh province is high, these variable recurrence intervals suggest that long dormant periods may follow Sunda megathrust ruptures as large as that of the 2004 Indian Ocean tsunami. 16. Scientific Animations for Tsunami Hazard Mitigation: The Pacific Tsunami Warning Center's YouTube Channel Science.gov (United States) Becker, N. C.; Wang, D.; Shiro, B.; Ward, B. 2013-12-01 17. The double landslide-induced tsunami Science.gov (United States) Tinti, S.; Armigliat, A.; Manucci, A.; Pagnoni, G.; Tonini, R.; Zaniboni, F.; Maramai, A.; Graziani, L. The 2002 crisis of Stromboli culminated on December 30 in a series of mass failures detached from the Sciara del Fuoco, with two main landslides, one submarine followed about 7 min later by a second subaerial. These landslides caused two distinct tsunamis that were seen by most people in the island as a unique event. The double tsunami was strongly damaging, destroying several houses in the waterfront at Ficogrande, Punta Lena, and Scari localities in the northeastern coast of Stromboli. The waves affected also Panarea and were observed in the northern Sicily coast and even in Campania, but with minor effects. There are no direct instrumental records of these tsunamis. What we know resides on (1) observations and quantification of the impact of the waves on the coast, collected in a number of postevent field surveys; (2) interviews of eyewitnesses and a collection of tsunami images (photos and videos) taken by observers; and (3) on results of numerical simulations. In this paper, we propose a critical reconstruction of the events where all the available pieces of information are recomposed to form a coherent and consistent mosaic. 18. Asteroid-Generated Tsunami and Impact Risk Science.gov (United States) Boslough, M.; Aftosmis, M.; Berger, M. J.; Ezzedine, S. M.; Gisler, G.; Jennings, B.; LeVeque, R. J.; Mathias, D.; McCoy, C.; Robertson, D.; Titov, V. V.; Wheeler, L. 2016-12-01 The justification for planetary defense comes from a cost/benefit analysis, which includes risk assessment. The contribution from ocean impacts and airbursts is difficult to quantify and represents a significant uncertainty in our assessment of the overall risk. Our group is currently working toward improved understanding of impact scenarios that can generate dangerous tsunami. The importance of asteroid-generated tsunami research has increased because a new Science Definition Team, at the behest of NASA's Planetary Defense Coordinating Office, is now updating the results of a 2003 study on which our current planetary defense policy is based Our group was formed to address this question on many fronts, including asteroid entry modeling, tsunami generation and propagation simulations, modeling of coastal run-ups, inundation, and consequences, infrastructure damage estimates, and physics-based probabilistic impact risk assessment. We also organized the Second International Workshop on Asteroid Threat Assessment, focused on asteroid-generated tsunami and associated risk (Aug. 23-24, 2016). We will summarize our progress and present the highlights of our workshop, emphasizing its relevance to earth and planetary science. 
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000. 19. On the solitary wave paradigm for tsunamis DEFF Research Database (Denmark) Madsen, Per A.; Fuhrman, David R.; Schäffer, Hemming Andreas 2008-01-01 Since the 1970s, solitary waves have commonly been used to model tsunamis especially in experimental and mathematical studies. Unfortunately, the link to geophysical scales is not well established, and in this work we question the geophysical relevance of this paradigm. In part 1, we simulate... 20. Tiché tsunami bez hranic Czech Academy of Sciences Publication Activity Database Konečný, Tomáš Roč. 6, č. 24 ( 2008 ), s. 14 ISSN 1801-1446 Institutional research plan: CEZ:AV0Z70280505 Keywords : food crisis Subject RIV: AO - Sociology, Demography http://www.respekt.cz/search.php?f_search_text=tich%E9+tsunami+bez+hranic 1. Tsunami hazard assessment in the coastal area of Rabat and Salé, Morocco Directory of Open Access Journals (Sweden) C. Renou 2011-08-01 Full Text Available In the framework of the three-year SCHEMA European project (www.schemaproject.org, we present a generic methodology developed to produce tsunami building vulnerability and impact maps. We apply this methodology to the Moroccan coast. This study focuses on the Bouregreg Valley which is at the junction between Rabat (administrative capital, and Salé. Both present large populations and new infrastructure development. Using a combination of numerical modelling, field surveys, Earth Observation and GIS data, the risk has been evaluated for this vulnerable area. Two tsunami scenarios were studied to estimate a realistic range of hazards on this coast: a worst-case scenario based on the historical Lisbon earthquake of 1755 and a moderate scenario based on the Horseshoe earthquake of 28 February 1969. For each scenario, numerical models allowed the production of tsunami hazard maps (maximum inundation extent and maximum inundation depths. Moreover, the modelling results of these two scenarios were compared with the historical data available. A companion paper to this article (Atillah et al., 2011 presents the following steps of the methodology, namely the elaboration of building damage maps by crossing layers of building vulnerability and the so-inferred inundation depths. 2. Seismogeodesy for rapid earthquake and tsunami characterization Science.gov (United States) Bock, Y. 2016-12-01 Rapid estimation of earthquake magnitude and fault mechanism is critical for earthquake and tsunami warning systems. Traditionally, the monitoring of earthquakes and tsunamis has been based on seismic networks for estimating earthquake magnitude and slip, and tide gauges and deep-ocean buoys for direct measurement of tsunami waves. These methods are well developed for ocean basin-wide warnings but are not timely enough to protect vulnerable populations and infrastructure from the effects of local tsunamis, where waves may arrive within 15-30 minutes of earthquake onset time. Direct measurements of displacements by GPS networks at subduction zones allow for rapid magnitude and slip estimation in the near-source region, that are not affected by instrumental limitations and magnitude saturation experienced by local seismic networks. However, GPS displacements by themselves are too noisy for strict earthquake early warning (P-wave detection). 
Optimally combining high-rate GPS and seismic data (in particular, accelerometers that do not clip), referred to as seismogeodesy, provides a broadband instrument that does not clip in the near field, is impervious to magnitude saturation, and provides accurate real-time static and dynamic displacements and velocities in real time. Here we describe a NASA-funded effort to integrate GPS and seismogeodetic observations as part of NOAA's Tsunami Warning Centers in Alaska and Hawaii. It consists of a series of plug-in modules that allow for a hierarchy of rapid seismogeodetic products, including automatic P-wave picking, hypocenter estimation, S-wave prediction, magnitude scaling relationships based on P-wave amplitude (Pd) and peak ground displacement (PGD), finite-source CMT solutions and fault slip models as input for tsunami warnings and models. For the NOAA/NASA project, the modules are being integrated into an existing USGS Earthworm environment, currently limited to traditional seismic data. We are focused on a network of 3. Real-time Tsunami Inundation Prediction Using High Performance Computers Science.gov (United States) Oishi, Y.; Imamura, F.; Sugawara, D. 2014-12-01 Recently off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To receive real benefits of these observations, real-time analysis techniques to make an effective use of these data are necessary. A representative study was made by Tsushima et al. (2009) that proposed a method to provide instant tsunami source prediction based on achieving tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational amount is large to solve non-linear shallow water equations for inundation predictions, it has become executable through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids of which resolution range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. Total number of grid points were 13 million, and the time step was 0.1 seconds. Tsunami sources of 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the 4. Data Elevator Energy Technology Data Exchange (ETDEWEB) 2017-04-29 Data Elevator: Efficient Asynchronous Data Movement in Hierarchical Storage Systems Multi-layer storage subsystems, including SSD-based burst buffers and disk-based parallel file systems (PFS), are becoming part of HPC systems. 
However, software for this storage hierarchy is still in its infancy. Applications may have to explicitly move data among the storage layers. We propose Data Elevator for transparently and efficiently moving data between a burst buffer and a PFS. Users specify the final destination for their data, typically on the PFS; Data Elevator intercepts the I/O calls, stages data on the burst buffer, and then asynchronously transfers the data to its final destination in the background. This system allows extensive optimizations, such as overlapping read and write operations, choosing I/O modes, and aligning buffer boundaries. In tests with large-scale scientific applications, Data Elevator is as much as 4.2X faster than Cray DataWarp, the state-of-the-art software for burst buffers, and 4X faster than directly writing to the PFS. The Data Elevator library uses HDF5's Virtual Object Layer (VOL) for intercepting parallel I/O calls that write data to the PFS. The intercepted calls are redirected to the Data Elevator, which provides a handle to write the file to a faster, intermediate burst buffer system. Once the application finishes writing the data to the burst buffer, the Data Elevator job uses HDF5 to move the data to its final destination in an asynchronous manner. Hence, using the Data Elevator library is currently useful for applications that call HDF5 for writing data files. Also, the Data Elevator depends on the HDF5 VOL functionality. 5. Mexican Earthquakes and Tsunamis Catalog Reviewed Science.gov (United States) Ramirez-Herrera, M. T.; Castillo-Aja, R. 2015-12-01 Today the availability of information on the internet makes online catalogs very easy to access by both scholars and the public in general. The catalog in the "Significant Earthquake Database", managed by the National Center for Environmental Information (NCEI, formerly NCDC), NOAA, allows access by deploying tabular and cartographic data related to earthquakes and tsunamis contained in the database. The NCEI catalog is the product of compiling previously existing catalogs, historical sources, newspapers, and scientific articles. Because the NCEI catalog has global coverage, the information is not homogeneous. The existence of historical information depends on the presence of people in places where the disaster occurred and on whether the description is preserved in documents and oral tradition. In the case of instrumental data, their availability depends on the distribution and quality of seismic stations. Therefore, the availability of information for the first half of the 20th century can be improved by careful analysis of the available information and by searching for and resolving inconsistencies. This study shows the advances we made in upgrading and refining data for the earthquake and tsunami catalog of Mexico from 1500 CE until today, presented in the format of a table and a map. Data analysis allowed us to identify the following sources of error in the location of the epicenters in existing catalogs: • Incorrect coordinate entry • Erroneous or mistaken place names • Data too general to locate the epicenter precisely, mainly for older earthquakes • Inconsistency between the earthquake and the tsunami occurrence: an earthquake epicenter located too far inland reported as tsunamigenic. The process of completing the catalogs directly depends on the availability of information; as new archives are opened for inspection, there are more opportunities to complete the history of large earthquakes and tsunamis in Mexico.
Here, we also present new earthquake and 6. Earthquake and Tsunami: a movie and a book for seismic and tsunami risk reduction in Italy. Science.gov (United States) Nostro, C.; Baroux, E.; Maramai, A.; Graziani, L.; Tertulliani, A.; Castellano, C.; Arcoraci, L.; Casale, P.; Ciaccio, M. G.; Frepoli, A. 2009-04-01 Italy is a country well known for its seismic and volcanic hazards. However, a similarly great hazard, although not well recognized, is posed by the occurrence of tsunami waves along the Italian coastline. This is attested by a rich catalogue and by field evidence of deposits left by pre-historical and historical tsunamis, even in places today considered safe. This observation is of great importance since many of the areas affected by tsunamis in the past are today tourist destinations. Italian tsunamis can be caused by different sources: 1- off-shore or near-coast inland earthquakes; 2- very large earthquakes on distant sources in the Mediterranean; 3- submarine volcanic explosions in the Tyrrhenian Sea; 4- submarine landslides triggered by earthquakes and volcanic activity. The consequence of such a wide spectrum of sources is that an important part of the more than 7000 km long Italian coastline is exposed to tsunami risk, and thousands of inhabitants (with numbers increasing during summer) live near hazardous coasts. The main historical tsunamis are the 1783 and 1908 events that hit the Calabrian and Sicilian coasts. The most recent tsunami is that caused by the 2002 Stromboli landslide. In order to reduce this risk, and following the emotional impact of the December 2004 Sumatra earthquake and tsunami, we developed an outreach program consisting of talks given by scientists and of a movie and a book, both exploring the causes of tsunami waves, how they propagate in deep and shallow waters, and what their effects on the coasts are. Hints are also given on the most dangerous Italian coasts (as deduced from scientific studies) and on how to behave in the case of a tsunami approaching the coast. These seminars are open to the general public, but special programs are developed with schools of all grades. In this talk we want to present the book and the movie used during the seminars and scientific expositions, which were realized from a previous 3D version originally 7. A tsunami PSA methodology and application for NPP site in Korea International Nuclear Information System (INIS) Kim, Min Kyu; Choi, In-Kil 2012-01-01 Highlights: ► A methodology of tsunami PSA was developed in this study. ► The tsunami return period was evaluated by an empirical method using historical tsunami records and tidal gauge records. ► A procedure for tsunami fragility analysis was established, and target equipment and structures for the tsunami fragility assessment were selected. ► A sample fragility calculation was performed for equipment in a nuclear power plant. ► The accident sequence of a tsunami event is developed according to the tsunami run-up and drawdown, and the tsunami-induced core damage frequency (CDF) is determined. - Abstract: A methodology of tsunami PSA was developed in this study. A tsunami PSA consists of tsunami hazard analysis, tsunami fragility analysis and system analysis. In the case of tsunami hazard analysis, evaluation of the tsunami return period is a major task. For the evaluation of the tsunami return period, numerical analysis or an empirical method can be applied. In this study, the tsunami return period was evaluated by an empirical method using historical tsunami records and tidal gauge records.
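The empirical return-period estimate mentioned in the abstract above can be illustrated with a minimal sketch. This is not the authors' actual procedure: it assumes a simple Poisson recurrence model, and the threshold, run-up heights, and record length below are hypothetical values.

```python
# Minimal sketch of an empirical tsunami return-period estimate under a
# Poisson recurrence assumption. All numbers are hypothetical placeholders,
# not data from the cited study.

def empirical_return_period(runups_m, threshold_m, record_years):
    """Mean recurrence interval (years) of run-ups >= threshold_m,
    estimated from the exceedance count over a historical record."""
    exceedances = sum(1 for h in runups_m if h >= threshold_m)
    if exceedances == 0:
        return float("inf")          # no exceedance observed in the record
    rate_per_year = exceedances / record_years
    return 1.0 / rate_per_year

# Hypothetical 400-year record of observed run-up heights (m) at one site.
historical_runups = [0.4, 0.7, 1.2, 0.3, 2.1, 0.9, 3.5, 0.6]
print(empirical_return_period(historical_runups, threshold_m=1.0,
                              record_years=400))   # -> ~133 years
```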
For the tsunami fragility analysis, a procedure was established and target equipment and structures for the tsunami fragility assessment were selected. A sample fragility calculation was performed for equipment in a nuclear power plant. In the case of system analysis, the accident sequence of a tsunami event is developed according to the tsunami run-up and drawdown, and the tsunami-induced core damage frequency (CDF) is determined. For the application to a real nuclear power plant, the Ulchin 5 and 6 NPP, located on the east coast of the Korean peninsula, was selected. Through this study, the whole tsunami PSA working procedure was established and an example calculation was performed for a real nuclear power plant in Korea. However, for more accurate tsunami PSA results, further research is needed on the evaluation of hydrodynamic forces, the effect of 8. Mechanism of the 2015 volcanic tsunami earthquake near Torishima, Japan Science.gov (United States) Satake, Kenji 2018-01-01 Tsunami earthquakes are a group of enigmatic earthquakes generating disproportionately large tsunamis relative to their seismic magnitude. These events occur most typically near deep-sea trenches. Tsunami earthquakes occurring approximately every 10 years near Torishima on the Izu-Bonin arc are another example. Seismic and tsunami waves from the 2015 event [Mw (moment magnitude) = 5.7] were recorded by an offshore seafloor array of 10 pressure gauges, ~100 km away from the epicenter. We made an array analysis of dispersive tsunamis to locate the tsunami source within the submarine Smith Caldera. The tsunami simulation from a large caldera-floor uplift of ~1.5 m with a small peripheral depression yielded waveforms remarkably similar to the observations. The estimated central uplift, 1.5 m, is ~20 times larger than that inferred from the seismologically determined non–double-couple source. Thus, the tsunami observation is not compatible with the published seismic source model taken at face value. However, given the indeterminacy of Mzx, Mzy, and the tensile component of a shallow moment tensor source, it may be possible to find a source mechanism with efficient tsunami but inefficient seismic radiation that can satisfactorily explain both the tsunami and seismic observations, but this question remains unresolved. PMID: 29740604 9. Tsunami Prediction and Earthquake Parameters Estimation in the Red Sea KAUST Repository Sawlan, Zaid A 2012-12-01 Tsunami concerns have increased in the world after the 2004 Indian Ocean tsunami and the 2011 Tohoku tsunami. Consequently, tsunami models have been developed rapidly in the last few years. One of the advanced tsunami models is the GeoClaw tsunami model introduced by LeVeque (2011). This model is adaptive and consistent. Because of different sources of uncertainties in the model, observations are needed to improve model prediction through a data assimilation framework. Model inputs are earthquake parameters and topography. This thesis introduces a real-time tsunami forecasting method that combines the tsunami model with observations using a hybrid ensemble Kalman filter and ensemble Kalman smoother. The filter is used for state prediction, while the smoother is used to estimate the earthquake parameters. This method reduces the error produced by uncertain inputs. In addition, a state-parameter EnKF is implemented to estimate the earthquake parameters.
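As a generic aside (not the implementation used in the thesis above), the following sketch shows one stochastic ensemble Kalman filter analysis step with perturbed observations; in a state-parameter setup the unknown earthquake parameters would simply be appended to the state vector. The state size, observation operator, and error levels are hypothetical.

```python
import numpy as np

def enkf_analysis(ensemble, observations, H, obs_std, rng):
    """One stochastic EnKF analysis step (perturbed observations).

    ensemble     : (n_state, n_members) forecast ensemble
    observations : (n_obs,) observed values
    H            : (n_obs, n_state) linear observation operator
    obs_std      : observation error standard deviation
    """
    n_state, n_members = ensemble.shape
    X = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    Y = H @ X                                             # obs-space anomalies
    P_yy = (Y @ Y.T) / (n_members - 1) + obs_std**2 * np.eye(len(observations))
    P_xy = (X @ Y.T) / (n_members - 1)
    K = P_xy @ np.linalg.inv(P_yy)                        # Kalman gain
    # Perturb the observations independently for each member.
    perturbed = observations[:, None] + rng.normal(0.0, obs_std,
                                                   (len(observations), n_members))
    return ensemble + K @ (perturbed - H @ ensemble)

# Toy usage: 3 state variables (e.g. 2 water levels + 1 source parameter),
# 2 observations, 50 members -- all made-up numbers.
rng = np.random.default_rng(0)
ens = rng.normal(size=(3, 50))
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
obs = np.array([0.5, -0.2])
updated = enkf_analysis(ens, obs, H, obs_std=0.1, rng=rng)
```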
Although the number of observations is small, the estimated parameters generate a better tsunami prediction than the model. Methods and results of prediction experiments in the Red Sea are presented, and the prospect of developing an operational tsunami prediction system in the Red Sea is discussed. 10. A Tsunami Fragility Assessment for Nuclear Power Plants in Korea International Nuclear Information System (INIS) Kim, Min Kyu; Choi, In Kil; Kang, Keum Seok 2009-01-01 Although tsunami events were defined as an external event in the 'PRA Procedure Guide (NUREG/CR-2300)' after 1982, tsunamis were not considered in the design and construction of NPPs before the Sumatra earthquake in 2004. However, the Madras Atomic Power Station, a commercial nuclear power plant owned and operated by the Nuclear Power Corporation of India Limited (NPCIL) and located near Chennai, India, was affected by the tsunami generated by the 2004 Sumatra earthquake (USNRC 2008). The condenser cooling pumps of Unit 2 of the installation were affected due to flooding of the pump house and subsequent submergence of the seawater pumps by tsunami waves. The turbine was tripped and the reactor shut down. The unit was brought to a cold-shutdown state, and the shutdown-cooling systems were reported as operating safely. After this event, tsunami hazards were recognized as one of the major natural disasters that can affect the safety of nuclear power plants. The IAEA performed an extrabudgetary project for tsunami hazard assessment, and finally an International Seismic Safety Center (ISSC) was established in the IAEA for protection from natural disasters such as earthquakes and tsunamis. For this reason, a tsunami hazard assessment method was determined in this study. First, a procedure for the tsunami hazard assessment method was established; second, target equipment and structures for the tsunami hazard assessment were selected. Finally, a sample fragility calculation was performed for one piece of equipment in a nuclear power plant. 11. A BRIEF HISTORY OF TSUNAMIS IN THE CARIBBEAN SEA Directory of Open Access Journals (Sweden) Patricia A. Lockridge 2002-01-01 Full Text Available The area of the Caribbean Sea is geologically active. Earthquakes and volcanoes are common occurrences. These geologic events can generate powerful tsunamis, some of which are more devastating than the earthquake or volcanic eruption itself. This document lists brief descriptions of 91 reported waves that might have been tsunamis within the Caribbean region. Of these, 27 are judged by the authors to be true, verified tsunamis and an additional nine are considered to be very likely true tsunamis. The remaining 53 events either are not described with sufficient detail in the literature to verify their tsunami nature or are judged to be reports of other phenomena, such as seaquakes or hurricane storm surges, which may have been reported as tsunamis. Included in these 91 reports are teletsunamis, tectonic tsunamis, landslide tsunamis, and volcanic tsunamis that have caused major damage and deaths. Nevertheless, in recent history these events have been relatively rare. In the interim since the last major tsunami event in the Caribbean Sea, the coastal regions have greatly increased in population. Coastal development has also increased. Today tourism is a major industry that exposes thousands of non-residents to the disastrous effects of a tsunami.
These factors make the islands in this region much more vulnerable today than they were when the last major tsunami occurred in this area. This paper gives an overview of the tsunami history in the area. This history illustrates what can be expected in the future from this geologic hazard and provides information that will be useful for mitigation purposes. 12. Historical Tsunami Records on Russian Island, the Sea of Japan Science.gov (United States) Razjigaeva, N. G.; Ganzey, L. A.; Grebennikova, T. A.; Arslanov, Kh. A.; Ivanova, E. D.; Ganzey, K. S.; Kharlamov, A. A. 2018-03-01 In this article, we provide data evidencing tsunamis on Russian Island over the last 700 years. Reconstructions are developed based on the analyses of peat bog sections on the coast of Spokoynaya Bay, including layers of tsunami sands. Ancient beach sands under peat were deposited during the final phase of transgression of the Medieval Warm Period. We used data on diatoms and benthic foraminifers to identify the marine origin of the sands. The grain size compositions of the tsunami deposits were used to determine the sources of material carried by the tsunamis. The chronology of historical tsunamis was determined based on the radiocarbon dating of the underlying organic deposits. There was a stated difference between the deposition environments during tsunamis and large storms during the Goni (2015) and Lionrock (2016) typhoons. Tsunami deposits from 1983 and 1993 were found in the upper part of the sections. The inundation of the 1993 tsunami did not exceed 20 m or a height of 0.5 m a.m.s.l. (0.3 above high tide). The more intensive tsunami of 1983 had a run-up of 0.65 m a.m.s.l. and penetrated inland from the shoreline up to 40 m. Sand layer of tsunami 1940 extend in land up to 50 m from the present shoreline. Evidence of six tsunamis was elicited from the peat bog sections, the deposits of which are located 60 m from the modern coastal line. The deposits of strong historic tsunamis in the Japan Sea region in 1833, 1741, 1614 (or 1644), 1448, the XIV-XV century and 1341 were also identified on Russian Island. Their run-ups and inundation distances were also determined. The strong historic tsunamis appeared to be more intensive than those of the XX century, and considering the sea level drop during the Little Ice Age, the inundation distances were as large as 250 m. 13. Historical Tsunami Records on Russian Island, the Sea of Japan Science.gov (United States) Razjigaeva, N. G.; Ganzey, L. A.; Grebennikova, T. A.; Arslanov, Kh. A.; Ivanova, E. D.; Ganzey, K. S.; Kharlamov, A. A. 2018-04-01 In this article, we provide data evidencing tsunamis on Russian Island over the last 700 years. Reconstructions are developed based on the analyses of peat bog sections on the coast of Spokoynaya Bay, including layers of tsunami sands. Ancient beach sands under peat were deposited during the final phase of transgression of the Medieval Warm Period. We used data on diatoms and benthic foraminifers to identify the marine origin of the sands. The grain size compositions of the tsunami deposits were used to determine the sources of material carried by the tsunamis. The chronology of historical tsunamis was determined based on the radiocarbon dating of the underlying organic deposits. There was a stated difference between the deposition environments during tsunamis and large storms during the Goni (2015) and Lionrock (2016) typhoons. Tsunami deposits from 1983 and 1993 were found in the upper part of the sections. 
The inundation of the 1993 tsunami did not exceed 20 m or a height of 0.5 m a.m.s.l. (0.3 above high tide). The more intensive tsunami of 1983 had a run-up of 0.65 m a.m.s.l. and penetrated inland from the shoreline up to 40 m. Sand layer of tsunami 1940 extend in land up to 50 m from the present shoreline. Evidence of six tsunamis was elicited from the peat bog sections, the deposits of which are located 60 m from the modern coastal line. The deposits of strong historic tsunamis in the Japan Sea region in 1833, 1741, 1614 (or 1644), 1448, the XIV-XV century and 1341 were also identified on Russian Island. Their run-ups and inundation distances were also determined. The strong historic tsunamis appeared to be more intensive than those of the XX century, and considering the sea level drop during the Little Ice Age, the inundation distances were as large as 250 m. 14. Statistical Analysis of the Effectiveness of Seawalls and Coastal Forests in Mitigating Tsunami Impacts in Iwate and Miyagi Prefectures. Directory of Open Access Journals (Sweden) Roshanak Nateghi Full Text Available The Pacific coast of the Tohoku region of Japan experiences repeated tsunamis, with the most recent events having occurred in 1896, 1933, 1960, and 2011. These events have caused large loss of life and damage throughout the coastal region. There is uncertainty about the degree to which seawalls reduce deaths and building damage during tsunamis in Japan. On the one hand they provide physical protection against tsunamis as long as they are not overtopped and do not fail. On the other hand, the presence of a seawall may induce a false sense of security, encouraging additional development behind the seawall and reducing evacuation rates during an event. We analyze municipality-level and sub-municipality-level data on the impacts of the 1896, 1933, 1960, and 2011 tsunamis, finding that seawalls larger than 5 m in height generally have served a protective role in these past events, reducing both death rates and the damage rates of residential buildings. However, seawalls smaller than 5 m in height appear to have encouraged development in vulnerable areas and exacerbated damage. We also find that the extent of flooding is a critical factor in estimating both death rates and building damage rates, suggesting that additional measures, such as multiple lines of defense and elevating topography, may have significant benefits in reducing the impacts of tsunamis. Moreover, the area of coastal forests was found to be inversely related to death and destruction rates, indicating that forests either mitigated the impacts of these tsunamis, or displaced development that would otherwise have been damaged. 15. An elevator Energy Technology Data Exchange (ETDEWEB) Loginovskiy, V.I.; Medinger, N.V.; Rasskazov, V.A.; Solonitsyn, V.A. 1983-01-01 An elevator is proposed which includes a body, spring loaded cams and a shut-off ring. To increase the reliability of the elevator by eliminating the possibility of spontaneous shifting of the shut-off ring, the latter is equipped with handles hinged to it and is made with evolvent grooves. The cams are equipped with rollers installed in the evolvent grooves of the shut off ring, where the body is made with grooves for the handles. 16. The 2006 July 17 Java (Indonesia) tsunami from satellite imagery and numerical modelling: a single or complex source? Science.gov (United States) Hébert, H.; Burg, P.-E.; Binet, R.; Lavigne, F.; Allgeyer, S.; Schindelé, F. 
2012-12-01 The Mw 7.8 2006 July 17 earthquake off the southern coast of Java, Indonesia, has been responsible for a very large tsunami causing more than 700 casualties. The tsunami has been observed on at least 200 km of coastline in the region of Pangandaran (West Java), with run-up heights from 5 to more than 20 m. Such a large tsunami, with respect to the source magnitude, has been attributed to the slow character of the seismic rupture, defining the event as a so-called tsunami earthquake, but it has also been suggested that the largest run-up heights are actually the result of a second local landslide source. Here we test whether a single slow earthquake source can explain the tsunami run-up, using a combination of new detailed data in the region of the largest run-ups and comparison with modelled run-ups for a range of plausible earthquake source models. Using high-resolution satellite imagery (SPOT 5 and Quickbird), the coastal impact of the tsunami is refined in the surroundings of the high-security Permisan prison on Nusa Kambangan island, where 20 m run-up had been recorded directly after the event. These data confirm the extreme inundation lengths close to the prison, and extend the area of maximum impact further along the Nusa Kambangan island (about 20 km of shoreline), where inundation lengths reach several hundreds of metres, suggesting run-up as high as 10-15 m. Tsunami modelling has been conducted in detail for the high run-up Permisan area (Nusa Kambangan) and the PLTU power plant about 25 km eastwards, where run-up reached only 4-6 m and a video recording of the tsunami arrival is available. For the Permisan prison a high-resolution DEM was built from stereoscopic satellite imagery. The regular basin of the PLTU plant was designed using photographs and direct observations. For the earthquake's mechanism, both static (infinite) and finite (kinematic) ruptures are investigated using two published source models. The models account rather well for the sea level 17. Uncertainty quantification and inference of Manning's friction coefficients using DART buoy data during the Tōhoku tsunami KAUST Repository Sraj, Ihab; Mandli, Kyle T.; Knio, Omar; Dawson, Clint N.; Hoteit, Ibrahim 2014-01-01 Tsunami computational models are employed to explore multiple flooding scenarios and to predict water elevations. However, accurate estimation of water elevations requires accurate estimation of many model parameters including the Manning's n friction parameterization. Our objective is to develop an efficient approach for the uncertainty quantification and inference of the Manning's n coefficient which we characterize here by three different parameters set to be constant in the on-shore, near-shore and deep-water regions as defined using iso-baths. We use Polynomial Chaos (PC) to build an inexpensive surrogate for the GeoClaw model and employ Bayesian inference to estimate and quantify uncertainties related to relevant parameters using the DART buoy data collected during the Tōhoku tsunami. The surrogate model significantly reduces the computational burden of the Markov Chain Monte-Carlo (MCMC) sampling of the Bayesian inference. The PC surrogate is also used to perform a sensitivity analysis. 18. Uncertainty quantification and inference of Manning's friction coefficients using DART buoy data during the Tōhoku tsunami KAUST Repository Sraj, Ihab 2014-11-01 Tsunami computational models are employed to explore multiple flooding scenarios and to predict water elevations.
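The surrogate-plus-MCMC workflow summarized in the preceding entry can be sketched in a few lines. This is purely illustrative and makes several simplifying assumptions: the "expensive" forward model is a made-up analytic stand-in rather than GeoClaw, a single friction coefficient is inferred instead of three, the surrogate is an ordinary least-squares polynomial rather than a full polynomial chaos expansion, and the observation value and prior bounds are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up stand-in for an expensive tsunami model: friction n -> peak elevation (m).
def forward_model(n):
    return 2.0 * np.exp(-8.0 * n) + 0.3

# 1) Fit a cheap polynomial surrogate from a handful of "model runs".
train_n = np.linspace(0.01, 0.06, 7)
surrogate = np.polynomial.Polynomial.fit(train_n, forward_model(train_n), deg=3)

# 2) Metropolis-Hastings over n, evaluating only the surrogate in the likelihood.
obs, obs_std = 1.55, 0.05                 # hypothetical buoy-derived elevation (m)

def log_post(n):
    if not 0.01 <= n <= 0.06:             # assumed uniform prior bounds
        return -np.inf
    return -0.5 * ((surrogate(n) - obs) / obs_std) ** 2

samples, current = [], 0.03
lp = log_post(current)
for _ in range(20000):
    proposal = current + rng.normal(0.0, 0.003)
    lp_prop = log_post(proposal)
    if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
        current, lp = proposal, lp_prop
    samples.append(current)

burned = samples[5000:]
print(np.mean(burned), np.std(burned))    # posterior mean and spread of n
```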
However, accurate estimation of water elevations requires accurate estimation of many model parameters including the Manning's n friction parameterization. Our objective is to develop an efficient approach for the uncertainty quantification and inference of the Manning's n coefficient which we characterize here by three different parameters set to be constant in the on-shore, near-shore and deep-water regions as defined using iso-baths. We use Polynomial Chaos (PC) to build an inexpensive surrogate for the GeoClaw model and employ Bayesian inference to estimate and quantify uncertainties related to relevant parameters using the DART buoy data collected during the Tōhoku tsunami. The surrogate model significantly reduces the computational burden of the Markov Chain Monte-Carlo (MCMC) sampling of the Bayesian inference. The PC surrogate is also used to perform a sensitivity analysis. 19. Monocular Elevation Deficiency - Double Elevator Palsy Science.gov (United States) ... What is monocular elevation deficiency (Double Elevator Palsy)? Monocular Elevation Deficiency, also known by the ... 20. Elevator deflections on the icing process Science.gov (United States) Britton, Randall K. 1990-01-01 The effect of elevator deflection of the horizontal stabilizer on certain icing parameters is investigated. Elevator deflection can severely change the lower and upper leading-edge impingement limits, and ice can accrete on the elevator itself. Also, elevator deflection had practically no effect on the maximum local collection efficiency. It is shown that for severe icing conditions (large water droplets), elevator deflections that increase the projected height of the airfoil can significantly increase the total collection efficiency of the airfoil. 1. IMPORTANCE OF MANGROVE TO REDUCE THE TSUNAMI WAVE ENERGY Directory of Open Access Journals (Sweden) Anastasia Neni Candra Purnamasari 2017-09-01 Full Text Available Mangroves have a very important role in reducing tsunami wave energy. It is shown that coastal areas without vegetation, in this case without mangrove forests, suffer greater damage from tsunami waves than vegetated coastal areas. The purpose of this paper is to demonstrate the importance of mangroves in reducing tsunami wave energy by comparing the various methods observed in case studies on the impact of the 2004 tsunami in several Asian countries and in case studies on ocean waves on the Gulf coast of south Florida. The research results show that mangroves can dampen tsunami wave energy. The reduction of tsunami wave energy depends on several factors, namely the mangrove species, tree size, the extent of the mangrove forest, the structure of the trees, and the width of the mangrove belt (how far it extends inland from the ocean). 2. Complex behavior of elevators in peak traffic Science.gov (United States) Nagatani, Takashi 2003-08-01 We study the dynamical behavior of elevators in the morning peak traffic. We present a stochastic model of the elevators to take into account the interactions between elevators through passengers. The dynamics of the elevators is expressed in terms of a coupled nonlinear map with noises. The number of passengers carried by an elevator and the time-headway between elevators exhibit complex behavior with varying elevator trips.
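The coupled-map idea described in the preceding abstract can be illustrated with a toy simulation. This is not the model of the paper, just an invented minimal analogue in which each elevator's trip time grows with the load it picks up, and the load depends on the time-headway since the previous departure, so that the elevators are coupled through the shared queue. All parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

n_elev, n_rounds = 4, 300
capacity = 20          # maximum passengers per trip (assumed)
arrival_rate = 0.8     # passengers arriving at the lobby per unit time (assumed)
base_trip = 10.0       # empty round-trip time (assumed)
per_passenger = 0.4    # extra trip time per passenger carried (assumed)

ready = np.zeros(n_elev)     # time each elevator is back at the lobby
last_departure = 0.0
loads, headways = [], []

for _ in range(n_rounds):
    for i in range(n_elev):               # elevators depart in cyclic order
        departure = max(ready[i], last_departure)
        headway = departure - last_departure
        load = min(arrival_rate * headway, capacity)      # passengers waiting
        trip = base_trip + per_passenger * load + rng.normal(0.0, 0.2)
        ready[i] = departure + trip
        last_departure = departure
        loads.append(load)
        headways.append(headway)

# Late-time averages of the load and the time-headway between departures.
print(np.mean(loads[-200:]), np.mean(headways[-200:]))
```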
It is found that the behavior of elevators exhibits a deterministic chaos even if there are no noises. The chaotic motion depends on the loading parameter, the maximum capacity of an elevator, and the number of elevators. When the loading parameter is superior to the threshold, each elevator carries a full load of passengers throughout its trip. The dependence of the threshold (transition point) on the elevator capacity is clarified. 3. Identification of tsunami deposits considering the tsunami waveform: An example of subaqueous tsunami deposits in Holocene shallow bay on southern Boso Peninsula, Central Japan Science.gov (United States) Fujiwara, Osamu; Kamataki, Takanobu 2007-08-01 This study proposes a tsunami depositional model based on observations of emerged Holocene tsunami deposits in outcrops located in eastern Japan. The model is also applicable to the identification of other deposits, such as those laid down by storms. The tsunami deposits described were formed in a small bay of 10-20-m water depth, and are mainly composed of sand and gravel. They show various sedimentary structures, including hummocky cross-stratification (HCS) and inverse and normal grading. Although, individually, the sedimentary structures are similar to those commonly found in storm deposits, the combination of vertical stacking in the tsunami deposits makes a unique pattern. This vertical stacking of internal structures is due to the waveform of the source tsunamis, reflecting: 1) extremely long wavelengths and wave period, and 2) temporal changes of wave sizes from the beginning to end of the tsunamis. The tsunami deposits display many sub-layers with scoured and graded structures. Each sub-layer, especially in sandy facies, is characterized by HCS and inverse and normal grading that are the result of deposition from prolonged high-energy sediment flows. The vertical stack of sub-layers shows incremental deposition from the repeated sediment flows. Mud drapes cover the sub-layers and indicate the existence of flow-velocity stagnant stages between each sediment flow. Current reversals within the sub-layers indicate the repeated occurrence of the up- and return-flows. The tsunami deposits are vertically divided into four depositional units, Tna to Tnd in ascending order, reflecting the temporal change of wave sizes in the tsunami wave trains. Unit Tna is relatively fine-grained and indicative of small tsunami waves during the early stage of the tsunami. Unit Tnb is a protruding coarse-grained and thickest-stratified division and is the result of a relatively large wave group during the middle stage of the tsunami. Unit Tnc is a fine alternation of thin sand 4. Microbial ecology of Thailand tsunami and non-tsunami affected terrestrials. Science.gov (United States) Somboonna, Naraporn; Wilantho, Alisa; Jankaew, Kruawun; Assawamakin, Anunchai; Sangsrakru, Duangjai; Tangphatsornruang, Sithichoke; Tongsima, Sissades 2014-01-01 The effects of tsunamis on microbial ecologies have been ill-defined, especially in Phang Nga province, Thailand. This ecosystem was catastrophically impacted by the 2004 Indian Ocean tsunami as well as the 600 year-old tsunami in Phra Thong island, Phang Nga province. No study has been conducted to elucidate their effects on microbial ecology. This study represents the first to elucidate their effects on microbial ecology. 
We utilized metagenomics with 16S and 18S rDNA-barcoded pyrosequencing to obtain prokaryotic and eukaryotic profiles for this terrestrial site, tsunami affected (S1), as well as a parallel unaffected terrestrial site, non-tsunami affected (S2). S1 demonstrated unique microbial community patterns than S2. The dendrogram constructed using the prokaryotic profiles supported the unique S1 microbial communities. S1 contained more proportions of archaea and bacteria domains, specifically species belonging to Bacteroidetes became more frequent, in replacing of the other typical floras like Proteobacteria, Acidobacteria and Basidiomycota. Pathogenic microbes, including Acinetobacter haemolyticus, Flavobacterium spp. and Photobacterium spp., were also found frequently in S1. Furthermore, different metabolic potentials highlighted this microbial community change could impact the functional ecology of the site. Moreover, the habitat prediction based on percent of species indicators for marine, brackish, freshwater and terrestrial niches pointed the S1 to largely comprise marine habitat indicating-species. 5. Tsunami Simulation Method Assimilating Ocean Bottom Pressure Data Near a Tsunami Source Region Science.gov (United States) Tanioka, Yuichiro 2018-02-01 A new method was developed to reproduce the tsunami height distribution in and around the source area, at a certain time, from a large number of ocean bottom pressure sensors, without information on an earthquake source. A dense cabled observation network called S-NET, which consists of 150 ocean bottom pressure sensors, was installed recently along a wide portion of the seafloor off Kanto, Tohoku, and Hokkaido in Japan. However, in the source area, the ocean bottom pressure sensors cannot observe directly an initial ocean surface displacement. Therefore, we developed the new method. The method was tested and functioned well for a synthetic tsunami from a simple rectangular fault with an ocean bottom pressure sensor network using 10 arc-min, or 20 km, intervals. For a test case that is more realistic, ocean bottom pressure sensors with 15 arc-min intervals along the north-south direction and sensors with 30 arc-min intervals along the east-west direction were used. In the test case, the method also functioned well enough to reproduce the tsunami height field in general. These results indicated that the method could be used for tsunami early warning by estimating the tsunami height field just after a great earthquake without the need for earthquake source information. 6. Numerical tsunami simulations in the western Pacific Ocean and East China Sea from hypothetical M 9 earthquakes along the Nankai trough Science.gov (United States) Harada, Tomoya; Satake, Kenji; Furumura, Takashi 2017-04-01 We carried out tsunami numerical simulations in the western Pacific Ocean and East China Sea in order to examine the behavior of massive tsunami outside Japan from the hypothetical M 9 tsunami source models along the Nankai Trough proposed by the Cabinet Office of Japanese government (2012). The distribution of MTHs (maximum tsunami heights for 24 h after the earthquakes) on the east coast of China, the east coast of the Philippine Islands, and north coast of the New Guinea Island show peaks with approximately 1.0-1.7 m,4.0-7.0 m,4.0-5.0 m, respectively. They are significantly higher than that from the 1707 Ho'ei earthquake (M 8.7), the largest earthquake along the Nankai trough in recent Japanese history. 
Moreover, the MTH distributions vary with the location of the huge slip(s) in the tsunami source models although the three coasts are far from the Nankai trough. Huge slip(s) in the Nankai segment mainly contributes to the MTHs, while huge slip(s) or splay faulting in the Tokai segment hardly affects the MTHs. The tsunami source model was developed for responding to the unexpected occurrence of the 2011 Tohoku Earthquake, with 11 models along the Nanakai trough, and simulated MTHs along the Pacific coasts of the western Japan from these models exceed 10 m, with a maximum height of 34.4 m. Tsunami propagation was computed by the finite-difference method of the non-liner long-wave equations with the Corioli's force and bottom friction (Satake, 1995) in the area of 115-155 ° E and 8° S-40° N. Because water depth of the East China Sea is shallower than 200 m, the tsunami propagation is likely to be affected by the ocean bottom fiction. The 30 arc-seconds gridded bathymetry data provided by the General Bathymetric Chart of the Oceans (GEBCO-2014) are used. For long propagation of tsunami we simulated tsunamis for 24 hours after the earthquakes. This study was supported by the"New disaster mitigation research project on Mega thrust earthquakes around Nankai 7. Near Source 2007 Peru Tsunami Runup Observations and Modeling Science.gov (United States) Borrero, J. C.; Fritz, H. M.; Kalligeris, N.; Broncano, P.; Ortega, E. 2008-12-01 On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru, generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights and massive inundation distances up to 2 km were measured. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. As with all near field tsunamis, the waves struck within minutes of the massive ground shaking. Spontaneous evacuations coordinated by the Peruvian Coast Guard minimized the fatalities and illustrate the importance of community-based education and awareness programs. The residents of the fishing village Lagunilla were unaware of the tsunami hazard after an earthquake and did not evacuate, which resulted in 3 fatalities. Despite the relatively benign tsunami effects at Pisco from this event, the tsunami hazard for this city (and its liquefied natural gas terminal) cannot be underestimated. Between 1687 and 1868, the city of Pisco was destroyed 4 times by tsunami waves. Since then, two events (1974 and 2007) have resulted in partial inundation and moderate damage. The fact that potentially devastating tsunami runup heights were observed immediately south of the peninsula only serves to underscore this point. 8. Frequency Domain Response at Pacific Coast Harbors to Major Tsunamis of 2005-2011 Science.gov (United States) Xing, Xiuying; Kou, Zhiqing; Huang, Ziyi; Lee, Jiin-Jen 2013-06-01 Tsunamis waves caused by submarine earthquake or landslide might contain large wave energy, which could cause significant human loss and property damage locally as well as in distant region. 
The response of three harbors located at the Pacific coast (i.e. Crescent City Harbor, Los Angeles/Long Beach Port, and San Diego Harbor) to six well-known tsunamis events generated (both near-field and far-field) between 2005 and 2011 are examined and simulated using a hybrid finite element numerical model in frequency domain. The model incorporated the effects of wave refraction, wave diffraction, partial wave reflection from boundaries, entrance and bottom energy dissipation. It can be applied to harbor regions with arbitrary shapes and variable water depth. The computed resonant periods or modes of oscillation for three harbors are in good agreement with the energy spectral analysis of the time series of water surface elevations recorded at tide gauge stations inside three harbors during the six tsunamis events. The computed wave induced currents based on the present model are also in qualitative agreement with some of the reported eye-witness accounts absence of reliable current data. The simulated results show that each harbor responded differently and significantly amplified certain wave period(s) of incident wave trains according to the shape, topography, characteristic dimensions and water depth of the harbor basins. 9. A global probabilistic tsunami hazard assessment from earthquake sources Science.gov (United States) Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana 2017-01-01 Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate. 10. Study of tsunami propagation in the Ligurian Sea Directory of Open Access Journals (Sweden) E. Pelinovsky 2001-01-01 Full Text Available Tsunami propagation is analyzed for the Ligurian Sea with particular attention on the French coasts of the Mediterranean. Historical data of tsunami manifestation on the French coast are analyzed for the period 2000 B.C.–1991 A.D. Numerical simulations of potential and historical tsunamis in the Ligurian Sea are done in the context of the nonlinear shallow water theory. 
Tsunami wave heights as well as their distribution function is calculated for historical tsunamis and it is shown that the log-normal distribution describes reasonably the simulated data. This demonstrates the particular role of bottom irregularities for the wave height distribution function near the coastlines. Also, spectral analysis of numerical tide-gauge records is done for potential tsunamis, revealing the complex resonant interactions between the tsunami waves and the bottom oscillations. It is shown that for an earthquake magnitude of 6.8 (averaged value for the Mediterranean Sea the tsunami phenomenon has a very local character but with long duration. For sources located near the steep continental slope in the vicinity of the French-Italian Rivera, the tsunami tide-gauge records in the vicinity of Cannes – Imperia present irregular oscillations with a characteristic period of 20–30 min and a total duration of 10–20 h. For the western French coasts the amplitudes are significantly less with characteristic low-frequency oscillations (period of 40 min–1 h. 11. Influence of Flow Velocity on Tsunami Loss Estimation Directory of Open Access Journals (Sweden) Jie Song 2017-11-01 Full Text Available Inundation depth is commonly used as an intensity measure in tsunami fragility analysis. However, inundation depth cannot be taken as the sole representation of tsunami impact on structures, especially when structural damage is caused by hydrodynamic and debris impact forces that are mainly determined by flow velocity. To reflect the influence of flow velocity in addition to inundation depth in tsunami risk assessment, a tsunami loss estimation method that adopts both inundation depth and flow velocity (i.e., bivariate intensity measures in evaluating tsunami damage is developed. To consider a wide range of possible tsunami inundation scenarios, Monte Carlo-based tsunami simulations are performed using stochastic earthquake slip distributions derived from a spectral synthesis method and probabilistic scaling relationships of earthquake source parameters. By focusing on Sendai (plain coast and Onagawa (ria coast in the Miyagi Prefecture of Japan in a case study, the stochastic tsunami loss is evaluated by total economic loss and its spatial distribution at different scales. The results indicate that tsunami loss prediction is highly sensitive to modelling resolution and inclusion of flow velocity for buildings located less than 1 km from the sea for Sendai and Onagawa of Miyagi Prefecture. 12. Probabilistic tsunami hazard assessment for Point Lepreau Generating Station Energy Technology Data Exchange (ETDEWEB) Mullin, D., E-mail: [email protected] [New Brunswick Power Corporation, Point Lepreau Generating Station, Point Lepreau (Canada); Alcinov, T.; Roussel, P.; Lavine, A.; Arcos, M.E.M.; Hanson, K.; Youngs, R., E-mail: [email protected], E-mail: [email protected] [AMEC Foster Wheeler Environment & Infrastructure, Dartmouth, NS (Canada) 2015-07-01 In 2012 the Geological Survey of Canada published a preliminary probabilistic tsunami hazard assessment in Open File 7201 that presents the most up-to-date information on all potential tsunami sources in a probabilistic framework on a national level, thus providing the underlying basis for conducting site-specific tsunami hazard assessments. However, the assessment identified a poorly constrained hazard for the Atlantic Coastline and recommended further evaluation. 
As a result, NB Power has embarked on performing a Probabilistic Tsunami Hazard Assessment (PTHA) for Point Lepreau Generating Station. This paper provides the methodology and progress or hazard evaluation results for Point Lepreau G.S. (author) 13. Landslide tsunami hazard in the Indonesian Sunda Arc Directory of Open Access Journals (Sweden) S. Brune 2010-03-01 Full Text Available The Indonesian archipelago is known for the occurrence of catastrophic earthquake-generated tsunamis along the Sunda Arc. The tsunami hazard associated with submarine landslides however has not been fully addressed. In this paper, we compile the known tsunamigenic events where landslide involvement is certain and summarize the properties of published landslides that were identified with geophysical methods. We depict novel mass movements, found in newly available bathymetry, and determine their key parameters. Using numerical modeling, we compute possible tsunami scenarios. Furthermore, we propose a way of identifying landslide tsunamis using an array of few buoys with bottom pressure units. 14. GPS water level measurements for Indonesia's Tsunami Early Warning System Directory of Open Access Journals (Sweden) T. Schöne 2011-03-01 Full Text Available On Boxing Day 2004, a severe tsunami was generated by a strong earthquake in Northern Sumatra causing a large number of casualties. At this time, neither an offshore buoy network was in place to measure tsunami waves, nor a system to disseminate tsunami warnings to local governmental entities. Since then, buoys have been developed by Indonesia and Germany, complemented by NOAA's Deep-ocean Assessment and Reporting of Tsunamis (DART buoys, and have been moored offshore Sumatra and Java. The suite of sensors for offshore tsunami detection in Indonesia has been advanced by adding GPS technology for water level measurements. The usage of GPS buoys in tsunami warning systems is a relatively new approach. The concept of the German Indonesian Tsunami Early Warning System (GITEWS (Rudloff et al., 2009 combines GPS technology and ocean bottom pressure (OBP measurements. Especially for near-field installations where the seismic noise may deteriorate the OBP data, GPS-derived sea level heights provide additional information. The GPS buoy technology is precise enough to detect medium to large tsunamis of amplitudes larger than 10 cm. The analysis presented here suggests that for about 68% of the time, tsunamis larger than 5 cm may be detectable. 15. Tsunami waveform inversion by numerical finite-elements Green’s functions Directory of Open Access Journals (Sweden) A. Piatanesi 2001-01-01 Full Text Available During the last few years, the steady increase in the quantity and quality of the data concerning tsunamis has led to an increasing interest in the inversion problem for tsunami data. This work addresses the usually ill-posed problem of the hydrodynamical inversion of tsunami tide-gage records to infer the initial sea perturbation. We use an inversion method for which the data space consists of a given number of waveforms and the model parameter space is represented by the values of the initial water elevation field at a given number of points. The forward model, i.e. the calculation of the synthetic tide-gage records from an initial water elevation field, is based on the linear shallow water equations and is simply solved by applying the appropriate Green’s functions to the known initial state. 
The inversion of tide-gage records to determine the initial state results in the least square inversion of a rectangular system of linear equations. When the inversions are unconstrained, we found that in order to attain good results, the dimension of the data space has to be much larger than that of the model space parameter. We also show that a large number of waveforms is not sufficient to ensure a good inversion if the corresponding stations do not have a good azimuthal coverage with respect to source directivity. To improve the inversions we use the available a priori information on the source, generally coming from the inversion of seismological data. In this paper we show how to implement very common information about a tsunamigenic seismic source, i.e. the earthquake source region, as a set of spatial constraints. The results are very satisfactory, since even a rough localisation of the source enables us to invert correctly the initial elevation field. 16. The 2014 Lake Askja rockslide tsunami - optimization of landslide parameters comparing numerical simulations with observed run-up Science.gov (United States) Sif Gylfadóttir, Sigríður; Kim, Jihwan; Kristinn Helgason, Jón; Brynjólfsson, Sveinn; Höskuldsson, Ármann; Jóhannesson, Tómas; Bonnevie Harbitz, Carl; Løvholt, Finn 2016-04-01 The Askja central volcano is located in the Northern Volcanic Zone of Iceland. Within the main caldera an inner caldera was formed in an eruption in 1875 and over the next 40 years it gradually subsided and filled up with water, forming Lake Askja. A large rockslide was released from the Southeast margin of the inner caldera into Lake Askja on 21 July 2014. The release zone was located from 150 m to 350 m above the water level and measured 800 m across. The volume of the rockslide is estimated to have been 15-30 million m3, of which 10.5 million m3 was deposited in the lake, raising the water level by almost a meter. The rockslide caused a large tsunami that traveled across the lake, and inundated the shores around the entire lake after 1-2 minutes. The vertical run-up varied typically between 10-40 m, but in some locations close to the impact area it ranged up to 70 m. Lake Askja is a popular destination visited by tens of thousands of tourists every year but as luck would have it, the event occurred near midnight when no one was in the area. Field surveys conducted in the months following the event resulted in an extensive dataset. The dataset contains e.g. maximum inundation, high-resolution digital elevation model of the entire inner caldera, as well as a high resolution bathymetry of the lake displaying the landslide deposits. Using these data, a numerical model of the Lake Askja landslide and tsunami was developed using GeoClaw, a software package for numerical analysis of geophysical flow problems. Both the shallow water version and an extension of GeoClaw that includes dispersion, was employed to simulate the wave generation, propagation, and run-up due to the rockslide plunging into the lake. The rockslide was modeled as a block that was allowed to stretch during run-out after entering the lake. An optimization approach was adopted to constrain the landslide parameters through inverse modeling by comparing the calculated inundation with the observed run 17. Has the tsunami arrived? Part II. Science.gov (United States) Halverson, Dean; Glowac, Wayne 2009-01-01 Healthcare is an industry in the midst of significant change. 
After years of double-digit cost increases, the system has reached a tipping point. Where once only employers were heard crying out for change, the call is now coming from all levels of American society. The voice that is most important to effect change is the newest--that of the consumer. In part two of our overview of the healthcare tsunami, we hope to offer you some insights and practical ideas on how to improve the return on investment of your marketing. We believe those who work to understand the new market forces and react with insight will not just survive during the tsunami, they will thrive. 18. Tsunami prevention and mitigation necessities and options derived from tsunami risk assessment in Indonesia Science.gov (United States) Post, J.; Zosseder, K.; Wegscheider, S.; Steinmetz, T.; Mück, M.; Strunz, G.; Riedlinger, T.; Anwar, H. Z.; Birkmann, J.; Gebert, N. 2009-04-01 Risk and vulnerability assessment is an important component of an effective End-to-End Tsunami Early Warning System and therefore contributes significantly to disaster risk reduction. Risk assessment is a key strategy to implement and design adequate disaster prevention and mitigation measures. The knowledge about expected tsunami hazard impacts, exposed elements, their susceptibility, coping and adaptation mechanisms is a precondition for the development of people-centred warning structures, local specific response and recovery policy planning. The developed risk assessment and its components reflect the disaster management cycle (disaster time line) and cover the early warning as well as the emergency response phase. Consequently the components hazard assessment, exposure (e.g. how many people/ critical facilities are affected?), susceptibility (e.g. are the people able to receive a tsunami warning?), coping capacity (are the people able to evacuate in time?) and recovery (are the people able to restore their livelihoods?) are addressed and quantified. Thereby the risk assessment encompasses three steps: (i) identifying the nature, location, intensity and probability of potential tsunami threats (hazard assessment); (ii) determining the existence and degree of exposure and susceptibility to those threats; and (iii) identifying the coping capacities and resources available to address or manage these threats. The paper presents results of the research work, which is conducted in the framework of the GITEWS project and the Joint Indonesian-German Working Group on Risk Modelling and Vulnerability Assessment. The assessment methodology applied follows a people-centred approach to deliver relevant risk and vulnerability information for the purposes of early warning and disaster management. The analyses are considering the entire coastal areas of Sumatra, Java and Bali facing the Sunda trench. Selected results and products like risk maps, guidelines, decision support 19. Maximum Acceleration Recording Circuit Science.gov (United States) Bozeman, Richard J., Jr. 1995-01-01 Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. 
In comparison with prior electronic data-acquisition systems designed for same purpose, circuit simpler, less bulky, consumes less power, costs and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks. 20. Deterministic Tectonic Origin Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas Science.gov (United States) Necmioglu, O.; Meral Ozel, N. 2014-12-01 Accurate earthquake source parameters are essential for any tsunami hazard assessment and mitigation, including early warning systems. Complex tectonic setting makes the a priori accurate assumptions of earthquake source parameters difficult and characterization of the faulting type is a challenge. Information on tsunamigenic sources is of crucial importance in the Eastern Mediterranean and its Connected Seas, especially considering the short arrival times and lack of offshore sea-level measurements. In addition, the scientific community have had to abandon the paradigm of a ''maximum earthquake'' predictable from simple tectonic parameters (Ruff and Kanamori, 1980) in the wake of the 2004 Sumatra event (Okal, 2010) and one of the lessons learnt from the 2011 Tohoku event was that tsunami hazard maps may need to be prepared for infrequent gigantic earthquakes as well as more frequent smaller-sized earthquakes (Satake, 2011). We have initiated an extensive modeling study to perform a deterministic Tsunami Hazard Analysis for the Eastern Mediterranean and its Connected Seas. Characteristic earthquake source parameters (strike, dip, rake, depth, Mwmax) at each 0.5° x 0.5° size bin for 0-40 km depth (total of 310 bins) and for 40-100 km depth (total of 92 bins) in the Eastern Mediterranean, Aegean and Black Sea region (30°N-48°N and 22°E-44°E) have been assigned from the harmonization of the available databases and previous studies. These parameters have been used as input parameters for the deterministic tsunami hazard modeling. Nested Tsunami simulations of 6h duration with a coarse (2 arc-min) and medium (1 arc-min) grid resolution have been simulated at EC-JRC premises for Black Sea and Eastern and Central Mediterranean (30°N-41.5°N and 8°E-37°E) for each source defined using shallow water finite-difference SWAN code (Mader, 2004) for the magnitude range of 6.5 - Mwmax defined for that bin with a Mw increment of 0.1. Results show that not only the
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6383038759231567, "perplexity": 5732.931024228995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214702.96/warc/CC-MAIN-20180819051423-20180819071423-00016.warc.gz"}
http://mathhelpforum.com/calculus/221659-u-integral-print.html
# a to the u integral • September 4th 2013, 07:03 PM Jason76 a to the u integral Does look right? http://www.freemathhelp.com/forum/im...s/confused.png As far as correct form, etc..? $\int_{1}^{5} 3^{2x} dx$. Using $\int a^{u} du = \dfrac{a^{u}}{\ln a} + C$ and $\dfrac{d}{dx} (a^{u}) = a^{u} * du * \ln a$. $u = 2x$. $du = 2$. $a = 3$. $\dfrac{1}{2} \int_{1}^{5} 3^{u} du$. $\int_{1}^{5} \dfrac{1}{2} 3^{u} du$. $\int_{1}^{5} \dfrac{3^{u}}{2} du$. $\dfrac{3^{2(u)}}{2\ln 3} |_{1}^{5}$. $\dfrac{3^{2(5)}}{2\ln 3}$ - $\dfrac{3^{2(1)}}{2\ln 3}$. $\dfrac{3^{10}}{2\ln 3}$ - $\dfrac{3^{2}}{2\ln 3}$. $\dfrac{59049}{2\ln 3}$ - $\dfrac{9}{2\ln 3} = \dfrac{59040}{2\ln 3} = \dfrac{29520}{\ln 3}$ • September 4th 2013, 07:30 PM FelixFelicis28 Re: a to the u integral Yes, that's correct, but there are a few mistakes that I feel the need to point out. Quote: Using $\int a^{u} du = \dfrac{a^{u}}{\ln a} + C$ and $\dfrac{d}{dx} (a^{u}) = a^{u} * du * \ln a$. This should be $\frac{d}{dx} a^u = \frac{d}{du} \cdot \frac{du}{dx} a^u = a^u \ln a \cdot \frac{du}{dx}$ by the chain rule. Quote: $u = 2x$ $du = 2$ This should be $u = 2x \implies \frac{du}{dx} = 2$ Quote: $\dfrac{3^{2(u)}}{2\ln 3} \bigg|_{1}^{5}$. I'm not quite sure if this is a typo or not but you have to either re-write your integral after you've integrated in terms of $x$ again if you're going to evaluate it with those limits (which I think you've done by putting the $2$ back in but it got a bit confusing by keeping it in terms of $u$) OR change your limits for $u$ i.e. $u = 2x \implies 1 \to 2, \ 5 \to 10$. Quote: $\dfrac{59049}{2\ln 3}$ - $\dfrac{9}{2\ln 3} = \dfrac{59040}{2\ln 3} = \dfrac{29520}{\ln 3}$ That's correct. :) • September 5th 2013, 12:05 AM Jason76 Re: a to the u integral Now, it's all coming together. Pretty sure, this is right: $\int_{5}^{1} 3^{2x} dx$. $u = 2x$. $du = 2 dx \rightarrow du(\dfrac{1}{2}) = dx \rightarrow \dfrac{du}{2} = dx$. $\dfrac{1}{2} \int_{5}^{1} 3^{u} du \rightarrow \int_{5}^{1} (\dfrac{1}{2}) 3^{u} du \rightarrow \dfrac{3^{u}}{2}$. $= \dfrac{3^{u}}{2\ln 3} |_{5}^{1} \rightarrow \dfrac{3^{2x}}{2\ln 3} |_{5}^{1}$. $\dfrac{3^{2(5)}}{2\ln 3} - \dfrac{3^{2(1)}}{2\ln 3} \rightarrow \dfrac{3^{10}}{2\ln 3} - \dfrac{3^{2}}{2\ln 3}$. $= \dfrac{59049}{2\ln 3} \rightarrow \dfrac{29520}{\ln 3}$.
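As a quick sanity check on the closed-form value worked out in the thread, the short script below compares it against a brute-force numerical approximation. This is only an illustration: the midpoint-rule helper and the step count are arbitrary choices, not anything taken from the forum posts.

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Approximate the definite integral of f over [a, b] with the midpoint rule."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Numerical approximation of the integral discussed above.
numeric = midpoint_integral(lambda x: 3 ** (2 * x), 1, 5)

# Closed form reached in the thread: (3**10 - 3**2) / (2*ln 3) = 29520 / ln 3.
closed_form = 29520 / math.log(3)

print(numeric, closed_form)  # both print roughly 26870.3
```

The two values agree closely, which supports the final answer of 29520/ln 3.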
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 37, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701446652412415, "perplexity": 1139.5622493470792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398449145.52/warc/CC-MAIN-20151124205409-00293-ip-10-71-132-137.ec2.internal.warc.gz"}
http://www.progressingeography.com/CN/10.11820/dlkxjz.2013.11.014
• Climate and Environmental Change • ### Contribution rate of locally evaporated water vapor to precipitation in Southeast China based on hydrogen and oxygen isotopes 1. College of Geography and Environmental Sciences, Northwest Normal University, Lanzhou 730070, China • Received: 2013-02-01 Revised: 2013-10-01 Online: 2013-11-25 Published: 2013-11-25 • Corresponding author: ZHANG Mingjun (1974- ), male, professor and doctoral supervisor, mainly engaged in research on glaciers and the environment. E-mail: [email protected] E-mail:[email protected] • About the first author: MA Qian (1987- ), male, born in Lanzhou, Gansu, master's student whose main research interest is isotope geochemistry. E-mail: [email protected] • Funding: National Natural Science Foundation of China (41161012, 41240001); Fundamental Research Funds for Universities of Gansu Province. ### Contributions of moisture from local evaporation to precipitations in Southeast China based on hydrogen and oxygen isotopes MA Qian, ZHANG Mingjun, WANG Shengjie, WANG Baolong 1. College of Geography and Environment Sciences, Northwest Normal University, Lanzhou 730070, China • Received:2013-02-01 Revised:2013-10-01 Online:2013-11-25 Published:2013-11-25 Abstract: Stable isotopes are considered a diagnostic tool that has been utilized in different media and widely used in geosciences and environmental studies, including the use of hydrogen and oxygen isotopes in rivers, lakes and groundwater to investigate the circulation mechanism as well as the surface runoff composition in drainage basins, and the use of isotopic data from speleothems, tree rings and ice cores to reconstruct paleoclimate. Precipitation is a main input factor in the atmospheric water cycle and contains two natural tracers (18O and 2H) with strong signals for tracking the trajectories of water vapor. The Rayleigh model is a popular model used to investigate changes in moisture sources; many investigators have used it to simulate the variations of δ values in different study areas and obtained good results. In this paper, the study area in Southeast China is mainly influenced by the summer monsoon during the period from June to September. However, with depletion of moisture in clouds, the impact of monsoon moisture changes from coast to inland. Based on Rayleigh theory and an evaporative model used by many researchers to calculate the contribution rate in different areas, we investigated the atmospheric water cycle mechanism, the contribution rate of evaporative vapor and the effect of secondary evaporation in Southeast China during the summer monsoon. (1) The comparison between the modeled values and the observed values indicated that the movement of water vapor abided by Rayleigh theory. (2) It was found that the supply of evaporative vapor from the surface increased from coast to inland. The contribution rate of evaporative vapor, varying from 1.4% to 4.1% in the area, was 2.2% on average. (3) By comparison of the observed d excess to the global average d excess (10‰), it was inferred that the supply of evaporative vapor from the surface and the effect of secondary evaporation both existed in this area. However, the effect of secondary evaporation decreased from coast to inland, suggesting that the decrease of the secondary evaporation may have been compensated by the supply of evaporative vapor from land. Based on the results of this research, it was concluded that the supply of evaporative vapor from the surface and the effect of secondary evaporation both had influences on water circulation in the study area. However, the value of the supply of evaporative vapor and the impact of the secondary evaporation could only be roughly estimated. Related investigations on the supply of evaporative vapor and the effect of secondary evaporation are few and far between in the area.
If the problems above can be comprehensively solved, it will be of great significance not only for studying the regional water cycle, but also for providing basic data for agriculture, meteorology and other purposes. Thus, more sampling sites should be established in this area for detailed studies.
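For readers unfamiliar with the quantities discussed above, the two standard relations behind this kind of analysis are sketched below. These are the textbook forms of the Rayleigh distillation equation and the deuterium excess (d excess), given only for orientation; they are not necessarily the exact formulation the authors used.

$$\delta = (\delta_{0} + 1000)\, f^{\,\alpha - 1} - 1000$$

Here $\delta_{0}$ is the initial isotopic composition of the vapor (in ‰), $f$ is the fraction of vapor remaining in the air mass, and $\alpha$ is the temperature-dependent equilibrium fractionation factor, so progressive rainout (decreasing $f$) depletes the remaining vapor and the later precipitation.

$$d = \delta^{2}\mathrm{H} - 8\,\delta^{18}\mathrm{O}$$

The global average value of $d$ is about 10‰, which is the reference the abstract compares against; departures from it are commonly read as signs of added locally evaporated moisture or of sub-cloud (secondary) evaporation.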
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45893532037734985, "perplexity": 2686.7206581150026}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00552.warc.gz"}
https://openmw.readthedocs.io/en/latest/manuals/openmw-cs/tour.html
# A Tour through OpenMW CS: making a magic ring¶ In this first chapter we will create a mod that adds a new ring with a simple enchantment to the game. The ring will give its wearer a permanent Night Vision effect while being worn. You do not need previous Morrowind modding experience, but you should be familiar with the game itself. There will be no scripting necessary, we can achieve everything using just what the base game offers out of the box. Before continuing make sure that OpenMW is properly installed and playable. ## Adding the ring to the game’s records¶ In this first section we will define what our new ring is, what it looks like, and what it does. Getting it to work is the first step before we go further. ### Starting up OpenMW CS¶ We will start by launching OpenMW CS, the location of the program depends on your operating system. You will be presented with a dialogue with three options: create a new game, create a new addon, edit a content file. The first option is for creating an entirely new game, that’s not what we want. We want to edit an existing game, so choose the second option. When you save your addon you can use the third option to open it again. You will be presented with another window where you get to choose the content to edit and the name of your project. Then we have to select at least the base game and optionally a number of other addons we want to depend on. The name of the project is arbitrary, it will be used to identify the addon later in the OpenMW launcher. Choose Morrowind as your content file and enter Ring of Night Vision as the name. We could also choose further content files as dependencies if we wanted to, but for this mod the base game is enough. Once the addon has been created you will be presented with a table. If you see a blank window rather than a table choose WorldObjects from the menu. Let’s talk about the interface for a second. Every window in OpenMW CS has panels, these are often but not always tables. You can close a panel by clicking the small “X” on the title bar of the panel, or you can detach it by either dragging the title bar or clicking the icon with the two windows. A detached panel can be re-attached to a window by dragging it by the title bar on top of the window. Now let’s look at the panel itself: we have a filter text field, a very large table and a status bar. The filter will be very useful when we want to find an entry in the table, but for now it is irrelevant. The table you are looking at contains all objects in the game, these can be items, NPCs, creatures, whatever. Every object is an entry in that table, visible as a row. The columns of the table are the attributes of each object. Morrowind uses something called a relational database for game data. If you are not familiar with the term, it means that every type of thing can be expressed as a table: there is a table for objects, a table for enchantments, a table for icons, one for meshes and so on. Properties of an entry must be simple values, like numbers or text strings. If we want a more complicated property we need to reference an entry from another table. There are a few exceptions to this though, some tables do have subtables. The effects of enchantments are one of those exceptions. ### Defining a new record¶ Enough talk, let’s create the new ring now. Right-click anywhere in the objects table, choose Add Record and the status bar will change into an input field. We need to enter an ID (short for identifier) and pick the type. 
The identifier is a unique name by which the ring can later be identified; I have chosen ring_night_vision. For the type choose Clothing. The table should jump right to our newly created record; if not, read further below to learn how to use filters to find a record by ID. Notice that the Modified column now shows that this record is new. Records can also be Base (unmodified), Modified and Deleted. The other fields are still empty since we created this record from nothing. We can double-click a table cell while holding Shift to edit it (this is a configurable shortcut), but there is a better way: right-click the row of our new record and choose Edit Record, and a new panel will open. You can set the name, weight and coin value as you like; I chose Ring of Night Vision, 0.1 and 2500, respectively. Make sure you set the Clothing Type to Ring. We could set the other properties manually as well, but unless you have an exceptional memory for identifiers and never make typos that's not feasible. What we are going to do instead is find the records we want in their respective tables and assign them from there. ### Finding records using filters¶ We will add an icon first. Open the Icons table the same way you opened the Objects table: in the menu click Assets → Icons. If the window gets too crowded remember that you can detach panels. The table is huge and not every ring icon starts with "ring", so we have to use filters to find what we want. Filters are a central element of OpenMW CS and a major departure from how the original Morrowind CS was used. In fact, filters are so important that they have their own table as well. We won't be going that far for now though. There are three types of filters: Project filters are part of the project and are stored in the project file, session filters are only valid until you exit the CS, and finally instant filters, which are used only once and are typed directly into the Filter field. For this tutorial we will only use instant filters. We type the definition of the filter directly into the filter field rather than the name of an existing filter. To signify that we are using an instant filter we have to use ! as the first character. Type the following into the field: !string("id", ".*ring.*") A filter is defined by a number of queries which can be logically linked. For now all that matters is that the string(<property>, <pattern>) query will check whether <property> matches <pattern>. The pattern is a regular expression; if you don't know about them you should learn their syntax. For now all that matters is that . stands for any character and * stands for any amount, even zero. In other words, we are looking for all entries which have an ID that contains the word "ring" somewhere in it. This is a pretty dumb pattern because it will also match words like "ringmail", but it's good enough for now. If you have typed the filter definition properly the text should change from red to black and our table will be narrowed down a lot. Browse for an icon you like and drag & drop its table row onto the Icon field of our new ring. That's it, you have now assigned a reference to an entry in another table to the ring entry in the Objects table. Repeat the same process for the 3D model; you can find the Meshes table under Assets → Meshes.
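Since the filter pattern is an ordinary regular expression, a quick way to get a feel for what .*ring.* does and does not match is to try it outside the editor. The snippet below is only an illustration and uses Python's re module, not OpenMW CS; the example IDs are made up for the demonstration rather than taken from the game's tables.

```python
import re

# The same pattern used in the instant filter: "ring" anywhere in the ID.
pattern = re.compile(r".*ring.*")

# Hypothetical record IDs, invented for this example.
ids = ["ring_night_vision", "common_ring_01", "ringmail_cuirass", "iron_dagger"]

for record_id in ids:
    # fullmatch mirrors a filter that has to describe the entire ID string.
    print(record_id, bool(pattern.fullmatch(record_id)))

# ring_night_vision True
# common_ring_01 True
# ringmail_cuirass True   <- the "ringmail" caveat mentioned above
# iron_dagger False
```

As the third result shows, the pattern is deliberately loose; a stricter filter would need to anchor or otherwise qualify the word "ring".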
### Adding the enchantment¶ Putting everything you have learned so far into practice, we can add the final and most important part to our new ring: the enchantment. You know enough to perform the following steps without guidance: Open the Enchantments table (Mechanics → Enchantments) and create a new entry with the ID Cats Eye. Edit it so that it has the Constant Effect enchantment type. To add an effect to the enchantment right-click the Magic Effects table and choose Add new row. You can edit the effects by right-clicking their table cells. Set the effect to NightEye, range to Self, and both magnitudes to 50. The other properties are irrelevant. Once you are done add the new enchantment to our ring. That's it, we now have a complete enchanted ring to play with. Let's take it for a test ride. Launch OpenMW and in the launcher under Data Files check your addon. Load a game and open the console. We have only defined the ring, but we haven't placed any instance of it anywhere in the game world, so we have to create one. In the console type: player->AddItem "ring_night_vision" 1 The part in quotation marks is the ID of our ring; you have to adjust it if you chose a different ID. Exit the console and you should find a new ring in your inventory. Equip it and you will instantly receive the Night Vision effect for your character. ### Conclusion¶ In this tutorial we have learned how to create a new addon, what tables are and how to create new records. We have also taken a very brief glimpse at the syntax of filters, a feature you will be using a lot when creating larger mods. This mod is a pure addition; it does not change any of the existing records. However, if you want to actually present appealing content to the player rather than just offering abstract definitions you will have to change the game's content. In the next tutorial we will learn how to place the ring in the game world so the player can find it legitimately. ## Adding the ring to the game's world¶ Now that we have defined the ring it is time to add it to the game world so the player can find it legitimately. We will add the ring to a merchant, place it in a chest, and put it somewhere in plain sight. To this end we will have to actually modify the contents of the game. ### Adding to an NPC¶ The simplest way is probably to add it to the inventory of a shopkeeper. An obvious candidate is Arrille in Seyda Neen - he's quick to find in a new game and he's easy to find in the CS as his name comes early alphabetically. Open the CS and open the Objects table (World → Objects). Scroll down to Arrille, or use a filter like !string("ID", "arrille"). Open another pane to edit him - either right-click and select Edit Record or use the shortcut (the default is a Shift double-click). Scroll down to the inventory section and right-click to add a new row. Type in the ID of the ring (or find it in the Objects pane and drag and drop). Set the number of rings for him to stock - a negative number indicates that he will restock to maintain that level. However, it's an attractive item, so he will probably wear it rather than sell it. So set his stock level too high for him to wear them all (3 works, 2 might do). Another possibility, again in Seyda Neen making it easy to access, would be for Fargoth to give it to the player in exchange for his healing ring. Open the Topic Infos table (Characters → Topic Infos). Use a filter !string("Topic", "ring") and select the row with a response starting with "You found it!".
Edit the record, firstly by adding a bit more to the response, then by adding a line to the script to give the ring to the player - the same as used earlier in the console: player->AddItem "ring_night_vision" 1 ### Placing in a chest¶ For this example we will use the small chest intended for lockpick practice, located in the Census and Excise Office in Seyda Neen. First we need the ID of the chest - this can be obtained either by clicking on it in the console in the game, or by applying a similar process in the CS: open World → Cells, select "Seyda Neen, Census and Excise Office", right-click and select "View", use the mouse wheel to zoom in/out and the mouse plus the WASD keys to navigate, then click on the small chest. Either way, you should find the ID, which is "chest_small_02_lockprac". Open the Objects table (World → Objects) and scroll down to find this item. Alternatively use the Edit → Search facility, selecting ID rather than Text, enter "lockprac" (without the quotes) into the search box and press "Search", which should return two rows; then select the "Container" one rather than the "Instance". Right-click and choose "Edit Record". Right-click the "Content" section and select "Add a row". Set the Item ID of the new row to be your new ring - the simplest way is probably to open the Objects table if it's not already open, sort on the "Modified" column (which should bring the ring, with its status of "Added", to the top), then drag and drop it onto the chest row. Increase the Count to 1. Save the addon, then test to ensure it works - e.g. start a new game and lockpick the chest. ### Placing in plain sight¶ Let's hide the Ring of Night Vision in the cabin of the [Ancient Shipwreck](https://en.uesp.net/wiki/Morrowind:Ancient_Shipwreck), a derelict vessel southeast of Dagon Fel. Open the list of Cells (World → Cells) and find "Ancient Shipwreck, Cabin". This will open a visualization of the cabin. You can navigate around the scene just like you would when playing Morrowind. Use the WASD keys to move forward, backwards, and sideways. Click and drag with the left mouse button to change the direction you are looking. Navigate to the table in the cabin. If you've closed the Objects table, reopen it via World → Objects. Navigate to your Ring of Night Vision (you can find it easily if you sort by the "Modified" column). Drag the ring from the Objects table onto the table in the Cell view. Now let's move the ring to the precise location we want. Hover over the ring and click the middle mouse button. If you don't have a middle mouse button, you can select an alternative command by going to Edit → Preferences… (Windows, Linux) or OpenMW → Preferences… (macOS). Go to the Key Bindings section and choose "Scene" from the dropdown menu. Then click on the button for "Primary Select" and choose an alternative binding. After you have switched to movement mode, you will see several arrows. Clicking and dragging them with the right mouse button will allow you to move the object in the direction you want. If you'd like an easy way to test this, you can start OpenMW with the [game arguments](https://wiki.openmw.org/index.php?title=Testing) --start="Ancient Shipwreck, Cabin" --skip-menu. This will place you right in the cell and allow you to pick up and equip the ring in order to check that it works.
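If you prefer to script that quick test, a launch wrapper along the following lines works; the executable name is an assumption (point it at your actual OpenMW binary if it is not on the PATH), while the two arguments are the ones quoted above from the linked wiki page.

```python
import subprocess

# Assumes "openmw" is on the PATH; replace with the full path to your binary otherwise.
subprocess.run([
    "openmw",
    "--start=Ancient Shipwreck, Cabin",  # no shell quoting needed when passed as a list element
    "--skip-menu",
])
```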
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1963692158460617, "perplexity": 1181.0395011933233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710685.0/warc/CC-MAIN-20221129031912-20221129061912-00720.warc.gz"}
https://medcraveonline.com/JABB/biorecovery-of-sewage-polluted-by-waste-motor-oil.html
# Applied Biotechnology & Bioengineering Research Article Volume 9 Issue 3 # Biorecovery of sewage polluted by waste motor oil #### David Garcia-Hernandez, Liliana Marquez Benavides, Juan Luis Ignacio-De la Cruz, Juan Manuel Sanchez-Yanez Environmental Microbiology Laboratory, Research Institute in Chemistry and Biology, México Correspondence: Juan Manuel Sánchez-Yanez, Environmental Microbiology Laboratory, Research Institute in Chemistry and Biology, Ed. B-3, Universidad Michoacana de San Nicolás de Hidalgo, Av. Francisco J Mujica S/N, Col Felicitas del Rio, CP 58.000, Morelia, Michoacan, México, Tel 01 (443) 322 35 00 Received: April 20, 2022 | Published: May 4, 2022 Citation: García-Hernández D, Márquez-Benavides L, Ignacio-De la Cruz JL, et al. Biorecovery of sewage polluted by waste motor oil. J Appl Biotechnol Bioeng. 2022;9(3):62-65. DOI: 10.15406/jabb.2022.09.00286 # Abstract An acute problem in México and everywhere is the reutilization of sewage polluted by hydrocarbons such as waste motor oil (WMO), a toxic waste according to the General Law of Ecological Balance and Environmental Protection; NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997 set maximum permissible limits of 25 ppm of hydrocarbons in sewage, 75 ppm for wastewater discharged to urban sewage systems and 15 ppm for treated wastewater for public reuse, respectively, and exceeding these values inhibits the treatment of that domestic sewage. An alternative solution is biostimulation with detergent, minerals and O2 (oxygen) that induce the aerobic heterotrophic microbial population in the sewage to eliminate WMO and reuse it. The objective of this work was the biostimulation of domestic sewage contaminated by WMO until it decreased to a value lower than the maximum of the NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997.
For this, the sewage impacted by WMO was diluted and biostimulated with the detergent Tween 80, a mineral solution (MS) and H2O2 as a source of O2, using as response variables: i) CO2 production due to the mineralization of WMO in the sewage, and ii) the decrease in the concentration of WMO in the sewage, determined by gas chromatography coupled to mass spectrometry (GC-MS) and by the Soxhlet method; the experimental data were analyzed by ANOVA/Tukey HSD (P ≤ 0.05). The results indicate that biostimulation of the water impacted by WMO with Tween 80, MS and H2O2 reduced the concentration to 10 ppm, lower than the limits established by NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997, due to mineralization of the WMO and the evidence of its disappearance according to the GC-MS analysis. This demonstrated the biorecovery of water contaminated by WMO, allowing industrial and/or recreational reuse.

Keywords: water reutilization, hydrocarbons, bioremediation, public and environmental health

# Abbreviations

WMO, waste motor oil; MS, mineral solution; O2, oxygen

# Introduction

In Mexico, waste motor oil (WMO), a product of the petrochemical industry, is a complex mixture of linear, branched and polycyclic aromatic aliphatic hydrocarbons.1 It results from the lubrication cycle of automobile engines and, at the end of its useful life, is often not properly disposed of by the mechanical workshops that change the oil;2 WMO is thrown into the drain without reuse treatment,3 in part because components of WMO are toxic to life.4 When WMO impacts surface water and groundwater, the General Law of Ecological Balance and Environmental Protection (GLEBEP) classifies it as a hazardous waste; in sewage it is regulated by NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997. The spillage of WMO into the drainage system is a real problem because it is not cleaned up by sewage treatment, aggravating the scarcity of water for reutilization when the level of WMO exceeds 25 ppm for sewage, 75 ppm for urban sewage systems and 15 ppm for reutilization of this type of water in public service. An ecological way to eliminate WMO from impacted sewage is biostimulation with a detergent that emulsifies the hydrocarbons, followed by biostimulation with a mineral solution (MS) based on minerals such as N (nitrogen), P (phosphorus), etc.; the process also demands an O2 (oxygen) source to improve WMO mineralization.5–7 However, the cleaning up of sewage impacted by petroleum derivatives is not well understood.8 Based on the above, the objective of this research was the biostimulation of sewage contaminated by WMO to eliminate it to a level lower than that accepted by NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997.

# Material and methods

This research was conducted in microcosms in a Bartha respirometer at the environmental microbiology laboratory, Instituto de Investigaciones Químico-Biológicas, Universidad Michoacana de San Nicolás de Hidalgo, Morelia, Mich., México.

Biostimulation of sewage polluted by waste motor oil

For this purpose, 500 mL Bartha flasks were used (Figure 1), each with 100 mL of sewage polluted by WMO (from an auto mechanic shop in Morelia, Mich., Mexico) diluted 1:100, equivalent to 10,000 ppm, biostimulated with 0.01% Tween 20, followed by a mineral solution (MS) with the following chemical composition (g•L-1): K2HPO4, 5.0; KH2PO4, 4.0; MgSO4, 3.0; NH4NO3, 10.0; CaCO3, 1.0; KCl, 2.0; ZnSO4, 0.5; CuSO4, 0.5; FeSO4, 0.2, and EDTA, 8.
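As a small practical aid (not part of the original methods), the mineral solution recipe above can be scaled to any batch volume; the Python sketch below simply multiplies the listed g•L-1 values, with the EDTA figure reproduced as printed.

```python
# Scale the mineral solution (MS) recipe above (g per litre) to a batch volume.
# Illustrative only; salt names and values are taken from the text as printed.
MS_G_PER_L = {
    "K2HPO4": 5.0, "KH2PO4": 4.0, "MgSO4": 3.0, "NH4NO3": 10.0,
    "CaCO3": 1.0, "KCl": 2.0, "ZnSO4": 0.5, "CuSO4": 0.5,
    "FeSO4": 0.2, "EDTA": 8.0,
}

def ms_batch(volume_l: float) -> dict:
    """Grams of each component needed for volume_l litres of MS."""
    return {salt: round(g_per_l * volume_l, 3) for salt, g_per_l in MS_G_PER_L.items()}

if __name__ == "__main__":
    for salt, grams in ms_batch(0.5).items():   # e.g. a 500 mL batch
        print(f"{salt}: {grams} g")
```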
The flasks were shaken at 100 rpm and incubated at 30 ± 2ºC for 3 weeks; the experiment was carried out with 4 repetitions. As a relative control we used a flask with sewage impacted by diluted WMO without biostimulation; as absolute control, a flask with distilled water biostimulated with Tween 20 and MS; as sterile control, a flask with sterilized water (121°C/15 min) and Tween 20, impacted by WMO and biostimulated with MS. Treatment 1 was a flask with sewage impacted by WMO, biostimulated with Tween 20 and MS; treatment 2 was a flask with sewage impacted by WMO, biostimulated with Tween 20, MS and H2O2, as shown in Table 1. To demonstrate the mineralization of the WMO in the sewage by biostimulation, 10 mL of 0.1 N KOH was added to the side arm of each flask to capture the CO2; every 24 h the 0.1 N KOH was withdrawn from each flask and the CO2 production was quantified by titration with 0.1 N HCl.9

Figure 1 Bartha respirometer to measure biostimulation in sewage impacted by waste motor oil.

Table 1 Experimental design for the biostimulation of sewage polluted by waste motor oil with Tween 20, a mineral solution and H2O2 under agitation

| Variables | Relative control | Absolute control | Sterile control | T-1 | T-2 |
| --- | --- | --- | --- | --- | --- |
| WMO | + | - | - | + | + |
| Sterile WMO | - | - | + | - | - |
| Distilled water | - | + | - | - | - |
| Mineral solution* | - | + | + | + | + |
| Tween 20: 0.01% | + | + | + | + | + |
| Agitation: 100 rpm | + | + | + | + | + |
| H2O2: 25 ppm | - | + | - | - | + |

Quantification of WMO aliphatic hydrocarbons in biostimulated sewage

In the sewage polluted by WMO and biostimulated with Tween 20, MS and H2O2, the WMO was measured by GC-MS analysis and by the Soxhlet method. For this purpose, a Hewlett-Packard (Waldbronn, Germany) 6890 series gas chromatograph coupled to a 5792 A series mass spectrometer was used, with a 30 m long HP-5MS capillary column with an internal diameter of 0.25 mm and a film thickness of 0.25 mm; the injection was in split mode, the carrier gas was He (helium) with a flow rate of 37 cm•s-1, the oven temperature was 40ºC for 8 min with an increase to 180ºC at 6ºC min-1, and the injector temperature was 250ºC.10

Statistical analysis

All result data were subjected to ANOVA and Tukey comparison of means (P ≤ 0.05) with the Statgraphics Centurion statistical program.11

# Results and discussion

Figure 2 shows the production of CO2 during the biostimulation of the sewage with Tween 20, MS and H2O2 under agitation, where the mineralization of WMO was detected, measured indirectly by the amount of CO2 released, with a maximum of 3.5 mmol/mL at 24 h. Related research supports that, in sewage, the limiting factor for WMO elimination is the supply of enough basic minerals for the native microorganisms. When these are in balance, the diverse aerobic heterotrophic microorganisms can mineralize the WMO in a relatively short time, given that aliphatics are the main hydrocarbons of WMO; according to the Soxhlet analysis the residual concentration was 10 ppm, a value lower than the maximum accepted by NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997. This contrasts with the assay of sewage polluted by WMO without biostimulation with MS, where the heterotrophic microorganisms native to the sewage were unable to eliminate the hydrocarbons of the WMO, as shown by the lowest production of CO2.
At the same time, it is shown that the WMO was consumed through the capacity of the aerobic heterotrophic microorganisms, since sterilization of the sewage destroyed them and thus suppressed the generation of CO2 despite biostimulation with MS; these CO2 values were statistically different from those of the sewage biostimulated with MS, supporting that nutrition is the limiting factor of the bioremediation of sewage polluted by WMO. The same applies to the trial where sewage impacted by WMO was biostimulated with Tween 20 but without MS: it was clear that the detergent is necessary only for the emulsification of the WMO, and without biostimulation by MS there was no CO2 production, the main evidence of WMO mineralization. A critical point of the research is to understand the dynamics of bioremediation of sewage polluted by WMO. Therefore, the data shown in this research support bioremediation of sewage as an ecological strategy for its reuse, as reported by Gopinath, who indicated that bioremediation of sewage impacted by WMO reached an elimination percentage of 92.5%.

Figure 2 CO2 production in sewage (water) contaminated by waste motor oil biostimulated with Tween 20, mineral solution and H2O2 at 100 rpm. *n = 4. WMO: sewage polluted by waste motor oil, diluted 1:100, biostimulated with MS: mineral solution, Tween 20 at 0.01%; temperature: 30 ± 2ºC; agitation: 100 rpm. **Distinct letters indicate statistical differences according to ANOVA/Tukey HSD (P ≤ 0.05).

In Figure 3, the biostimulation of sewage contaminated by WMO with Tween 20, MS plus intermittent H2O2 kept the concentration of O2 available for a longer time and decreased its rapid volatilization;12 in consequence it induced a maximum mineralization of the WMO and, simultaneously, the production of CO2 up to a value of 5.88 mmol/mL at 24 h. In comparison, the sewage polluted by WMO biostimulated with the MS but without H2O2, where the aerobic heterotrophic microorganisms lacked sufficient O2, produced less CO2, with 3.5 mmol/mL; this supports that mineralization depends on H2O2 as a limiting factor for the effective elimination of the WMO.

Figure 3 CO2 production during the biostimulation of sewage (water) contaminated by waste motor oil with Tween 20, mineral solution and intermittent application of H2O2 under agitation at 100 rpm.

Figure 4 shows that, during the biostimulation of sewage contaminated by WMO, Tween 20 emulsifies the aliphatic hydrocarbons of the WMO while the MS, owing to its chemical composition with basic minerals of nitrogen, phosphates, etc., induced the native microorganisms to oxidize the WMO under aerobic conditions accelerated by the application of H2O2; in consequence the maximum generation of CO2 of 5.88 mmol/mL was registered, a value statistically different from that of the sewage polluted by WMO biostimulated only with Tween 20 and MS, which generated 3.5 mmol/mL of CO2, while sewage not polluted by WMO biostimulated with Tween 20 plus MS and H2O2 registered the lowest value, 0.5 mmol/mL of CO2.

Figure 4 CO2 production during biostimulation of sewage contaminated by waste motor oil (WMO) with Tween 20, mineral solution and regular application of H2O2. *n = 4. WMO: sewage contaminated by waste motor oil diluted 1:100 (10,000 ppm) with MS: mineral solution; temperature: 30 ± 2ºC; agitation: 100 rpm. **Distinct letters indicate statistical differences according to ANOVA/Tukey HSD (P ≤ 0.05).
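The "distinct letters" in the figure captions come from a one-way ANOVA followed by Tukey's HSD, which the authors ran in Statgraphics Centurion. Purely as an illustration (not the authors' script, and with placeholder CO2 values rather than the measured data), the same comparison can be reproduced with standard Python libraries:

```python
# Illustrative one-way ANOVA + Tukey HSD on hypothetical CO2 readings (mmol/mL).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

co2 = {                        # 4 replicates per treatment (placeholder values)
    "T2_Tween_MS_H2O2":  [5.9, 5.8, 5.9, 5.88],
    "T1_Tween_MS":       [3.4, 3.6, 3.5, 3.5],
    "no_biostimulation": [0.6, 0.4, 0.5, 0.5],
}

f_stat, p_val = stats.f_oneway(*co2.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_val:.4g}")

values = np.concatenate(list(co2.values()))
groups = np.repeat(list(co2.keys()), [len(v) for v in co2.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))   # pairwise comparisons, P <= 0.05
```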
Figure 5 shows the chromatographic profile of sewage impacted by WMO before biostimulation with Tween 20, mineral solution, H2O2 and agitation at 100 rpm, where it was detected that the main aliphatic hydrocarbons of the WMO were chains between C11 and C20; this profile was used as the absolute control, and in that case the aliphatic hydrocarbon concentration of the WMO was similar (data not shown).

Figure 5 Chromatogram of non-sterilized sewage contaminated by waste motor oil before biostimulation with Tween 20, mineral solution and H2O2.

Figure 6 shows the biostimulation of sewage impacted by WMO with Tween 20, MS and H2O2, with evident elimination of 100% of the aliphatic hydrocarbons with chains of 11, 12 and 20 carbons; this agrees with reports in the literature supporting that the hydrocarbons of WMO with the fewest carbons are the first to be oxidized, since short-chain alkanes mineralize faster than long-chain ones.13 The results show that sewage impacted by WMO harbors a wide diversity of aerobic heterotrophic microorganisms able to mineralize 96% of the WMO aliphatics of 13-19 carbons, based on the chromatographic analysis after biostimulation of the sewage with Tween 20, MS and H2O2, which are necessary for the mineralization of the different WMO aliphatics. At the same time, the Soxhlet analysis indicated that the final concentration of the WMO was close to 10 ppm, a value lower than the maximum permissible limits of NOM-001-SEMARNAT-1996, NOM-002-ECOL-1997 and NOM-003-ECOL-1997,14 confirming recovery of the waste water for reuse in garden and industrial irrigation. To further improve the cleaning up of sewage contaminated with WMO, bioaugmentation could be applied to accelerate WMO mineralization under environmental conditions close to what happens in common sewage.15

Figure 6 Chromatogram of sewage contaminated by waste motor oil after biostimulation with Tween 20, mineral solution and H2O2, agitation at 100 rpm at 30°C.

Figure 7 shows the result of the biostimulation of sewage contaminated by WMO with Tween 20, MS and H2O2, under agitation at 100 rpm at 30°C, on the mineralization of aliphatics with carbon chains between C11-C20. This supports that the native aerobic heterotrophic microbial consortium in the sewage has the ability to eliminate up to 96% in the 21 days of the assay; mineralization of 100% of the short-chain aliphatic hydrocarbons of WMO was evident, mainly undecane and dodecane, in contrast to what was detected for the C13-C19 WMO aliphatics, which remained at a concentration of 4% at the end of the experiment, supported by other reports related to the bioremediation of environments polluted by hydrocarbons like WMO.4,6,16–18

Figure 7 Percentage of remaining aliphatics from the biostimulation of sewage impacted by waste motor oil (WMO) with Tween 20, mineral solution and H2O2.

# Conclusion

The results of this research support biostimulation as a strategy for the recovery of sewage impacted by WMO: actions that restore the physicochemical environment induce microorganisms to oxidize WMO, eliminating its main aliphatic and aromatic hydrocarbons and allowing industrial and/or recreational reuse.

# Acknowledgments

Thanks to project 2.7 (2022) of the CIC-UMSNH and to BIONUTRA, SA de CV, Maravatío, Mich., Mexico for their support.

# Conflicts of interest

The authors declare no conflict of interest.

# Funding

This manuscript received no external funding.

# References

©2022 García-Hernández, et al.
This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial license, which permits unrestricted use, distribution, and building upon the work non-commercially.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5069764852523804, "perplexity": 17155.813389419825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104506762.79/warc/CC-MAIN-20220704232527-20220705022527-00217.warc.gz"}
https://studyadda.com/question-bank/automobile-engineering_q24/5088/388282
Which of the following stresses are associated with the tightening of a nut on a stud? A) Tensile stresses due to stretching of stud. B) Bending stresses of stud. C) Transverse shear stresses across threads. D) Torsional shear stresses in threads due to frictional resistance. Correct Answer: B
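For context, a note added here rather than taken from the original card: standard machine-design treatments attribute to tightening a direct tensile stress from the preload and a torsional shear stress from thread friction, roughly

$\sigma_t = \dfrac{F_i}{A_t}, \qquad \tau \approx \dfrac{16\,T_{th}}{\pi d_c^{3}},$

where $F_i$ is the preload, $A_t$ the tensile stress area, $T_{th}$ the portion of the wrench torque carried by the threads, and $d_c$ the core (minor) diameter; these symbols are ours, not the card's. Bending of the stud normally arises only from secondary effects such as non-parallel seating faces rather than from the tightening itself.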
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9104121327400208, "perplexity": 5002.557015622176}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400191160.14/warc/CC-MAIN-20200919075646-20200919105646-00312.warc.gz"}
http://mathhelpforum.com/pre-calculus/146071-polar-equations-limited-information-given.html
Math Help - Polar equations, limited information given 1. Polar equations, limited information given Hi, for polar equations, you usually need r and theta, in this case, I am only given one bit of information, how do I do it? 1) Convert these polar equations to Cartesian: a] $r = 4$ b] $r = 2cos\theta$ 2) Convert from Cartesian to polar: a] $x^2 + (y - 1)^2 = 1$ b] $y = x + 1$ Thanks for any help 2. There are 2 equations that define the relationship between cartesian and polar coordinates: $x=r\cos{\theta}$ $y=r\sin{\theta}$ or, equivalently $x^2 + y^2 = r^2$ $\theta = \tan{\frac{y}{x}}$ To do your conversions, just make the substitutions, eg: 1a $r=4$ $r^2=16$ $x^2 + y^2=16$ 3. Okay, I've got that, but what about the others. For example, 2b) surely can't be: $y = r sin\theta$ (formula) $y = x + 1$ (given) $x + 1 = r sin\theta$ $\therefore r = \frac{x + 1}{sin\theta}$ ...could it? 4. Originally Posted by SpringFan25 There are 2 equations that define the relationship between cartesian and polar coordinates: $x=r\cos{\theta}$ $y=r\sin{\theta}$ or, equivalently $x^2 + y^2 = r^2$ $\theta = \tan{\frac{y}{x}}$ Actually, it's $\tan{\theta} = \frac{y}{x}$ or $\theta = \arctan{\frac{y}{x}}$. It's also important to take into account which quadrant you are working in. 5. *ahem* yes i rushed it 6. Originally Posted by BG5965 Okay, I've got that, but what about the others. For example, 2b) surely can't be: $y = r sin\theta$ (formula) $y = x + 1$ (given) $x + 1 = r sin\theta$ $\therefore r = \frac{x + 1}{sin\theta}$ ...could it? You need to substitute for the x as well. so, to start you off: $y = x + 1$ $r\sin{\theta} = r\cos{\theta} + 1$ 7. Is it: $r = \frac{1}{sin\theta - cos\theta}$ ? 8. i think so, but i fear the wrath of prove it if im wrong 9. Originally Posted by SpringFan25 i think so, but i fear the wrath of prove it if im wrong Yes it's correct. And yes, I'm always watching
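For reference (a worked example added here, not part of the original thread), the remaining conversions follow from the same substitutions:

1b) Multiply $r = 2\cos\theta$ by $r$: $r^2 = 2r\cos\theta$, so $x^2 + y^2 = 2x$, i.e. $(x-1)^2 + y^2 = 1$.

2a) Expand $x^2 + (y-1)^2 = 1$ to get $x^2 + y^2 = 2y$; substituting $r^2 = x^2 + y^2$ and $y = r\sin\theta$ gives $r^2 = 2r\sin\theta$, hence $r = 2\sin\theta$ (for $r \neq 0$).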
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 28, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9454615712165833, "perplexity": 2444.160404171271}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
https://en.m.wikibooks.org/wiki/TeX/clubpenalty
# TeX/penalty < TeX(Redirected from TeX/clubpenalty) ## Synopsis ```\penalty<number> \binoppenalty=<number> \brokenpenalty=<number> \clubpenalty=<number> \displaywidowpenalty=<number> \exhyphenpenalty=<number> \floatingpenalty=<number> \hyphenpenalty=<number> \interlinepenalty=<number> \linepenalty=<number> \postdisplaypenalty=<number> \predisplaypenalty=<number> \relpenalty=<number> \widowpenalty=<number> ``` ## Description `\penalty` sets the penalty for a line or page break at that point. Some penalties are built in to the TeX system and inserted automatically: • `\binoppenalty` for a line break in math mode after a binary operator. • `\brokenpenalty` for a page break, where the last line of the previous page contains a hyphenation. • `\clubpenalty` for a broken page, with a single line of a paragraph remaining on the bottom of the preceding page. • `\displaywidowpenalty` for a break before last line of a paragraph. • `\exhyphenpenalty` for hyphenating a word which already contains a hyphen. • `\floatingpenalty` for splitting an insertion. • `\hyphenpenalty` for line breaking at an automatically inserted hyphen. • `\interlinepenalty` for the penalty added after each line of a paragraph. • `\linepenalty` the badness of each line within a paragraph. • `\postdisplaypenalty` for a break after a display. • `\predisplaypenalty` for a break before a display. • `\relpenalty` for a line break at a relation. • `\widowpenalty` for a broken page, with a single line of a paragraph (called "widow") remaining on the top of the succeeding page. ## Default LaTeX for example sets these default values for built in penalties: ```\binoppenalty=700 \brokenpenalty=100 \clubpenalty=150 \displaywidowpenalty=50 \exhyphenpenalty=50 \floatingpenalty=20000 \hyphenpenalty=50 \interlinepenalty=0 \linepenalty=10 \postdisplaypenalty=0 \predisplaypenalty=10000 \relpenalty=500 \widowpenalty=150 ```
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8333231210708618, "perplexity": 2682.3902200268926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104495692.77/warc/CC-MAIN-20220707154329-20220707184329-00641.warc.gz"}
https://gmatclub.com/forum/a-committee-of-3-men-and-3-women-must-be-formed-from-a-group-of-6-men-13837.html
# A committee of 3 men and 3 women must be formed from a group of 6 men

Author Message GMAT Club Legend Joined: 15 Dec 2003 Posts: 4163 A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 02 Feb 2005, 19:11 Difficulty: 65% (hard) Question Stats: 39% (02:21) correct 61% (02:20) wrong based on 88 sessions A committee of 3 men and 3 women must be formed from a group of 6 men and 8 women. How many such committees can we form if 1 man and 1 woman refuse to serve together? (A) 1120 (B) 910 (C) 810 (D) 560 (E) 210 _________________ Best Regards, Paul Math Expert Joined: 02 Sep 2009 Posts: 52278 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 02 Sep 2013, 00:50 rrsnathan wrote: Hi Bunuel, Can u explain this problem in the above explanation its states that "Question says 1 man and 1 woman refuse to server. Let's say persons refuse to server are John and Mary. 6C1*8C1 would mean you are selecting any 1 man from 6 men(John may or may not be there in the selection) and any 1 woman from 8 women(Mary may or may not be there in the selection).So it is wrong. Instead, we know John is already selected, so we are left with picking 2 men from 5,and as Mary is already selected we are left with picking 2 women from 7.Thus comes 5C2*7C2. " If John and mary refused to work together then how can we select those people already and subtract 5C2*7C2 from total combination??? Pls explain this. Thanks and Regards, Rrsnathan A committee of 3 men and 3 women must be formed from a group of 6 men and 8 women. How many such committees can we form if 1 man and 1 woman refuse to serve together? The total # of committees without the restriction is $$C^3_6*C^3_8$$; The # of committees which have both John and Mary is $$1*1*C^2_5*C^2_7$$ (one way to select John, 1 way to select Mary, selecting the remaining 2 men from 5, selecting the remaining 2 women from 7). $${Total} - {Restriction} = C^3_6*C^3_8-C^2_5*C^2_7$$. Hope it's clear. 
_________________ ##### General Discussion Director Joined: 19 Nov 2004 Posts: 525 Location: SF Bay Area, USA Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 02 Feb 2005, 21:51 = Total combination - combination in which the man and women serve together 6c3*8c3 - 5c2*7c2 Intern Joined: 28 Dec 2004 Posts: 32 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 05 Mar 2005, 18:52 Hi, I was a little confused about how you determine: (5,2)*(7,2)? Thanks, Mike Director Joined: 18 Feb 2005 Posts: 639 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 05 Mar 2005, 20:11 2 1 man and 1 woman are already selected so You can select the remaining 2 men and 2 women from 5 men and 7 women So 5C2*7C2 VP Joined: 13 Jun 2004 Posts: 1074 Location: London, UK Schools: Tuck'08 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 06 Mar 2005, 06:37 gmat2me2 wrote: 1 man and 1 woman are already selected so You can select the remaining 2 men and 2 women from 5 men and 7 women So 5C2*7C2 sorry guys, I don't get it. I agree on the way to calculate it : total outcome - outcome when the man and the woman are together in the group I found 6C3*8C3 - 6C1*8C1..which is wrong but i can not understand why my answer is wrong and why 5c2*7c2 is good ? 5c2*7c2 just deal with 2 people , it seems incomplete to me... please help VP Joined: 30 Sep 2004 Posts: 1425 Location: Germany Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 06 Mar 2005, 06:40 1 Antmavel wrote: gmat2me2 wrote: 1 man and 1 woman are already selected so You can select the remaining 2 men and 2 women from 5 men and 7 women So 5C2*7C2 sorry guys, I don't get it. I agree on the way to calculate it : total outcome - outcome when the man and the woman are together in the group I found 6C3*8C3 - 6C1*8C1..which is wrong but i can not understand why my answer is wrong and why 5c2*7c2 is good ? 5c2*7c2 just deal with 2 people , it seems incomplete to me... please help 5c2*7c2 => means that THE women and THE man is already in the group. so 4 places are left for 2 out of 5 (5c2) and 2 out of 7 (7c2). Manager Joined: 27 Jan 2005 Posts: 97 Location: San Jose,USA- India Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 08 Mar 2005, 23:17 Antmavel wrote: I found 6C3*8C3 - 6C1*8C1..which is wrong but i can not understand why my answer is wrong and why 5c2*7c2 is good ? 5c2*7c2 just deal with 2 people , it seems incomplete to me... please help Question says 1 man and 1 woman refuse to server. Let's say persons refuse to server are John and Mary. 6C1*8C1 would mean you are selecting any 1 man from 6 men(John may or may not be there in the selection) and any 1 woman from 8 women(Mary may or may not be there in the selection).So it is wrong. Instead, we know John is already selected, so we are left with picking 2 men from 5,and as Mary is already selected we are left with picking 2 women from 7.Thus comes 5C2*7C2. Manager Joined: 30 May 2013 Posts: 155 Location: India Concentration: Entrepreneurship, General Management GPA: 3.82 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 01 Sep 2013, 21:15 Hi Bunuel, Can u explain this problem in the above explanation its states that "Question says 1 man and 1 woman refuse to server. 
Let's say persons refuse to server are John and Mary. 6C1*8C1 would mean you are selecting any 1 man from 6 men(John may or may not be there in the selection) and any 1 woman from 8 women(Mary may or may not be there in the selection).So it is wrong. Instead, we know John is already selected, so we are left with picking 2 men from 5,and as Mary is already selected we are left with picking 2 women from 7.Thus comes 5C2*7C2. " If John and mary refused to work together then how can we select those people already and subtract 5C2*7C2 from total combination??? Pls explain this. Thanks and Regards, Rrsnathan Math Expert Joined: 02 Sep 2009 Posts: 52278 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 02 Sep 2013, 00:53 Bunuel wrote: rrsnathan wrote: Hi Bunuel, Can u explain this problem in the above explanation its states that "Question says 1 man and 1 woman refuse to server. Let's say persons refuse to server are John and Mary. 6C1*8C1 would mean you are selecting any 1 man from 6 men(John may or may not be there in the selection) and any 1 woman from 8 women(Mary may or may not be there in the selection).So it is wrong. Instead, we know John is already selected, so we are left with picking 2 men from 5,and as Mary is already selected we are left with picking 2 women from 7.Thus comes 5C2*7C2. " If John and mary refused to work together then how can we select those people already and subtract 5C2*7C2 from total combination??? Pls explain this. Thanks and Regards, Rrsnathan A committee of 3 men and 3 women must be formed from a group of 6 men and 8 women. How many such committees can we form if 1 man and 1 woman refuse to serve together? The total # of committees without the restriction is $$C^3_6*C^3_8$$; The # of committees which have both John and Mary is $$1*1*C^2_5*C^2_7$$ (one way to select John, 1 way to select Mary, selecting the remaining 2 men from 5, selecting the remaining 2 women from 7). $${Total} - {Restriction} = C^3_6*C^3_8-C^2_5*C^2_7$$. Hope it's clear. Similar questions: at-a-meeting-of-the-7-joint-chiefs-of-staff-the-chief-of-154205.html a-committee-of-6-is-chosen-from-8-men-and-5-women-so-as-to-104859.html _________________ Intern Joined: 13 Dec 2013 Posts: 36 GMAT 1: 620 Q42 V33 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 19 Apr 2014, 18:48 Hi, I tried to do it the other way around, instead of: total combos - combos restricted, I tried the approach of adding up all permited combos. Combos where the man is but the woman is left out: 6C3*7C3 (Only take 7 women into account, not 8) Combos where the woman is but the man is left out: 5C3*8C3 (Only take 5 men into account, not 6) Combos where neither is in a group selected: 5C3*7C3 (Both are taken out) Adding up these three scenarios, I get a total of 1,530 combos. Could someone help me out here? Can't seem to understand where i'm over estimating. Much appreciated. Math Expert Joined: 02 Sep 2009 Posts: 52278 Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 20 Apr 2014, 02:08 2 Enael wrote: Hi, I tried to do it the other way around, instead of: total combos - combos restricted, I tried the approach of adding up all permited combos. 
Combos where the man is but the woman is left out: 6C3*7C3 (Only take 7 women into account, not 8) Combos where the woman is but the man is left out: 5C3*8C3 (Only take 5 men into account, not 6) Combos where neither is in a group selected: 5C3*7C3 (Both are taken out) Adding up these three scenarios, I get a total of 1,530 combos. Could someone help me out here? Can't seem to understand where i'm over estimating. Much appreciated. The number of committees with John but not Mary: $$(1*C^2_5)(C^3_7)=10*35=350$$; The number of committees with Mary but not John: $$(C^3_5)(1*C^2_7)=10*21=210$$; The number of committees without John and without Mary: $$(C^3_5)(C^3_7)=10*35=350$$. Total = 350 + 350 + 210 = 910. Hope it's clear. _________________ CEO Status: GMATINSIGHT Tutor Joined: 08 Jul 2010 Posts: 2723 Location: India GMAT: INSIGHT Schools: Darden '21 WE: Education (Education) Re: A committee of 3 men and 3 women must be formed from a group of 6 men  [#permalink] ### Show Tags 20 Apr 2018, 01:21 Paul wrote: A committee of 3 men and 3 women must be formed from a group of 6 men and 8 women. How many such committees can we form if 1 man and 1 woman refuse to serve together? Please find the solution with two methods in attachment Bunuel A) 1120 B) 910 C) 810 D) 560 E) 210 Attachments File comment: www.GMATinsight.com Screen Shot 2018-04-20 at 2.48.06 PM.png [ 447.24 KiB | Viewed 6023 times ] _________________ Math Expert Joined: 02 Sep 2009 Posts: 52278 Re: A committee of 3 men and 3 women must be formed from a group  [#permalink] ### Show Tags 20 Apr 2018, 01:23 GMATinsight wrote: Paul wrote: A committee of 3 men and 3 women must be formed from a group of 6 men and 8 women. How many such committees can we form if 1 man and 1 woman refuse to serve together? Please find the solution with two methods in attachment Bunuel A) 1120 B) 910 C) 810 D) 560 E) 210 _______________ Done. Thank you. _________________ Re: A committee of 3 men and 3 women must be formed from a group [#permalink] 20 Apr 2018, 01:23
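As a quick numerical check on the two counting approaches in this thread (an added illustration, not from the original posts), both routes give 910:

```python
# Verify both counting arguments from the thread with Python's math.comb.
from math import comb

total = comb(6, 3) * comb(8, 3)          # all committees: 20 * 56 = 1120
together = comb(5, 2) * comb(7, 2)       # John and Mary both serve: 10 * 21 = 210
print(total - together)                  # 910

# Direct count: John without Mary, Mary without John, neither.
john_only = comb(5, 2) * comb(7, 3)      # 10 * 35 = 350
mary_only = comb(5, 3) * comb(7, 2)      # 10 * 21 = 210
neither   = comb(5, 3) * comb(7, 3)      # 10 * 35 = 350
print(john_only + mary_only + neither)   # 910
```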
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6678856611251831, "perplexity": 1531.5336573786565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660020.5/warc/CC-MAIN-20190118090507-20190118112507-00491.warc.gz"}
https://blender.stackexchange.com/questions/112531/which-file-format-represents-surfaces-with-quadric-equations?noredirect=1
# Which file format represents surfaces with quadric equations?

### EDIT: I realized NURBS is similar to the type of file format I'm looking for. However, it seems to specify 2-D curves, whereas I'm looking for a surface embedded in R3

===============================================================
# Original post:
===============================================================

I am trying to model human bodies based on pictures taken of people at various angles theta and phi. To that end, I would like to reduce file size by representing different parts of the body by 16 parameters: 10 from a quadric equation and 6 from boundaries. In the most general case, the 10 parameters A-J come from the equation

# Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz + J = 0

===============================================================

and bounds

# x = [x_lo, x_hi], y = [y_lo, y_hi], z = [z_lo, z_hi]

===============================================================

Is there a file type that stores 3-D models as these continuous, algebraic objects rather than as polygonal meshes? Polygon representations take more space

• Based on a cursory glance at the Wikipedia article, I think NURBS is for specifying 2D curves. The quadric surface specification you seem to be looking for is more detailed. I'm still looking for a file format that does what you want. I'm trying to solve a similar problem Jul 9, 2018 at 22:55
For alternatives you may look into MoI, which is a far cheaper alternative and a very capable software, more focused simplicity of workflow and designing small parts (rather than full blown projects). It is closely related to and has workflow highly compatible with Rhino. • Thank you for your detailed response. With respect to the specific quadric equation I mentioned, however, is there a file format which describes a surface using the equation Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz + J = 0 ? If not, that's fine. It wasn't entirely clear from your answer. But if you've been doing 3-D for awhile and that's the list of formats you know about, it's unlikely such a format already exists Jul 10, 2018 at 0:16 • As I mentioned math is not my strong suit, not sure what equations those formats use, but as far as formats go I believe these are the ones available. You can probably find more info in their respective specifications. Jul 10, 2018 at 0:24 • Note: I found a simpler solution to this problem here Aug 6, 2018 at 23:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2839633822441101, "perplexity": 1263.9868440698967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030338213.55/warc/CC-MAIN-20221007143842-20221007173842-00099.warc.gz"}
https://html.hanspub.org/file/_8-1250849_1_hanspub.htm
| No. | m | x | y | Remarks |
| --- | --- | --- | --- | --- |
| 1 | $m=\prod_{i=1}^{t}q_i$, $q_i=2^{s_i}3^{t_i}+1$ | $\sum_{i=1}^{t}s_i-1$ | $\sum_{i=1}^{t}t_i$ | $q_i$ is prime |
| 2 | $m=2^{a}\prod_{i=2}^{t}q_i$, $q_i=2^{s_i}3^{t_i}+1$ | $\sum_{i=2}^{t}s_i+a-2$ | $\sum_{i=2}^{t}t_i$ | $q_i$ is prime ($s_i\ge 1$, $t_i\ge 0$) |
| 3 | $m=3^{b}\prod_{i=2}^{t}q_i$, $q_i=2^{s_i}3^{t_i}+1$ | $\sum_{i=2}^{t}s_i$ | $\sum_{i=2}^{t}t_i+b-1$ | $q_i$ is prime ($s_i\ge 1$, $t_i\ge 0$); when $s=1$, $t\ne 0$ |
| 4 | $m=2^{a}3^{b}\prod_{i=3}^{t}q_i$, $q_i=2^{s_i}3^{t_i}+1$ | $\sum_{i=3}^{t}s_i+a-2$ | $\sum_{i=3}^{t}t_i+b-1$ | $q_i$ is prime ($s_i\ge 1$, $t_i\ge 0$); when $s=1$, $t\ne 0$ |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 23, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8851683139801025, "perplexity": 1725.1987106228446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00552.warc.gz"}
https://lib.dr.iastate.edu/etd/13573/
Dissertation 2013 #### Degree Name Doctor of Philosophy #### Department Physics and Astronomy Ruslan Prozorov #### Abstract Many iron-based superconductors undergo a tetragonal to orthorhombic change of their crystallographic lattice symmetry, as well as paramagnetic to anti-ferromagnetic ordering, upon cooling through a characteristic temperature $T_N$. The anisotropic structure of the orthorhombic crystal symmetry would naturally lead one to expect to find in-plane electronic anisotropy. Upon cooling through $T_s$, and going into the orthorhombic symmetry, crystals divide into many small \textit{twin domains}. Although crystallographically identical, the twin domains express four different rotations of the orthorhombic lattice within the $\bf{ab}$-plane, making direct measurements along an individual orthorhombic axis impossible. This complication led to the development of uniaxial stress and strain detwinning, which makes one of the four domain rotations far more energetically favorable than the other three, to the extent that more than 90\% of the entire crystal volume may be represented by the dominant domain. Once in this $\textit{detwinned}$ state, measurements may be made along the individual orthorhombic axes, allowing one to probe in-plane anisotropy. Following the development of the detwinning technique, measurements of the in-plane resistivity anisotropy between the orthorhombic $a_o$ and $b_o$ axes were made. The results, however, turned out to be the opposite of what is predicted from simple models of electrical resistivity. Many different competing theories were developed to understand this unusual behavior. The goal of my doctoral research is to understand the validity of these different theories and discover the primary driving force behind this unexpected result. My experiments on the effects of doping on the in-plane resistivity anisotropy yielded an interesting result: not only is there an asymmetry between electron and hole doping, but the anisotropy also changes sign with sufficient hole doping. This result, along with the temperature dependence of the in-plane resistivity anisotropy, provides very strong evidence that the primary source is anisotropic scattering due to magnetic spin fluctuations. #### DOI https://doi.org/10.31274/etd-180810-3196 Erick Blomberg en application/pdf 93 pages
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6856256127357483, "perplexity": 1661.6297320762858}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826968.71/warc/CC-MAIN-20181215174802-20181215200802-00463.warc.gz"}
http://tex.stackexchange.com/questions/124705/problem-with-counter-and-pause/124716
# Problem with counter and \pause I am trying to create a manual continuation annotation for my presentations (because I need more flexibility than provided by the automatic splitting feature of beamer). What I thought of is, for each sequence of slides create a counter and then increment and print its value on each successive slide of the sequence. If I do that "manually", everything works fine. Then I tried to automate things a little bit more, as shown below. The countslides command takes an argument, constructs a counter using the argument for its name (if it does not exist) and then increments and prints its value. Unfortunately, the code produces an error "Missing number, treated as zero" on the first slide (even though the result is correct). The error goes away if I remove the \pause command on this slide (or if I remove the \resetcounteronoverlays from my command definition, but I cannot afford this). Maybe my code is wrong, but I cannot see how (and it has worked fine in all other cases). \documentclass{beamer} \usepackage{ifthen} \begin{document} \makeatletter \newcommand{\countslides}[1]{% \ifthenelse{\expandafter\isundefined\csname c@cnt#1\endcsname}% {\newcounter{cnt#1}% \resetcounteronoverlays{cnt#1}% }% {}% \stepcounter{cnt#1}% {~\footnotesize (\arabic{cnt#1})}% } \makeatother \begin{frame} \frametitle{Hello \countslides{abc}} A \pause B \end{frame} \begin{frame} \frametitle{Bye \countslides{abc}} C \pause D \end{frame} \end{document} - To a zeroth approximation, I'd guess: \newcounter is global. \pause will cause the frame contents to be typeset more than once. That sounds like something that could conflict. –  Ulrich Schwarz Jul 19 '13 at 16:58 Do you need fancy nesting of the continuation frames that would require you to have several counters for this? I'm seeing if we can just expose beamer's "continuation" mechanic to handle manual breaks. –  Ulrich Schwarz Jul 19 '13 at 17:10 @UlrichSchwarz I do have one situation where the sequence of slides is interrupted, so the simple splitting of beamer is not enough. Also I would like to implement a notation "(1/3)" on the title of the slides; I have done so using the totcount package with the above code, but I will see how this can be implemented using your (second) solution. –  nplatis Jul 21 '13 at 6:56 Solution 0: if you can live with splitting off the initial definition: \documentclass{beamer} \makeatletter \newcommand\countable[1]{% \newcounter{cnt#1}% \resetcounteronoverlays{cnt#1}% } \newcommand{\countslides}[1]{% \stepcounter{cnt#1}% {~\footnotesize (\arabic{cnt#1})}% } \makeatother \begin{document} \countable{abc} \begin{frame} \frametitle{Hello \countslides{abc}} A \pause B \end{frame} \begin{frame} \frametitle{Bye \countslides{abc}} C \pause D \end{frame} \end{document} A solution that strikes me as somewhat cleaner would be this: you give the frames in question the contgroup=... key, and they should all behave as if they were broken automatically, i.e. the normal continuation title templates are applied. 
\documentclass{beamer} \makeatletter \newcommand\handlecontgroup[1]{% \only<1>{% \ifcsname ums@cntgroup@#1\endcsname \relax \else \expandafter\gdef\csname ums@cntgroup@#1\endcsname{0}% \fi \beamer@autobreakcount=\csname ums@cntgroup@#1\endcsname\relax \expandafter\xdef\csname ums@cntgroup@#1\endcsname{% \the\beamer@autobreakcount}% }% } \define@key{beamerframe}{contgroup}{\handlecontgroup{#1}}% \makeatother \begin{document} \begin{frame}[contgroup=abc] \frametitle{Hello} A \pause B \end{frame} \begin{frame}[contgroup=abc] \frametitle{Bye} C \pause D \end{frame} \begin{frame} \frametitle{Bye} C \pause D \end{frame} \begin{frame}[contgroup=abc] \frametitle{Bye} C \pause D \end{frame} \end{document} - The first solution is simple and nice, but as a programmer I would always prefer to automate things as much as possible :-) so the second solution is indeed cleaner. –  nplatis Jul 21 '13 at 6:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.822700023651123, "perplexity": 1366.5141508119925}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535924131.19/warc/CC-MAIN-20140901014524-00447-ip-10-180-136-8.ec2.internal.warc.gz"}
https://avramaral.github.io/aramco_course/
This page provides auxiliary material for the course Applied Statistics and Data Analysis (ASDA), given by Dr. Paula Moraga. In this page, you can find a few different case studies (and an introduction to tidyverse) that can be accessed in the above tabs.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7884109616279602, "perplexity": 899.7351598197898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711150.61/warc/CC-MAIN-20221207053157-20221207083157-00405.warc.gz"}
http://mathoverflow.net/questions/16977/why-do-my-quantum-group-books-avoid-homotopical-language/16999
# Why do my quantum group books avoid homotopical language? I am sitting on my carpet surrounded by books about quantum groups, and the only categorical concept they discuss are the representation categories of quantum groups. Many notes closer to "Kontsevich stuff" discuss matters in a far more categorical/homotopical way, but they seem not to wish to touch the topic of quantum groups very much.... I know next to nothing about this matter, but it seems tempting to believe one could maybe also phrase the first chapters of the quantum group books in a language, where for example quasitriangulated qHopf algebras would just be particular cases of a general "coweak ${Hopf_\infty}$-algebra" (does such thing exist?). Probably then there should be some (${\infty}$,1) category around the corner and maybe some other person's inofficial online notes trying to rework such a picture in a dg Hopf algebra model category picture. My quantum group books don't mention anything in such a direction (in fact ahead of a chapter on braided tensor categories the word 'category' does kind of not really appear at all). So either 1. [ ] I am missing a key point and just proved my stupidity to the public 2. [ ] There is a quantum group book that I have missed, namely ...... 3. [ ] Such a picture is boring and/or wrong for the following reason ..... Which is the appropriate box to tick? - I'm not going to tick any of the boxes. To my knowledge, no book exists that does what you're looking for. A lot of these applications of homotopy theory to "quantum algebra" and noncommutative algebraic geometry are relatively new. It isn't very often that an introductory textbook comes out that covers the "bleeding edge", so to speak. You can try reading papers, but I doubt very much that you can find a book that covers the topic in the generality you're looking for. – Harry Gindi Mar 3 '10 at 19:03 Well, it's reasonable question, but it's phrased in a weird enough way that I can imagine deciding to downvote it (I didn't). After all, it seems to be assuming that the people who write quantum groups books understand homotopical algebra well enough to write a book using it in a crucial way, which is not really defensible on the facts. Almost certainly the correct answer is "the number of people who understand both topics in a deep way is extremely small, and none of them have gotten around to writing such a book." – Ben Webster Mar 3 '10 at 19:05 Sorry for the phrasing. In retrospection, I realize myself that already "I am sitting on my carpet" is not exactly a wise choice for the beginning of a question, at least if one wants people to take it seriously. And there are some more parts... sorry, will do better in the future! Thanks very much for the answers. – olli_jvn Mar 3 '10 at 21:44 I will say: I think there are a lot of interesting questions along these lines, which might you consider asking now, if you think them out carefully. I certainly would like to know how quantum groups and homotopical algebra fit together, but I think this question wasn't the right way to ask. – Ben Webster Mar 3 '10 at 22:02 The question is rather rambling and it is more about not so well-defined appetites (do you have a more conrete motivation?). There is one thing which however makes full sense and deserves the consideration. Namely it has been asked what about higher categorical analogues of (noncommutative noncocommutative) Hopf algebras. This is not a trivial subject, because it is easier to do resolutions of operads than more general properads. 
Anyway, the infinity-bialgebras are much easier than their Hopf counterpart. There is important work of Umble and Saneblidze in this direction (cf. arxiv/0709.3436). The motivating examples are however rather different from quantum groups, coming from rational homotopy theory, I think. Similarly, there is no free Hopf algebra in an obvious sense, which makes it difficult to naturally interpret deformation complexes for Hopf algebras (there is a notion called free Hopf algebra, but it concerns something else). Boris Shoikhet, with some help from Kontsevich, as well as Martin Markl, have looked into this. Another relevant issue is to include various higher function algebras on higher categorical groups, enveloping algebras of higher Lie algebras (cf. baranovsky (pdf) or arxiv 0706.1396 version), usual quantum groups, examples like the secondary Steenrod algebra of Baues, etc. into a single natural higher Hopf setting. I have not seen that. The author of the question might also be interested in the monoidal bicategorical approach to general Hopf algebroids by Street and Day.

- There is certainly a natural homotopical analog of a braided tensor category, namely a stable $E_2$ category (i.e. an $E_2$ object of the $\infty$-category of dg categories, or if you prefer, of stable $(\infty,1)$-categories). Such things can be defined using Lurie's DAG I (for stable) and III (for $E_2$). Rather than trying to define versions of Hopf algebras, you can talk about fiber functors on such. In fact the general Tannakian pattern discussed in other MO posts generalizes from the symmetric monoidal setting to the braided setting -- i.e. given a braided category (say in this homotopical sense) you can define a "Spectrum", consisting of $E_2$ functors to modules over various $E_3$-algebras (which are $E_2$ categories). This defines a kind of object that you can call an $E_3$ stack (a stack on $E_3$ algebras). [If you work in a nonderived setting there's no difference between $E_3$ and commutative.] This has an underlying usual stack. Anyway, I learned all this from John Francis, who's been working on developing $E_n$-algebraic geometry... anyway that was a digression; the point is you can talk about $E_2$ categories with a good fiber functor and use that as the definition of an $\infty$-quantum group.

As for other interactions, there is a very significant interplay between homotopy theory and quantum groups in the work in progress of Gaitsgory and Lurie on "quantum geometric Langlands". This was the topic of the 2008 Talbot workshop (see here). There are some related notes also by Gaitsgory and Lurie here. One awesome idea is the use of the $E_2$ perspective to explain WHY quantum groups relate to local systems on configuration spaces of points (the Drinfeld-Kohno theorem and its generalizations) and in fact to use it to prove the Kazhdan-Lusztig equivalence between quantum groups and affine Lie algebras. This would be a perfect topic for your dreamed-of book --- for now I'd make do with a paper or even course notes!! -
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7093116044998169, "perplexity": 501.9482342298193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00162-ip-10-164-35-72.ec2.internal.warc.gz"}
https://jwst-pipeline.readthedocs.io/en/stable/jwst/pathloss/description.html
# Description

## Overview

The pathloss step calculates and applies corrections that are needed to account for various types of signal loss in spectroscopic data. The motivation behind the correction is different for NIRSpec and NIRISS, while that for MIRI has not been implemented yet. For NIRSpec, this correction accounts for losses in the optical system due to light being scattered outside the grating, and to light not passing through the aperture, while for NIRISS SOSS data it corrects for the flux that falls outside the subarray.

## Background

The correction is applicable to NIRSpec IFU, MSA, and FIXEDSLIT exposure types, to NIRISS SOSS data, and to MIRI LRS and MRS data, although the MIRI correction has not been implemented yet. The description of how the NIRSpec reference files were created and how they are to be applied to NIRSpec data is given in ESA-JWST-SCI-NRS-TN-2016-004 (P. Ferruit: The correction of path losses for uniform and point sources). The NIRISS algorithm was provided by Kevin Volk.

## Algorithm

### NIRSpec

This step calculates a 1-D correction array as a function of wavelength by interpolating in the pathloss reference file cube at the position of a point source target. It creates 2 pairs of 1-D arrays: a wavelength array (calculated from the WCS applied to the index of the plane in the wavelength direction) and a pathloss correction array calculated by interpolating each plane of the pathloss cube at the position of the source (which is taken from the datamodel). Pairs of these arrays are computed for both point source and uniform source data types. For the uniform source pathloss calculation, there is no dependence on position in the aperture/slit.

Once the 1-D correction arrays have been computed, both forms of the correction (point and uniform) are interpolated, as a function of wavelength, into the 2-D space of the slit or IFU data and attached to the output data model (extensions “PATHLOSS_PS” and “PATHLOSS_UN”) as a record of what was computed. The form of the 2-D correction (point or uniform) that’s appropriate for the data is divided into the SCI and ERR arrays and propagated into the variance arrays of the science data.

### NIRISS

The correction depends on column number in the science data and on the Pupil Wheel position (keyword PWCPOS). It is provided in the reference file as a FITS image of 3 dimensions (to be compatible with the NIRSpec reference file format). The first dimension is a dummy, while the second gives the dependence with row number, and the third with Pupil Wheel position. For the SUBSTEP96 subarray, the reference file data has shape (1, 2040, 17).

The algorithm calculates the correction for each column by simply interpolating along the Pupil Wheel position dimension of the reference file using linear interpolation. The 1-D vector of correction vs. column number is interpolated, as a function of wavelength, into the 2-D space of the science image and divided into the SCI and ERR arrays and propagated into the variance arrays. The 2-D correction array is also attached to the datamodel (extension “PATHLOSS_PS”) as a record of what was applied.

## Error Propagation

As described above, the correction factors are divided into the SCI and ERR arrays of the science data, and the square of the correction is divided into the variance arrays (VAR_RNOISE, VAR_POISSON, VAR_FLAT) if they exist.
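To make the arithmetic of the correction concrete, here is a minimal numpy/scipy sketch of the two operations described above: interpolating a pathloss cube at a source position to obtain a 1-D correction, and dividing a 2-D correction into the science, error, and variance arrays. This is an illustrative sketch only, not the actual jwst step implementation; the function names, array shapes, and the fill_value choice are assumptions.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def pathloss_1d_at_source(pathloss_cube, y_grid, x_grid, x_src, y_src):
    """Interpolate each wavelength plane of a pathloss cube at the source position.

    pathloss_cube: array of shape (n_wave, ny, nx); y_grid/x_grid give the
    spatial sampling of each plane.  Returns a 1-D correction vs. wavelength.
    (Illustrative only; the real reference files carry their own WCS.)
    """
    n_wave = pathloss_cube.shape[0]
    corr = np.empty(n_wave)
    for k in range(n_wave):
        plane = RegularGridInterpolator((y_grid, x_grid), pathloss_cube[k],
                                        bounds_error=False, fill_value=1.0)
        corr[k] = plane((y_src, x_src))
    return corr

def apply_pathloss(sci, err, var_poisson, var_rnoise, correction_2d):
    """Apply a 2-D pathloss correction as described above: the correction is
    divided into SCI and ERR, and its square into the variance arrays."""
    return (sci / correction_2d,
            err / correction_2d,
            var_poisson / correction_2d**2,
            var_rnoise / correction_2d**2)
```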
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5413352847099304, "perplexity": 1767.234961732827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00421.warc.gz"}
http://mostholyfaith.com/Beta/bible/bibleXref.asp?xref=bible%5EMatthew%5E24%5E18
Matthew Chapter 24 [KJVwc] Matthew 23   Matthew (KJVwc) Chapter Index   Matthew 25 Verse 18 Expanded Bible Comments Additional Comments References  About EBC Open Refs in New Window 1 And Jesus went out, and departed from the temple: and his disciples came to him for to show him the buildings of the temple. 2 And Jesus said unto them, See ye not all these things? verily I say unto you, There shall not be left here one stone upon another, that shall not be thrown down. 3 And as he sat upon the mount of Olives, the disciples came unto him privately, saying, Tell us, when shall these things be? and what shall be the sign of thy presence, and of the end of the age? 4 And Jesus answered and said unto them, Take heed that no man deceive you. 5 For many shall come in my name, saying, I am Christ; and shall deceive many. 6 And ye shall hear of wars and rumours of wars: see that ye be not troubled: for all these things must come to pass, but the end is not yet. 7 For nation shall rise against nation, and kingdom against kingdom: and there shall be famines, and pestilences, and earthquakes, in divers places. 8 All these are the beginning of sorrows. 9 Then shall they deliver you up to be afflicted, and shall kill you: and ye shall be hated of all nations for my name's sake. 10 And then shall many be offended, and shall betray one another, and shall hate one another. 11 And many false prophets shall rise, and shall deceive many. 12 And because iniquity shall abound, the love of many shall wax cold. 13 But he that shall endure unto the end, the same shall be saved. 14 And this gospel of the kingdom shall be preached in all the world for a witness unto all nations; and then shall the end come. 15 When ye therefore shall see the abomination of desolation, spoken of by Daniel the prophet, stand in the holy place, (whoso readeth, let him understand:) 16 Then let them which be in Judaea flee into the mountains: 17 Let him which is on the housetop not come down to take any thing out of his house: 18 Neither let him which is in the field return back to take his clothes. 19 And woe unto them that are with child, and to them that give suck in those days! 20 But pray ye that your flight be not in the winter, neither on the sabbath day: 21 For then shall be great tribulation, such as was not since the beginning of the world to this time, no, nor ever shall be. 22 And except those days should be shortened, there should no flesh be saved: but for the elect's sake those days shall be shortened. 23 Then if any man shall say unto you, Lo, here is Christ, or there; believe it not. 24 For there shall arise false Christs, and false prophets, and shall show great signs and wonders; insomuch that, if it were possible, they shall deceive the very elect. 25 Behold, I have told you before. 26 Wherefore if they shall say unto you, Behold, he is in the desert; go not forth: behold, he is in the secret chambers; believe it not. 27 For as the bright shining cometh out of the east, and shineth even unto the west; so shall also the presence of the Son of man be. 28 For wheresoever the carcase is, there will the eagles be gathered together. 
29 Immediately after the tribulation of those days shall the sun be darkened, and the moon shall not give her light, and the stars shall fall from heaven, and the powers of the heavens shall be shaken: 30 And then shall appear the sign of the Son of man in heaven: and then shall all the tribes of the earth mourn, and they shall see the Son of man coming in the clouds of heaven with power and great glory. 31 And he shall send his angels with a great trumpet, and they shall gather together his elect from the four winds, from one end of heaven to the other. 32 Now learn a parable of the fig tree; When his branch is yet tender, and putteth forth leaves, ye know that summer is nigh: 33 So likewise ye, when ye shall see all these things, know that it is near, even at the doors. 34 Verily I say unto you, This generation shall not pass, till all these things be fulfilled. 35 Heaven and earth shall pass away, but my words shall not pass away. 36 But of that day and hour knoweth no man, no, not the angels of heaven, but my Father only. 37 But as the days of Noah were, so shall also the presence of the Son of man be. 38 For as in the days that were before the flood they were eating and drinking, marrying and giving in marriage, until the day that Noah entered into the ark, 39 And knew not until the flood came, and took them all away; so shall also the presence of the Son of man be. 40 Then shall two be in the field; the one shall be taken, and the other left. 41 Two grinding at the mill; the one shall be taken, and the other left. 42 Watch therefore: for ye know not what hour your Lord doth come. 43 But know this, that if the goodman of the house had known in what watch the thief would come, he would have watched, and would not have suffered his house to be broken up. 44 Therefore be ye also ready: for in such an hour as ye think not the Son of man cometh. 45 Who then is the faithful and wise servant, whom his lord hath made ruler over his household, to give them meat in due season? 46 Blessed is that servant, whom his lord when he cometh shall find so doing. 47 Verily I say unto you, That he shall make him ruler over all his goods. 48 But and if that evil servant shall say in his heart, My lord delayeth his coming; 49 And shall begin to smite his fellowservants, and to eat and drink with the drunken; 50 The lord of that servant shall come in a day when he looketh not for him, and in an hour that he is not aware of, 51 And shall cut him asunder, and appoint him his portion with the hypocrites: there shall be weeping and gnashing of teeth. Matthew 23   Matthew (KJVwc) Chapter Index   Matthew 25 Top of Page
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8101179599761963, "perplexity": 6194.968525309333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323246.35/warc/CC-MAIN-20190825084751-20190825110751-00336.warc.gz"}
http://mathoverflow.net/questions/119829/what-are-normal-sets-fr%c3%a9chet/120060
# What are Normal Sets (Fréchet)? In 1913, LEJ Brouwer started a new approach to give a topologist's definition of the notion dimension ("Über den natürlichen Dimensionsbegriff", Journal für die reine und angewandte Mathematik, 142, 1913, pp. 146--152".) In this paper, Brouwer starts with a "Normalmenge" (Normal Set), referring to Maurice René Fréchet. • Can anyone explain in modern terms by which properties Normal Sets are characterized? • Where can Fréchets definition be found? - Meanwhile, I think what Brouwer meant is called Normal spaces today. – Andreas Loos Jan 25 '13 at 12:48 The "normal sets" are separable metric spaces with no isolated points, as introduced by Fréchet in Sur quelques points du calcul fonctionnel, Rendiconti del Circolo Matematico di Palermo 22, 1-74 (1906). See in particular pages 23-24, where the "classes normales" are defined as being [1] "parfaites, séparables et admettant une généralisation du théorème de Cauchy". For an extensive discussion of Brouwer's paper in the historical context see D.M. Johnson's 1981 article in the Archive for History of Exact Sciences. Johnson notes that Brouwer is largely following F. Hausdorff's Grundzüge der Mengenlehre in his classification of the normal sets. [1] The reference to Cauchy's theorem is the requirement that the limit of every subsequence of a sequence converging to an element $A$ is also $A$. - So "généralisation du théorème de Cauchy" perhaps is completeness. – Gerald Edgar Jan 27 '13 at 23:27 The most complete study in English of Fréchet's work that I know of is a series of three long papers (total of 217 pages) by Angus Ellis Taylor that were published in the 1980s: A study of Maurice Fréchet: I. His early work on point set theory and the theory of functionals, Archive for History of Exact Sciences 27 #3 (1982), 233-295. A study of Maurice Fréchet: II. Mainly about his work on general topology, 1909–1928, Archive for History of Exact Sciences 34 #4 (1985), 279-380. A study of Maurice Fréchet: III. Fréchet as analyst, 1909–1930, Archive for History of Exact Sciences 37 #1 (1987), 25-76. Near the top of p. 256 of the first paper Taylor writes: In a number of theorems Fréchet deals with $V$-classes that are complete and separable. He calls them normal. This terminology has not survived; in later developments of abstract topology the word normal is given an entirely different meaning. - Great answer, thank you! – user30980 Jan 28 '13 at 12:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9358518123626709, "perplexity": 1232.329100631892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398468971.92/warc/CC-MAIN-20151124205428-00019-ip-10-71-132-137.ec2.internal.warc.gz"}
https://how-to.aimms.com/Articles/109/109-convert-compound-sets.html
# Prepare for the Deprecation of Compound Sets

Note: We are actively updating this topic during the deprecation stages. Your feedback is welcome and appreciated, as it may help others facing the same issue.

## Summary

AIMMS will deprecate compound sets soon after January 1, 2020. The functionality of compound sets can be achieved with a set mapping. This document provides a process to replace compound sets with set mappings. For an overview of the rationale and timeline for deprecating compound sets, read AIMMS Knowledge: Overview: Deprecation of Compound Sets.

## Identifying compound sets in your application

A compound set is defined as one of these:

• It is a subset of a Cartesian product with an index or element parameter declared in its attribute form.
• It is a subset of another compound set.

We provide a library with tools to identify compound sets based on these characteristics. To identify compound sets in your application,

1. Download the attached AIMMS project and run it using AIMMS 4.54 or more recent, but not more recent than AIMMS 4.72.
2. Copy the DeprecateCompoundSetUtilities library to your AIMMS project.
3. Run the procedure dcsu::prIdentifyCompoundSets. This tests for compound sets, according to the following rules:
• A set whose string in the subset of attribute has a comma, and which has the attribute index or the attribute parameter defined. (These are compound root sets.)
• A set with a compound set as its domain set. (These are not compound root sets.)
4. The procedure fills the sets dcsu::sCompoundRootSets, dcsu::sCompoundSets, and dcsu::sCompoundSetsThatAreNotRootSets.

Using these results, you may continue to the conversion procedure below.

## Replacing compound sets with set mapping

This conversion procedure explains how to convert compound sets to set mappings in your application. This ensures that your model will function in the same way but without compound sets.

Note: The conversion procedure contains a multitude of steps, and you may wonder whether this is necessary. To determine the scope that this conversion procedure needs to handle, note that compound data is present in AIMMS Cases, and compound data identifiers are present in both WinUI and WebUI pages of that AIMMS application. AIMMS cases cannot be edited manually. The format of both WinUI and WebUI pages is designed for fast serialization instead of for human editing. Obviously, this conversion procedure should not overlook the need to adapt the model itself. The multitude of steps serves to gradually transform the information in cases, pages, and model.

Overview of the conversion procedure:

Step 1: Create backups of your application and cases.
Step 2: Add the DeprecateCompoundSetUtilities library to your application.
Step 3: Create Set Mapping with data of compound sets.
Step 4: Create Set Mapping declarations and copy them to your main model.
Step 5: Create a shadow case for each case with shadow data for the compound data identifiers.
Step 6: Adapt the model to remove compound sets.
Step 7: Move compound indexes to the corresponding set mapping sets.
Step 8: Copy each shadow case back to its corresponding original case.
Step 9: Remove the DeprecateCompoundSetUtilities library from your application.

### Step 1: Create backups of your data

The importance of creating backups before starting maintenance on your projects cannot be overemphasized.

1. Simply create a physical copy of the project and cases and store this in a safe place.
2. Consider putting the project in a Source Code Management system, if you haven’t done so already.
### Step 2: Add library DeprecateCompoundSetUtilities

The AIMMS project download provides an example app and the utility library DeprecateCompoundSetUtilities. Copy the library from that example and add it to your application.

### Step 3: Create Set Mapping

There are two things to watch out for:

1. The definition of a compound set should be suitable for a relation as well. Use the data from compound sets in your project to create corresponding relations. The definition (if any) of a compound set must be suitable for a relation as well. Consider the following example:

Set C {
    SubsetOf: (S, T, U);
    Tags: (TS, TT, TU);
    Index: h;
    Definition: {
        { (i,j,k) | pAllowedElementsC(i,j,k) = 1 }
    }
}
Set D {
    SubsetOf: C;
    Index: g;
    Definition: {
        { h | pAllowedElementsD(h.TS, h.TT, h.TU) = 1 }
    }
}

In the example above, the definition of C can also be used for a relation, $$R$$, that is a subset of the Cartesian product $$S \times T \times U$$. The definition of D cannot be used for a relation, so it must be rewritten:

Set D {
    SubsetOf: C;
    Index: g;
    Definition: {
        { (i,j,k) | pAllowedElementsC(i,j,k) = 1 and pAllowedElementsD(i, j, k) = 1 }
    }
}

The new definition of D is now based on tuples instead of individual elements and can be used for a relation.

2. The predeclared set Integers cannot be used as a component in the domain of a compound set for conversion. As an example consider the set

Set E {
    SubsetOf: (S, Integers);
    Tags: (TS, Int);
    Index: i_e;
}

The language construct i_e.Int will be converted to the use of an element parameter. To fill this element parameter with the appropriate contents, a slicing is formulated, and this slicing involves an index of each component. For instance as follows:

ElementParameter epTag_E_int {
    IndexDomain: iSMI_E;
    Range: Integers;
    Definition: first( IndexIntegers | exists( i | ( i, IndexIntegers, iSMI_E ) in sSetMappingRelation_E ) );
}

When the set Integers is used as a component, IndexIntegers is an index that varies over 2G elements. An attempt to do so would trigger the error message: The set Integers is too big to be used as the range of running index "IndexIntegers". Therefore we should introduce a new set, say s_SomeIntegers, and fill it using the integer elements actually used. Then we should replace the component Integers in the compound set, for instance as follows:

Set E {
    SubsetOf: (S, s_SomeIntegers);
    Tags: (TS, Int);
    Index: i_e;
}

The set s_SomeIntegers should not be declared to be a subset of the set Integers. Once the compound set conversion is complete, we can make s_SomeIntegers a subset of the set Integers.

### Step 4: Create Set Mapping declarations

Now let’s create a set mapping for each compound set in your model. Group set mappings according to namespace (main model, library or module). Open the WinUI page Deprecate Compound Set Control Page of the library DeprecateCompoundSetUtilities, and press the button Create Set Mapping Declarations. A section named set mapping declarations appears in the main model. Sections named <prefix> set mapping declarations appear in each library/module where compound sets are defined. These sections are created in the runtime library CompoundSetMappingRuntimeLibrary, as runtime libraries are the only place where a library or main model may create new AIMMS code. The model explorer should now look something like this (screenshot omitted):

Perform the following sequence for each set mapping declarations section.

1. Go to Edit > Export to save a file (e.g., smd.ams).
2. Select focus on the main model, library or module and create a section named Set Mapping Declarations.
3. Select that newly created section and go to Edit > Import to select the file you saved (e.g., smd.ams).

Caution: Do not Copy/Paste the section Set Mapping Declarations of the runtime library! When you Copy/Paste, the copied section still contains references to the runtime indexes. This causes compilation errors upon restart.

Now is a good time to save the project, exit AIMMS, and create another backup copy of your project.

### Step 5: Create shadow cases

Shadow cases are cases where the compound data is replaced by atomic shadow data. You can convert cases with compound data to shadow cases using a tool in the DeprecateCompoundSetUtilities library. You can convert multiple cases contained in one folder using the Folder option, or convert each case separately using the File option.

1. Go to the Deprecate Compound Set Control Page of the DeprecateCompoundSetUtilities library.
2. In the section labeled Forward - creating shadow cases:
   1. Specify the input file/folder (to pull original cases containing compound data).
   2. Specify the output file/folder (to push converted cases containing atomic data).
   3. Then click the Copy button to convert.

### Step 6: Adapt model to remove compound sets

This section shows how to convert models using compound sets to use the set mappings created in step 3 above.

#### Example case

In this conversion step we will use a running example that contains:

• One-dimensional sets $$S, T, U$$, with indexes respectively $$i, j, k$$.
• A relation $$R$$ that is a subset of the Cartesian product $$S \times T \times U$$.
• A compound set $$C$$ with index $$h$$ defined as $$\{ (i, j, k) | (i, j, k) \in R \}$$. The tags of this compound set are $$(TS,TT,TU)$$.
• A compound subset $$D \subset C$$ with index $$g$$. Note that $$D$$ inherits its tags from $$C$$.
• A parameter $$P$$ declared over the index for the compound set: $$P_h$$.
• A parameter $$P1$$ declared over the index for the compound subset: $$P1_g$$.
• A parameter $$Q$$ declared over the indexes for the one-dimensional sets: $$Q_{i,j,k}$$.
• A parameter $$Q1$$ declared over the index $$i$$: $$Q1_i$$.

#### Replace use of tags

The following Parameter contains a tag referencing a compound set:

Parameter p1 {
    IndexDomain: h;
    Definition: A(h.ts);
}

AIMMS displays the error message: The "TS" is not a tag that can be associated with index "h". You can replace it with a tag referencing a set mapping:

Parameter p1 {
    IndexDomain: h;
    Definition: A(epTag_C_TS(h));
}

#### Replace atomic indexes with set mapping index

Consider the declaration of compound data parameter P:

Parameter P {
    IndexDomain: h;
}

Then using P is not allowed in an expression such as:

Parameter PS {
    IndexDomain: (i,j,k);
    Definition: p(i,j,k);
}

It is not allowed, as the automatic mapping between h and (i,j,k) is no longer supported. AIMMS displays a compilation error: The number of arguments in the parameter "P" is not correct. You can replace this definition by:

Parameter PS {
    IndexDomain: (i,j,k);
    Definition: sum(h | (i,j,k,h) in sMappingSet_C_Relation, p(h));
}

#### Replace the function Tuple

The function Tuple is a predeclared function to create an element in a compound set from elements in the atomic sets that together form the domain of that compound set.
Consider the function:

epC := Tuple( epS, epT, epU );

Here epS, epT, and epU contain the elements, and Tuple will create a corresponding element in the compound set C, where C is the range of the element parameter epC. With the deprecation of compound sets, Tuple is no longer supported, and this should be replaced by:

epC := first( iSMI_C | ( epS, epT, epU, iSMI_C ) in sSetMappingRelation_C );

### Step 7: Move compound indexes to set mapping sets

To ensure screen definitions are not broken, you must move indexes from the declarations of compound sets to the declaration of the corresponding set mapping set. To move an index that is declared as part of a set declaration:

1. Delete it using the wizard at the index attribute.
2. Re-create it in the destination set.

### Step 8: Move shadow cases back to original cases

You can convert shadow cases created in step 5 back to the original case locations using the same tool in the DeprecateCompoundSetUtilities library. You can convert multiple cases contained in one folder using the Folder option, or convert each case separately using the File option.

1. Go to the Deprecate Compound Set Control Page of the DeprecateCompoundSetUtilities library.
2. In the section labeled Backward - creating cases with original identifiers without compound data:
   1. Specify the input file/folder (to pull cases containing converted data).
   2. Specify the output file/folder (to push to the original case folder location).
   3. Then click the Copy button to convert.

### Step 9: Remove the library DeprecateCompoundSetUtilities

Now that you have removed compound sets from your project, you can remove the library DeprecateCompoundSetUtilities.

## Glossary of Terms Used

Atomic sets
    One-dimensional sets that are not compound sets are called atomic sets. Examples of atomic sets are sets containing names, calendars and subsets of the set Integers. To declare a relation, AIMMS only allows atomic sets in the subset of attribute of that relation.

Atomic index
    An atomic index is an index in an atomic set. A compound index is an index in a compound set.

Set mapping
    A set mapping is a collection of identifiers that together provide an alternative for the functionality of a single compound set. A set mapping consists of:
    • A set mapping set: an atomic set with elements that look like elements from a compound set.
    • A set mapping index: an index in a set mapping set. Note that a set mapping index is an atomic index.
    • A set mapping relation: a relation that contains the same set of tuples as a compound set.
    • A set mapping parameter: an element parameter that contains the data to handle the “tags” functionality of a compound set.

Compound data
    A compound data identifier is a parameter, variable, or constraint with at least one compound index in its index domain. Thus, compound data is the data of a compound data identifier.

Screen definition
    A screen definition is a serialized representation of a screen. The point-and-click types of UI provided by AIMMS, both WinUI and WebUI, store these screen definitions as text files within an AIMMS project.
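To make the set-mapping concept in the glossary concrete, the following is a small conceptual Python sketch (not AIMMS code) of how a compound set of tuples can be replaced by an atomic label set, a relation, and per-tag lookups. All names and the toy data are illustrative assumptions, not part of the AIMMS library.

```python
# Conceptual sketch in Python (not AIMMS) of the set-mapping idea: a compound
# set of tuples is replaced by an atomic "set mapping set" of labels, a
# relation holding the same tuples, and one element parameter per tag.
from itertools import product

S, T, U = ["s1", "s2"], ["t1", "t2"], ["u1"]
R = {("s1", "t1", "u1"), ("s2", "t2", "u1")}          # plays the role of the relation R

# Compound set C = { (i,j,k) in S x T x U | (i,j,k) in R }
C = [t for t in product(S, T, U) if t in R]

# Set mapping set: one atomic label per tuple of C
labels = ["({0},{1},{2})".format(*t) for t in C]

# Set mapping relation: each tuple extended with its label
set_mapping_relation = {t + (lbl,) for t, lbl in zip(C, labels)}

# Set mapping parameters: label -> component, replacing the tags TS, TT, TU
tag_TS = {lbl: t[0] for t, lbl in zip(C, labels)}
tag_TT = {lbl: t[1] for t, lbl in zip(C, labels)}
tag_TU = {lbl: t[2] for t, lbl in zip(C, labels)}

# A parameter P(h) over the compound set becomes a dict keyed by labels, and
# P(i,j,k) is recovered through the set mapping relation:
P = {lbl: 1.0 for lbl in labels}
P_ijk = {(i, j, k): P[lbl] for (i, j, k, lbl) in set_mapping_relation}
print(labels, tag_TS, P_ijk, sep="\n")
```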
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20391172170639038, "perplexity": 3252.7518905795664}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662588661.65/warc/CC-MAIN-20220525151311-20220525181311-00131.warc.gz"}
https://www.youphysics.education/scalar-and-vector-quantities/dot-product/dot-product-problems/
# Dot product problems with solution

Problem statement: Given the vectors: A = 3i + 2j – k and B = 5i + 5j, find:

1. The dot product A∙B.
2. The projection of A onto B.
3. The angle between A and B.
4. A vector of magnitude 2 in the XY plane perpendicular to B.

Solution: It is essential when working with vectors to use proper notation. Always draw an arrow over the letters representing vectors. You can also use bold characters to represent a vector quantity.

The dot product of two vectors A and B expressed in unit vector notation is given by:

$$\vec{A}\cdot\vec{B} = A_x B_x + A_y B_y + A_z B_z = (3)(5) + (2)(5) + (-1)(0) = 25$$

Remember that the dot product returns a scalar (a number).

To find the projection of A onto B we divide the dot product we have determined before by the magnitude of B (visit the page dot product for more information):

$$\mathrm{proj}_{\vec{B}}\,\vec{A} = \frac{\vec{A}\cdot\vec{B}}{|\vec{B}|} = \frac{25}{5\sqrt{2}} = \frac{5\sqrt{2}}{2} \approx 3.5$$

The angle between both vectors is given by the expression we derived when we defined the dot product:

$$\cos\theta = \frac{\vec{A}\cdot\vec{B}}{|\vec{A}|\,|\vec{B}|} = \frac{25}{\sqrt{14}\cdot 5\sqrt{2}} \approx 0.94 \qquad \Rightarrow \qquad \theta \approx 19^\circ$$

In order to find a vector C perpendicular to B we set their dot product equal to zero. Vector C written in unit vector notation is given by:

$$\vec{C} = C_x\,\vec{i} + C_y\,\vec{j}$$

And the dot product is:

$$\vec{C}\cdot\vec{B} = 5C_x + 5C_y = 0 \qquad \Rightarrow \qquad C_y = -C_x$$

The previous equation is the first condition that the components of C must obey. Moreover, its magnitude has to be 2:

$$|\vec{C}| = \sqrt{C_x^2 + C_y^2} = 2$$

And substituting the condition given by the dot product:

$$\sqrt{2C_x^2} = 2 \qquad \Rightarrow \qquad C_x = \pm\sqrt{2}$$

Finally, C expressed in unit vector notation is given by:

$$\vec{C} = \sqrt{2}\,\vec{i} - \sqrt{2}\,\vec{j}$$
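As a quick numerical cross-check of the results above, here is a short numpy snippet. It is an illustrative aside, not part of the original YouPhysics post, and the variable names are arbitrary.

```python
import numpy as np

A = np.array([3.0, 2.0, -1.0])   # A = 3i + 2j - k
B = np.array([5.0, 5.0, 0.0])    # B = 5i + 5j

dot = A @ B                                    # 1. dot product -> 25.0
proj = dot / np.linalg.norm(B)                 # 2. projection of A onto B -> ~3.54
cos_theta = dot / (np.linalg.norm(A) * np.linalg.norm(B))
theta_deg = np.degrees(np.arccos(cos_theta))   # 3. angle -> ~19.1 degrees

# 4. C = (cx, cy, 0) with C.B = 0 and |C| = 2  =>  cy = -cx and |cx|*sqrt(2) = 2
cx = np.sqrt(2.0)
C = np.array([cx, -cx, 0.0])
assert np.isclose(C @ B, 0.0) and np.isclose(np.linalg.norm(C), 2.0)

print(dot, proj, theta_deg, C)
```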
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905422925949097, "perplexity": 1031.9038782065222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608702.10/warc/CC-MAIN-20210613100830-20210613130830-00584.warc.gz"}
https://scirate.com/search?q=au:Ye_J+in:stat
# Search SciRate ### results for au:Ye_J in:stat • Mutual Information (MI) is often used for feature selection when developing classifier models. Estimating the MI for a subset of features is often intractable. We demonstrate, that under the assumptions of conditional independence, MI between a subset of features can be expressed as the Conditional Mutual Information (CMI) between pairs of features. But selecting features with the highest CMI turns out to be a hard combinatorial problem. In this work, we have applied two unique global methods, Truncated Power Method (TPower) and Low Rank Bilinear Approximation (LowRank), to solve the feature selection problem. These algorithms provide very good approximations to the NP-hard CMI based feature selection problem. We experimentally demonstrate the effectiveness of these procedures across multiple datasets and compare them with existing MI based global and iterative feature selection procedures. • May 09 2017 stat.ML cond-mat.dis-nn cs.AI cs.CV cs.LG arXiv:1705.02894v2 Generative Adversarial Nets (GANs) represent an important milestone for effective generative models, which has inspired numerous variants seemingly different from each other. One of the main contributions of this paper is to reveal a unified geometric structure in GAN and its variants. Specifically, we show that the adversarial generative model training can be decomposed into three geometric steps: separating hyperplane search, discriminator parameter update away from the separating hyperplane, and the generator update along the normal vector direction of the separating hyperplane. This geometric intuition reveals the limitations of the existing approaches and leads us to propose a new formulation called geometric GAN using SVM separating hyperplane that maximizes the margin. Our theoretical analysis shows that the geometric GAN converges to a Nash equilibrium between the discriminator and generator. In addition, extensive numerical results show that the superior performance of geometric GAN. • Genome-wide association studies (GWAS) have achieved great success in the genetic study of Alzheimer's disease (AD). Collaborative imaging genetics studies across different research institutions show the effectiveness of detecting genetic risk factors. However, the high dimensionality of GWAS data poses significant challenges in detecting risk SNPs for AD. Selecting relevant features is crucial in predicting the response variable. In this study, we propose a novel Distributed Feature Selection Framework (DFSF) to conduct the large-scale imaging genetics studies across multiple institutions. To speed up the learning process, we propose a family of distributed group Lasso screening rules to identify irrelevant features and remove them from the optimization. Then we select the relevant group features by performing the group Lasso feature selection process in a sequence of parameters. Finally, we employ the stability selection to rank the top risk SNPs that might help detect the early stage of AD. To the best of our knowledge, this is the first distributed feature selection model integrated with group Lasso feature selection as well as detecting the risk genetic factors across multiple research institutions system. Empirical studies are conducted on 809 subjects with 5.9 million SNPs which are distributed across several individual institutions, demonstrating the efficiency and effectiveness of the proposed method. 
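The first abstract in this group applies a Truncated Power Method (TPower) to a matrix of pairwise (conditional) mutual-information scores. As a rough illustration of that building block only, here is a generic truncated power iteration on a symmetric score matrix. It is a hedged sketch, not the authors' code; the MI/CMI estimation that would produce the matrix M is omitted, and all names are illustrative.

```python
import numpy as np

def truncated_power_iteration(M, k, n_iter=100, seed=0):
    """Approximately maximize x^T M x subject to ||x||_2 = 1 and ||x||_0 <= k.

    M is a symmetric (d x d) matrix of pairwise feature scores (for example,
    estimated conditional mutual information between feature pairs).
    Returns the indices of the selected features and the sparse weights.
    """
    d = M.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(n_iter):
        y = M @ x
        support = np.argsort(np.abs(y))[-k:]   # keep the k largest entries
        x = np.zeros(d)
        x[support] = y[support]
        norm = np.linalg.norm(x)
        if norm == 0.0:
            break
        x /= norm
    return np.flatnonzero(x), x

# Toy usage on a random symmetric score matrix.
rng = np.random.default_rng(1)
A = rng.random((10, 10))
M = (A + A.T) / 2.0
selected, weights = truncated_power_iteration(M, k=3)
print(selected)
```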
• We study an extreme scenario in multi-label learning where each training instance is endowed with a single one-bit label out of multiple labels. We formulate this problem as a non-trivial special case of one-bit rank-one matrix sensing and develop an efficient non-convex algorithm based on alternating power iteration. The proposed algorithm is able to recover the underlying low-rank matrix model with linear convergence. For a rank-$k$ model with $d_1$ features and $d_2$ classes, the proposed algorithm achieves $O(\epsilon)$ recovery error after retrieving $O(k^{1.5}d_1 d_2/\epsilon)$ one-bit labels within $O(kd)$ memory. Our bound is nearly optimal in the order of $O(1/\epsilon)$. This significantly improves the state-of-the-art sampling complexity of one-bit multi-label learning. We perform experiments to verify our theory and evaluate the performance of the proposed algorithm. • Mar 03 2017 stat.ML arXiv:1703.00598v3 We study a fundamental class of regression models called the second order linear model (SLM). The SLM extends the linear model to high order functional space and has attracted considerable research interest recently. Yet how to efficiently learn the SLM under full generality using nonconvex solver still remains an open question due to several fundamental limitations of the conventional gradient descent learning framework. In this study, we try to attack this problem from a gradient-free approach which we call the moment-estimation-sequence (MES) method. We show that the conventional gradient descent heuristic is biased by the skewness of the distribution therefore is no longer the best practice of learning the SLM. Based on the MES framework, we design a nonconvex alternating iteration process to train a $d$-dimension rank-$k$ SLM within $O(kd)$ memory and one-pass of the dataset. The proposed method converges globally and linearly, achieves $\epsilon$ recovery error after retrieving $O[k^{2}d\cdot\mathrm{polylog}(kd/\epsilon)]$ samples. Furthermore, our theoretical analysis reveals that not all SLMs can be learned on every sub-gaussian distribution. When the instances are sampled from a so-called $\tau$-MIP distribution, the SLM can be learned by $O(p/\tau^{2})$ samples where $p$ and $\tau$ are positive constants depending on the skewness and kurtosis of the distribution. For non-MIP distribution, an addition diagonal-free oracle is necessary and sufficient to guarantee the learnability of the SLM. Numerical simulations verify the sharpness of our bounds on the sampling complexity and the linear convergence rate of our algorithm. • We proposed a probabilistic approach to joint modeling of participants' reliability and humans' regularity in crowdsourced affective studies. Reliability measures how likely a subject will respond to a question seriously; and regularity measures how often a human will agree with other seriously-entered responses coming from a targeted population. Crowdsourcing-based studies or experiments, which rely on human self-reported affect, pose additional challenges as compared with typical crowdsourcing studies that attempt to acquire concrete non-affective labels of objects. The reliability of participants has been massively pursued for typical non-affective crowdsourcing studies, whereas the regularity of humans in an affective experiment in its own right has not been thoroughly considered. 
It has been often observed that different individuals exhibit different feelings on the same test question, which does not have a sole correct response in the first place. High reliability of responses from one individual thus cannot conclusively result in high consensus across individuals. Instead, globally testing consensus of a population is of interest to investigators. Built upon the agreement multigraph among tasks and workers, our probabilistic model differentiates subject regularity from population reliability. We demonstrate the method's effectiveness for in-depth robust analysis of large-scale crowdsourced affective data, including emotion and aesthetic assessments collected by presenting visual stimuli to human subjects. • Probabilistic Temporal Tensor Factorization (PTTF) is an effective algorithm to model the temporal tensor data. It leverages a time constraint to capture the evolving properties of tensor data. Nowadays the exploding dataset demands a large scale PTTF analysis, and a parallel solution is critical to accommodate the trend. Whereas, the parallelization of PTTF still remains unexplored. In this paper, we propose a simple yet efficient Parallel Probabilistic Temporal Tensor Factorization, referred to as P$^2$T$^2$F, to provide a scalable PTTF solution. P$^2$T$^2$F is fundamentally disparate from existing parallel tensor factorizations by considering the probabilistic decomposition and the temporal effects of tensor data. It adopts a new tensor data split strategy to subdivide a large tensor into independent sub-tensors, the computation of which is inherently parallel. We train P$^2$T$^2$F with an efficient algorithm of stochastic Alternating Direction Method of Multipliers, and show that the convergence is guaranteed. Experiments on several real-word tensor datasets demonstrate that P$^2$T$^2$F is a highly effective and efficiently scalable algorithm dedicated for large scale probabilistic temporal tensor analysis. • The RNA-sequencing (RNA-seq) is becoming increasingly popular for quantifying gene expression levels. Since the RNA-seq measurements are relative in nature, between-sample normalization of counts is an essential step in differential expression (DE) analysis. The normalization of existing DE detection algorithms is ad hoc and performed once for all prior to DE detection, which may be suboptimal since ideally normalization should be based on non-DE genes only and thus coupled with DE detection. We propose a unified statistical model for joint normalization and DE detection of log-transformed RNA-seq data. Sample-specific normalization factors are modeled as unknown parameters in the gene-wise linear models and jointly estimated with the regression coefficients. By imposing sparsity-inducing L1 penalty (or mixed L1/L2-norm for multiple treatment conditions) on the regression coefficients, we formulate the problem as a penalized least-squares regression problem and apply the augmented lagrangian method to solve it. Simulation studies show that the proposed model and algorithms outperform existing methods in terms of detection power and false-positive rate when more than half of the genes are differentially expressed and/or when the up- and down-regulated genes among DE genes are unbalanced in amount. • Genome-wide association studies (GWAS) offer new opportunities to identify genetic risk factors for Alzheimer's disease (AD). 
Recently, collaborative efforts across different institutions emerged that enhance the power of many existing techniques on individual institution data. However, a major barrier to collaborative studies of GWAS is that many institutions need to preserve individual data privacy. To address this challenge, we propose a novel distributed framework, termed Local Query Model (LQM) to detect risk SNPs for AD across multiple research institutions. To accelerate the learning process, we propose a Distributed Enhanced Dual Polytope Projection (D-EDPP) screening rule to identify irrelevant features and remove them from the optimization. To the best of our knowledge, this is the first successful run of the computationally intensive model selection procedure to learn a consistent model across different institutions without compromising their privacy while ranking the SNPs that may collectively affect AD. Empirical studies are conducted on 809 subjects with 5.9 million SNP features which are distributed across three individual institutions. D-EDPP achieved a 66-fold speed-up by effectively identifying irrelevant features. • We develop an efficient alternating framework for learning a generalized version of Factorization Machine (gFM) on steaming data with provable guarantees. When the instances are sampled from $d$ dimensional random Gaussian vectors and the target second order coefficient matrix in gFM is of rank $k$, our algorithm converges linearly, achieves $O(\epsilon)$ recovery error after retrieving $O(k^{3}d\log(1/\epsilon))$ training instances, consumes $O(kd)$ memory in one-pass of dataset and only requires matrix-vector product operations in each iteration. The key ingredient of our framework is a construction of an estimation sequence endowed with a so-called Conditionally Independent RIP condition (CI-RIP). As special cases of gFM, our framework can be applied to symmetric or asymmetric rank-one matrix sensing problems, such as inductive matrix completion and phase retrieval. • Learning under a Wasserstein loss, a.k.a. Wasserstein loss minimization (WLM), is an emerging research topic for gaining insights from a large set of structured objects. Despite being conceptually simple, WLM problems are computationally challenging because they involve minimizing over functions of quantities (i.e. Wasserstein distances) that themselves require numerical algorithms to compute. In this paper, we introduce a stochastic approach based on simulated annealing for solving WLMs. Particularly, we have developed a Gibbs sampler to approximate effectively and efficiently the partial gradients of a sequence of Wasserstein losses. Our new approach has the advantages of numerical stability and readiness for warm starts. These characteristics are valuable for WLM problems that often require multiple levels of iterations in which the oracle for computing the value and gradient of a loss function is embedded. We applied the method to optimal transport with Coulomb cost and the Wasserstein non-negative matrix factorization problem, and made comparisons with the existing method of entropy regularization. • We propose a framework, named Aggregated Wasserstein, for computing a dissimilarity measure or distance between two Hidden Markov Models with state conditional distributions being Gaussian. For such HMMs, the marginal distribution at any time spot follows a Gaussian mixture distribution, a fact exploited to softly match, aka register, the states in two HMMs. 
We refer to such HMMs as Gaussian mixture model-HMM (GMM-HMM). The registration of states is inspired by the intrinsic relationship of optimal transport and the Wasserstein metric between distributions. Specifically, the components of the marginal GMMs are matched by solving an optimal transport problem where the cost between components is the Wasserstein metric for Gaussian distributions. The solution of the optimization problem is a fast approximation to the Wasserstein metric between two GMMs. The new Aggregated Wasserstein distance is a semi-metric and can be computed without generating Monte Carlo samples. It is invariant to relabeling or permutation of the states. This distance quantifies the dissimilarity of GMM-HMMs by measuring both the difference between the two marginal GMMs and the difference between the two transition matrices. Our new distance is tested on the tasks of retrieval and classification of time series. Experiments on both synthetic data and real data have demonstrated its advantages in terms of accuracy as well as efficiency in comparison with existing distances based on the Kullback-Leibler divergence. • Sparse support vector machine (SVM) is a popular classification technique that can simultaneously learn a small set of the most interpretable features and identify the support vectors. It has achieved great successes in many real-world applications. However, for large-scale problems involving a huge number of samples and extremely high-dimensional features, solving sparse SVMs remains challenging. By noting that sparse SVMs induce sparsities in both feature and sample spaces, we propose a novel approach, which is based on accurate estimations of the primal and dual optima of sparse SVMs, to simultaneously identify the features and samples that are guaranteed to be irrelevant to the outputs. Thus, we can remove the identified inactive samples and features from the training phase, leading to substantial savings in both the memory usage and computational cost without sacrificing accuracy. To the best of our knowledge, the proposed method is the \emphfirst \emphstatic feature and sample reduction method for sparse SVM. Experiments on both synthetic and real datasets (e.g., the kddb dataset with about 20 million samples and 30 million features) demonstrate that our approach significantly outperforms state-of-the-art methods and the speedup gained by our approach can be orders of magnitude. • In a variety of research areas, the weighted bag of vectors and the histogram are widely used descriptors for complex objects. Both can be expressed as discrete distributions. D2-clustering pursues the minimum total within-cluster variation for a set of discrete distributions subject to the Kantorovich-Wasserstein metric. D2-clustering has a severe scalability issue, the bottleneck being the computation of a centroid distribution, called Wasserstein barycenter, that minimizes its sum of squared distances to the cluster members. In this paper, we develop a modified Bregman ADMM approach for computing the approximate discrete Wasserstein barycenter of large clusters. In the case when the support points of the barycenters are unknown and have low cardinality, our method achieves high accuracy empirically at a much reduced computational cost. The strengths and weaknesses of our method and its alternatives are examined through experiments, and we recommend scenarios for their respective usage. Moreover, we develop both serial and parallelized versions of the algorithm. 
By experimenting with large-scale data, we demonstrate the computational efficiency of the new methods and investigate their convergence properties and numerical stability. The clustering results obtained on several datasets in different domains are highly competitive in comparison with some widely used methods in the corresponding areas. • An important endpoint variable in a cocaine rehabilitation study is the time to first relapse of a patient after the treatment. We propose a joint modeling approach based on functional data analysis to study the relationship between the baseline longitudinal cocaine-use pattern and the interval censored time to first relapse. For the baseline cocaine-use pattern, we consider both self-reported cocaine-use amount trajectories and dichotomized use trajectories. Variations within the generalized longitudinal trajectories are modeled through a latent Gaussian process, which is characterized by a few leading functional principal components. The association between the baseline longitudinal trajectories and the time to first relapse is built upon the latent principal component scores. The mean and the eigenfunctions of the latent Gaussian process as well as the hazard function of time to first relapse are modeled nonparametrically using penalized splines, and the parameters in the joint model are estimated by a Monte Carlo EM algorithm based on Metropolis-Hastings steps. An Akaike information criterion (AIC) based on effective degrees of freedom is proposed to choose the tuning parameters, and a modified empirical information is proposed to estimate the variance-covariance matrix of the estimators. • Sparse systems are usually parameterized by a tuning parameter that determines the sparsity of the system. How to choose the right tuning parameter is a fundamental and difficult problem in learning the sparse system. In this paper, by treating the the tuning parameter as an additional dimension, persistent homological structures over the parameter space is introduced and explored. The structures are then further exploited in speeding up the computation using the proposed soft-thresholding technique. The topological structures are further used as multivariate features in the tensor-based morphometry (TBM) in characterizing white matter alterations in children who have experienced severe early life stress and maltreatment. These analyses reveal that stress-exposed children exhibit more diffuse anatomical organization across the whole white matter region. • Stochastic gradient algorithms estimate the gradient based on only one or a few samples and enjoy low computational cost per iteration. They have been widely used in large-scale optimization problems. However, stochastic gradient algorithms are usually slow to converge and achieve sub-linear convergence rates, due to the inherent variance in the gradient computation. To accelerate the convergence, some variance-reduced stochastic gradient algorithms, e.g., proximal stochastic variance-reduced gradient (Prox-SVRG) algorithm, have recently been proposed to solve strongly convex problems. Under the strongly convex condition, these variance-reduced stochastic gradient algorithms achieve a linear convergence rate. However, many machine learning problems are convex but not strongly convex. In this paper, we introduce Prox-SVRG and its projected variant called Variance-Reduced Projected Stochastic Gradient (VRPSG) to solve a class of non-strongly convex optimization problems widely used in machine learning. 
As the main technical contribution of this paper, we show that both VRPSG and Prox-SVRG achieve a linear convergence rate without strong convexity. A key ingredient in our proof is a Semi-Strongly Convex (SSC) inequality which is the first to be rigorously proved for a class of non-strongly convex problems in both constrained and regularized settings. Moreover, the SSC inequality is independent of algorithms and may be applied to analyze other stochastic gradient algorithms besides VRPSG and Prox-SVRG, which may be of independent interest. To the best of our knowledge, this is the first work that establishes the linear convergence rate for the variance-reduced stochastic gradient algorithms on solving both constrained and regularized problems without strong convexity. • Learning a distance function or metric on a given data manifold is of great importance in machine learning and pattern recognition. Many of the previous works first embed the manifold to Euclidean space and then learn the distance function. However, such a scheme might not faithfully preserve the distance function if the original manifold is not Euclidean. Note that the distance function on a manifold can always be well-defined. In this paper, we propose to learn the distance function directly on the manifold without embedding. We first provide a theoretical characterization of the distance function by its gradient field. Based on our theoretical analysis, we propose to first learn the gradient field of the distance function and then learn the distance function itself. Specifically, we set the gradient field of a local distance function as an initial vector field. Then we transport it to the whole manifold via heat flow on vector fields. Finally, the geodesic distance function can be obtained by requiring its gradient field to be close to the normalized vector field. Experimental results on both synthetic and real data demonstrate the effectiveness of our proposed algorithm. • In this paper, we propose an efficient and scalable low rank matrix completion algorithm. The key idea is to extend orthogonal matching pursuit method from the vector case to the matrix case. We further propose an economic version of our algorithm by introducing a novel weight updating rule to reduce the time and storage complexity. Both versions are computationally inexpensive for each matrix pursuit iteration, and find satisfactory results in a few iterations. Another advantage of our proposed algorithm is that it has only one tunable parameter, which is the rank. It is easy to understand and to use by the user. This becomes especially important in large-scale learning problems. In addition, we rigorously show that both versions achieve a linear convergence rate, which is significantly better than the previous known results. We also empirically compare the proposed algorithms with several state-of-the-art matrix completion algorithms on many real-world datasets, including the large-scale recommendation dataset Netflix as well as the MovieLens datasets. Numerical results show that our proposed algorithm is more efficient than competing algorithms while achieving similar or better prediction performance. • In this paper, we propose a novel framework to analyze the theoretical properties of the learning process for a representative type of domain adaptation, which combines data from multiple sources and one target (or briefly called representative domain adaptation). 
In particular, we use the integral probability metric to measure the difference between the distributions of two domains and meanwhile compare it with the H-divergence and the discrepancy distance. We develop the Hoeffding-type, the Bennett-type and the McDiarmid-type deviation inequalities for multiple domains respectively, and then present the symmetrization inequality for representative domain adaptation. Next, we use the derived inequalities to obtain the Hoeffding-type and the Bennett-type generalization bounds respectively, both of which are based on the uniform entropy number. Moreover, we present the generalization bounds based on the Rademacher complexity. Finally, we analyze the asymptotic convergence and the rate of convergence of the learning process for representative domain adaptation. We discuss the factors that affect the asymptotic behavior of the learning process, and the numerical experiments support our theoretical findings as well. Meanwhile, we give a comparison with the existing results of domain adaptation and the classical results under the same-distribution assumption. • We consider forward-backward greedy algorithms for solving sparse feature selection problems with general convex smooth functions. A state-of-the-art greedy method, the Forward-Backward greedy algorithm (FoBa-obj), requires solving a large number of optimization problems, so it is not scalable to large problems. The FoBa-gdt algorithm, which uses the gradient information for feature selection at each forward iteration, significantly improves the efficiency of FoBa-obj. In this paper, we systematically analyze the theoretical properties of both forward-backward greedy algorithms. Our main contributions are: 1) We derive better theoretical bounds than existing analyses regarding FoBa-obj for general smooth convex functions; 2) We show that FoBa-gdt achieves the same theoretical performance as FoBa-obj under the same condition, namely the restricted strong convexity condition. Our new bounds are consistent with the bounds of a special case (least squares) and fill a previously existing theoretical gap for general convex smooth functions; 3) We show that the restricted strong convexity condition is satisfied if the number of independent samples is more than $\bar{k}\log d$ where $\bar{k}$ is the sparsity number and $d$ is the dimension of the variable; 4) We apply FoBa-gdt (with the conditional random field objective) to the sensor selection problem for human indoor activity recognition and our results show that FoBa-gdt outperforms other methods (including the ones based on forward greedy selection and L1-regularization). • The support vector machine (SVM) is a widely used method for classification. Although many efforts have been devoted to developing efficient solvers, it remains challenging to apply SVM to large-scale problems. A nice property of SVM is that the non-support vectors have no effect on the resulting classifier. Motivated by this observation, we present fast and efficient screening rules to discard non-support vectors by analyzing the dual problem of SVM via variational inequalities (DVI). As a result, the number of data instances to be entered into the optimization can be substantially reduced.
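For orientation, a stripped-down sketch of the forward, gradient-driven half of the FoBa-gdt procedure discussed above is given below for a least-squares loss; the adaptive backward (deletion) step and its stopping rule are omitted, so this illustrates only the selection mechanism, not the full algorithm.

```python
# Forward, gradient-driven greedy feature selection (FoBa-gdt flavor, forward half only)
import numpy as np

def forward_gdt(A, b, k_max):
    n, d = A.shape
    support, w = [], np.zeros(d)
    for _ in range(k_max):
        grad = A.T @ (A @ w - b) / n            # full gradient of the loss
        grad[support] = 0.0                     # ignore already-selected features
        j = int(np.argmax(np.abs(grad)))        # feature with largest gradient magnitude
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)  # refit on support
        w = np.zeros(d)
        w[support] = coef
    return support, w

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 30))
x = np.zeros(30); x[[2, 7, 11]] = [1.5, -2.0, 1.0]
b = A @ x + 0.05 * rng.standard_normal(100)
print(forward_gdt(A, b, k_max=3)[0])            # expected to recover {2, 7, 11}
```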
Some appealing features of our screening method are: (1) DVI is safe in the sense that the vectors discarded by DVI are guaranteed to be non-support vectors; (2) the data set needs to be scanned only once to run the screening, whose computational cost is negligible compared to that of solving the SVM problem; (3) DVI is independent of the solvers and can be integrated with any existing efficient solvers. We also show that the DVI technique can be extended to detect non-support vectors in the least absolute deviations regression (LAD). To the best of our knowledge, there are currently no screening methods for LAD. We have evaluated DVI on both synthetic and real data sets. Experiments indicate that DVI significantly outperforms the existing state-of-the-art screening rules for SVM, and is very effective in discarding non-support vectors for LAD. The speedup gained by DVI rules can be up to two orders of magnitude. • Sparse learning techniques have been routinely used for feature selection as the resulting model usually has a small number of non-zero entries. Safe screening, which eliminates the features that are guaranteed to have zero coefficients for a certain value of the regularization parameter, is a technique for improving the computational efficiency. Safe screening is gaining increasing attention since 1) solving sparse learning formulations usually has a high computational cost especially when the number of features is large and 2) one needs to try several regularization parameters to select a suitable model. In this paper, we propose an approach called "Sasvi" (Safe screening with variational inequalities). Sasvi makes use of the variational inequality that provides the sufficient and necessary optimality condition for the dual problem. Several existing approaches for Lasso screening can be cast as relaxed versions of the proposed Sasvi, so Sasvi provides a stronger safe screening rule. We further study the monotone properties of Sasvi for Lasso, based on which a sure removal regularization parameter can be identified for each feature. Experimental results on both synthetic and real data sets are reported to demonstrate the effectiveness of the proposed Sasvi for Lasso screening. • Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q>1 is attractive in many applications of regression and classification in that it facilitates group sparsity in the model. The resulting optimization problem is, however, challenging to solve due to the inherent structure of the mixed-norm regularization. Existing work deals with the special cases q=1, 2, and infinity, and these methods cannot be easily extended to the general case. In this paper, we propose an efficient algorithm based on the accelerated gradient method for solving the general l1/lq-regularized problem. One key building block of the proposed algorithm is the l1/lq-regularized Euclidean projection (EP_1q). Our theoretical analysis reveals the key properties of EP_1q and illustrates why EP_1q for the general q is significantly more challenging to solve than the special cases. Based on our theoretical analysis, we develop an efficient algorithm for EP_1q by solving two zero-finding problems.
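For the special case q=2 noted above, the l1/lq-regularized Euclidean projection has a simple closed form (group-wise soft-thresholding); a small sketch follows. The general-q solver built on two zero-finding problems is substantially more involved and is not reproduced here.

```python
# EP_1q for q = 2: argmin_x 0.5*||x - v||^2 + lam * sum_g ||x_g||_2
import numpy as np

def ep_1q_q2(v, groups, lam):
    """Group-wise soft-thresholding; `groups` is a list of index arrays."""
    x = np.zeros_like(v)
    for g in groups:
        norm = np.linalg.norm(v[g])
        if norm > lam:
            x[g] = (1.0 - lam / norm) * v[g]   # shrink the group toward zero
    return x

v = np.array([3.0, -1.0, 0.2, 0.1, 2.0, -2.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(ep_1q_q2(v, groups, lam=1.0))            # the weak middle group is zeroed out
```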
To further improve the efficiency of solving high-dimensional mixed-norm regularized problems, we propose a screening method which is able to quickly identify the inactive groups, i.e., groups that have 0 components in the solution. This may lead to a substantial reduction in the number of groups to be entered into the optimization. An appealing feature of our screening method is that the data set needs to be scanned only once to run the screening. Compared to that of solving the mixed-norm regularized problems, the computational cost of our screening test is negligible. The key of the proposed screening method is an accurate sensitivity analysis of the dual optimal solution when the regularization parameter varies. Experimental results demonstrate the efficiency of the proposed algorithm. • The l1-regularized logistic regression (or sparse logistic regression) is a widely used method for simultaneous classification and feature selection. Although many recent efforts have been devoted to its efficient implementation, its application to high-dimensional data still poses significant challenges. In this paper, we present a fast and effective sparse logistic regression screening rule (Slores) to identify the 0 components in the solution vector, which may lead to a substantial reduction in the number of features to be entered into the optimization. An appealing feature of Slores is that the data set needs to be scanned only once to run the screening and its computational cost is negligible compared to that of solving the sparse logistic regression problem. Moreover, Slores is independent of solvers for sparse logistic regression, so Slores can be integrated with any existing solver to improve the efficiency. We have evaluated Slores using high-dimensional data sets from different applications. Extensive experimental results demonstrate that Slores outperforms the existing state-of-the-art screening rules and the efficiency of solving sparse logistic regression is improved by one order of magnitude in general. • We consider the following signal recovery problem: given a measurement matrix $\Phi\in \mathbb{R}^{n\times p}$ and a noisy observation vector $c\in \mathbb{R}^{n}$ constructed from $c = \Phi\theta^* + \epsilon$ where $\epsilon\in \mathbb{R}^{n}$ is the noise vector whose entries follow an i.i.d. centered sub-Gaussian distribution, how to recover the signal $\theta^*$ if $D\theta^*$ is sparse under a linear transformation $D\in\mathbb{R}^{m\times p}$? One natural method using convex optimization is to solve the following problem: $$\min_\theta \frac{1}{2}\|\Phi\theta - c\|^2 + \lambda\|D\theta\|_1.$$ This paper provides an upper bound of the estimate error and shows the consistency property of this method by assuming that the design matrix $\Phi$ is a Gaussian random matrix. Specifically, we show 1) in the noiseless case, if the condition number of $D$ is bounded and the measurement number $n\geq \Omega(s\log(p))$ where $s$ is the sparsity number, then the true solution can be recovered with high probability; and 2) in the noisy case, if the condition number of $D$ is bounded and the number of measurements increases faster than $s\log(p)$, that is, $s\log(p)=o(n)$, the estimate error converges to zero with probability 1 when $p$ and $s$ go to infinity. Our results are consistent with those for the special case $D=\mathbf{I}_{p\times p}$ (equivalently LASSO) and improve the existing analysis. The condition number of $D$ plays a critical role in our analysis.
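One standard way to solve the transformed-l1 problem displayed above is ADMM on the splitting z = Dθ; the sketch below is only an illustration (the fixed penalty parameter, iteration count, and dense linear solve are our own simplifications, not part of the paper).

```python
# ADMM sketch for min_theta 0.5*||Phi theta - c||^2 + lam*||D theta||_1
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def generalized_lasso_admm(Phi, c, D, lam, rho=1.0, iters=500):
    p = Phi.shape[1]
    theta = np.zeros(p)
    z = np.zeros(D.shape[0])
    u = np.zeros(D.shape[0])
    lhs = Phi.T @ Phi + rho * D.T @ D      # formed once; solved against each iteration
    rhs0 = Phi.T @ c
    for _ in range(iters):
        theta = np.linalg.solve(lhs, rhs0 + rho * D.T @ (z - u))   # theta-update
        z = soft(D @ theta + u, lam / rho)                         # z-update (shrinkage)
        u = u + D @ theta - z                                      # dual update
    return theta

# toy fused-lasso example: D is the first-difference operator
rng = np.random.default_rng(0)
n, p = 80, 40
Phi = rng.standard_normal((n, p))
theta_true = np.repeat([0.0, 2.0, -1.0, 0.0], 10)   # piecewise-constant signal
c = Phi @ theta_true + 0.05 * rng.standard_normal(n)
D = np.eye(p, k=1)[:p - 1] - np.eye(p)[:p - 1]      # (p-1) x p difference matrix
print(np.round(generalized_lasso_admm(Phi, c, D, lam=2.0), 1))
```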
We consider the condition numbers in two cases including the fused LASSO and the random graph: the condition number in the fused LASSO case is bounded by a constant, while the condition number in the random graph case is bounded with high probability if $\frac{m}{p}$ (i.e., $\frac{\#\text{edge}}{\#\text{vertex}}$) is larger than a certain constant. Numerical simulations are consistent with our theoretical results. • Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the nonconvex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets. • Lasso is a widely used regression technique to find sparse representations. When the dimension of the feature space and the number of samples are extremely large, solving the Lasso problem remains challenging. To improve the efficiency of solving large-scale Lasso problems, El Ghaoui and his colleagues have proposed the SAFE rules which are able to quickly identify the inactive predictors, i.e., predictors that have $0$ components in the solution vector. Then, the inactive predictors or features can be removed from the optimization problem to reduce its scale. By transforming the standard Lasso to its dual form, it can be shown that the inactive predictors correspond to inactive constraints on the optimal dual solution. In this paper, we propose an efficient and effective screening rule via Dual Polytope Projections (DPP), which is mainly based on the uniqueness and nonexpansiveness of the optimal dual solution due to the fact that the feasible set in the dual space is a convex and closed polytope. Moreover, we show that our screening rule can be extended to identify inactive groups in group Lasso. To the best of our knowledge, there is currently no "exact" screening rule for group Lasso. We have evaluated our screening rule using synthetic and real data sets. Results show that our rule is more effective in identifying inactive predictors than existing state-of-the-art screening rules for Lasso. • Oct 23 2012 stat.ML arXiv:1210.5806v1 Multi-task sparse feature learning aims to improve the generalization performance by exploiting the shared features among tasks. It has been successfully applied to many applications including computer vision and biomedical informatics.
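As a rough sketch of the GIST-style iteration described above (a BB-initialized step size followed by a backtracking sufficient-decrease check), the snippet below uses the plain l1 penalty, whose proximal map is ordinary soft-thresholding, in place of the non-convex penalties treated in the paper, and a monotone line search in place of the non-monotone one.

```python
# Simplified, monotone GIST-style iteration for an l1-regularized least-squares objective
import numpy as np

def gist_l1(A, b, lam, iters=100, sigma=1e-4):
    n, d = A.shape
    loss = lambda x: 0.5 * np.sum((A @ x - b) ** 2) / n
    grad = lambda x: A.T @ (A @ x - b) / n
    obj = lambda x: loss(x) + lam * np.sum(np.abs(x))
    soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    x, t = np.zeros(d), 1.0
    g = grad(x)
    for _ in range(iters):
        while True:                                    # backtracking line search on t
            x_new = soft(x - g / t, lam / t)           # proximal step with step size 1/t
            if obj(x_new) <= obj(x) - 0.5 * sigma * t * np.sum((x_new - x) ** 2):
                break                                  # sufficient decrease achieved
            t *= 2.0
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        t = max(np.dot(s, y) / max(np.dot(s, s), 1e-12), 1e-8)   # BB initialization
        x, g = x_new, g_new
    return x
```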
Most of the existing multi-task sparse feature learning algorithms are formulated as a convex sparse regularization problem, which is usually suboptimal, due to its looseness for approximating an $\ell_0$-type regularizer. In this paper, we propose a non-convex formulation for multi-task sparse feature learning based on a novel non-convex regularizer. To solve the non-convex optimization problem, we propose a Multi-Stage Multi-Task Feature Learning (MSMTFL) algorithm; we also provide intuitive interpretations, detailed convergence and reproducibility analysis for the proposed algorithm. Moreover, we present a detailed theoretical analysis showing that MSMTFL achieves a better parameter estimation error bound than the convex formulation. Empirical studies on both synthetic and real-world data sets demonstrate the effectiveness of MSMTFL in comparison with the state of the art multi-task sparse feature learning algorithms. • Sep 11 2012 cs.LG stat.ML arXiv:1209.2139v2 In this paper, we consider the problem of estimating multiple graphical models simultaneously using the fused lasso penalty, which encourages adjacent graphs to share similar structures. A motivating example is the analysis of brain networks of Alzheimer's disease using neuroimaging data. Specifically, we may wish to estimate a brain network for the normal controls (NC), a brain network for the patients with mild cognitive impairment (MCI), and a brain network for Alzheimer's patients (AD). We expect the two brain networks for NC and MCI to share common structures but not to be identical to each other; similarly for the two brain networks for MCI and AD. The proposed formulation can be solved using a second-order method. Our key technical contribution is to establish the necessary and sufficient condition for the graphs to be decomposable. Based on this key property, a simple screening rule is presented, which decomposes the large graphs into small subgraphs and allows an efficient estimation of multiple independent (small) subgraphs, dramatically reducing the computational cost. We perform experiments on both synthetic and real data; our results demonstrate the effectiveness and efficiency of the proposed approach. • Jun 05 2012 cs.LG stat.ML arXiv:1206.0333v1 We study the problem of estimating multiple predictive functions from a dictionary of basis functions in the nonparametric regression setting. Our estimation scheme assumes that each predictive function can be estimated in the form of a linear combination of the basis functions. By assuming that the coefficient matrix admits a sparse low-rank structure, we formulate the function estimation problem as a convex program regularized by the trace norm and the $\ell_1$-norm simultaneously. We propose to solve the convex program using the accelerated gradient (AG) method and the alternating direction method of multipliers (ADMM) respectively; we also develop efficient algorithms to solve the key components in both AG and ADMM. In addition, we conduct theoretical analysis on the proposed function estimation scheme: we derive a key property of the optimal solution to the convex program; based on an assumption on the basis functions, we establish a performance bound of the proposed function estimation scheme (via the composite regularization). Simulation studies demonstrate the effectiveness and efficiency of the proposed algorithms. • Sparse feature selection has been demonstrated to be effective in handling high-dimensional data. 
While promising, most of the existing works use convex methods, which may be suboptimal in terms of the accuracy of feature selection and parameter estimation. In this paper, we extend a nonconvex paradigm to sparse group feature selection, which is motivated by applications that require identifying the underlying group structure and performing feature selection simultaneously. The main contributions of this article are twofold: (1) statistically, we introduce a nonconvex sparse group feature selection model which can reconstruct the oracle estimator. Therefore, consistent feature selection and parameter estimation can be achieved; (2) computationally, we propose an efficient algorithm that is applicable to large-scale problems. Numerical results suggest that the proposed nonconvex method compares favorably against its competitors on synthetic data and real-world applications, thus achieving the desired goal of delivering high performance. • The problem of joint feature selection across a group of related tasks has applications in many areas including biomedical informatics and computer vision. We consider the l2,1-norm regularized regression model for joint feature selection from multiple tasks, which can be derived in the probabilistic framework by assuming a suitable prior from the exponential family. One appealing feature of the l2,1-norm regularization is that it encourages multiple predictors to share similar sparsity patterns. However, the resulting optimization problem is challenging to solve due to the non-smoothness of the l2,1-norm regularization. In this paper, we propose to accelerate the computation by reformulating it as two equivalent smooth convex optimization problems which are then solved via Nesterov's method, an optimal first-order black-box method for smooth convex optimization. A key building block in solving the reformulations is the Euclidean projection. We show that the Euclidean projection for the first reformulation can be analytically computed, while the Euclidean projection for the second one can be computed in linear time. Empirical evaluations on several data sets verify the efficiency of the proposed algorithms.
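The analytic Euclidean projections mentioned above are not reproduced here, but a closely related building block with the same flavor, the proximal operator of the l2,1 norm (row-wise group shrinkage of a feature-by-task coefficient matrix), can be written in a few lines:

```python
# Proximal operator of the l2,1 norm: argmin_X 0.5*||X - W||_F^2 + tau * sum_i ||X_i.||_2
import numpy as np

def prox_l21(W, tau):
    """Row-wise shrinkage: one row per feature, one column per task."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * W

W = np.array([[3.0, 4.0],      # strong feature shared by both tasks
              [0.3, -0.2],     # weak feature
              [1.0, 1.0]])
print(prox_l21(W, tau=1.0))    # the weak feature's row is zeroed out
```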
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8146376609802246, "perplexity": 420.98184922931955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128321025.86/warc/CC-MAIN-20170627064714-20170627084714-00139.warc.gz"}
https://www.physicsforums.com/threads/velocity-change-due-to-load-dropped-vertically-onto-trolley.862308/
# Velocity change due to load dropped vertically onto trolley Tags: 1. Mar 16, 2016 ### Maka42 A trolley is moving at a constant speed down a track, with no net force acting upon it. A heavy load is dropped vertically on top of the moving trolley. What happens to the trolley's speed? a) It stays the same, b) It decreases, c) It becomes zero, d) It is impossible to say, e) It increases (x) I thought that it would increase, as when you increase the mass the component of weight acting to pull the trolley down the slope would increase and so it would begin to accelerate; unfortunately the computer told me I was wrong. The question doesn't really say if there is still no net force after the mass was added. I only have one try left on this question and I can't find any relevant information anywhere! Any help would be greatly appreciated! :) 2. Mar 16, 2016 ### Staff: Mentor Hi Maka42. This doesn't say it's sliding down a slope; it just means the trolley is moving along a track. In future, please retain and make use of the template headers that are provided when posting to the homework forum. 3. Mar 16, 2016 ### Maka42 I interpreted it that way at first as well, I feel like "down a track" can mean it's either going down a slope or along a flat surface. If it were a flat surface, would that mean the kinetic force of friction would increase and thus make it go slower? 4. Mar 16, 2016 ### Staff: Mentor You are not told there is friction. 5. Mar 16, 2016 ### Maka42 Hmm, well if there is no friction then I guess that means the velocity would stay the same. Thanks for your help! I guess my problem was more with the English rather than the physics, good thing I'm doing a physics degree rather than an English one! 6. Mar 16, 2016 ### Staff: Mentor You don't get marked for guesses! You need to justify your answer soundly based on physics principles. 7. Mar 16, 2016 ### Maka42 That's very true, well I suppose since there is no friction in the horizontal direction, the increase in weight would simply be balanced by the reaction force and so there would still be no net force, meaning the velocity would be unchanged. 8. Mar 16, 2016 ### haruspex As it happens, it doesn't matter which way you interpret it, as horizontal without friction or down a slope with friction exactly matching the downslope component of gravity. The increased mass would increase both forces in proportion, so no gain in speed from that. 9. Mar 16, 2016 ### Maka42 Oh, it never occurred to me that both the forces would change proportionally. Thanks for your reply, I definitely won't be making these mistakes again! 10. Mar 16, 2016 ### Staff: Mentor <Mentor's note: Thread title changed to be more descriptive of problem> 11. Mar 16, 2016 ### Staff: Mentor So you're saying the load would speed up so it exactly matches the horizontal speed of the trolley before it became loaded? 12. Mar 16, 2016 ### Maka42 Well I thought that the speed would remain constant throughout because there is no net force horizontally, and due to Newton's first law the speed would remain unchanged. I feel as though I'm missing something important though. 13. Mar 16, 2016 ### Staff: Mentor If the cargo is going to acquire the speed of the unloaded trolley, you'll need to explain where the energy to do this will come from. Newton's Law is written as applying to "a body", whereas in the situation here we have two bodies that combine into one. 14. Mar 16, 2016 ### haruspex I encourage students to develop a feel for mechanics problems by thinking themselves into it.
While running, you grab a heavy package off a table next to you. Does it affect your speed? What force do you feel? If that doesn't do it for you, what conservation laws can you quote that might be relevant? 15. Mar 17, 2016 ### Staff: Mentor Note: I have reset the "solved" tag in this thread's subject line, because I see no indication that the problem has been solved. 16. Mar 17, 2016 ### Maka42 Hmm, well now that I think about it, the total momentum before and after is different, as the load is moving vertically and the cart is moving horizontally. Which means an external force is acting. 17. Mar 17, 2016 ### haruspex Consider those two directions separately. In each direction, how do you know that total momentum (of the cart+load system) has changed; what external force might account for it? 18. Mar 20, 2016 ### Maka42 Hmm, all I can think of is the normal force adjusting due to the momentum of the weight. But then I'm not really sure how that would change the velocity if there's no friction. 19. Mar 20, 2016 ### haruspex You mean the momentum of the dropped load. Viewing the cart plus load as a system, the vertical momentum story is a bit complicated. It isn't really relevant to the problem, but here goes. While the load is falling, the total gravitational force exceeds the normal force (which is only matching the cart's weight), so the system is gaining downward momentum. On landing, there is a sudden, very large increase in the normal force for a fraction of a second. The extra momentum implied matches the momentum gained while the load was falling, leaving the system with no net vertical momentum. So, back to the horizontal. You agree there is no horizontal force on the system. So what does conservation of momentum in that direction tell you?
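For reference, the horizontal bookkeeping the thread is steering toward can be written out as follows; the symbols $m_t$ (trolley mass), $m_l$ (load mass), $v_0$ and $v_1$ (speeds before and after the landing) are labels introduced here, not in the thread. With no horizontal external force on the trolley-plus-load system, horizontal momentum is conserved:

$$m_t v_0 = (m_t + m_l)\, v_1 \quad\Longrightarrow\quad v_1 = \frac{m_t}{m_t + m_l}\, v_0 < v_0,$$

so the trolley slows down (option b); the kinetic energy that seems to go missing is dissipated in the frictional interaction that drags the load up to the trolley's speed.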
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.914872944355011, "perplexity": 1004.3400861364614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102819.58/warc/CC-MAIN-20170817013033-20170817033033-00526.warc.gz"}
https://math.stackexchange.com/questions/2589178/finding-eigenvectors-and-eigenvalues-of-symmetric-matrix-dimension-n
# Finding Eigenvectors and Eigenvalues of Symmetric Matrix Dimension n. Quite stuck with the following question: Find the eigenvalues and eigenvectors of: $\begin{bmatrix}-2 & 1 & 0 & \cdots & 0\\ 1 & -2 & 1 & \ddots & \vdots\\ 0 & 1 & -2 & \ddots & 0\\ \vdots & \ddots & \ddots & \ddots & 1\\ 0 & \cdots & 0 & 1 & -2\end{bmatrix}$ Where the matrix is $n \times n$. Problems: I found the eigenvalues for the two-dimensional case to be $\lambda= -1,-3$ and something different for the three-dimensional case, so I had no idea how to generalize to $n$ dimensions. Any help would be appreciated. Hint: let $d_n=\det (\lambda I_n -A_n)$. Expanding along the first row, this is equal to $(\lambda+2)d_{n-1}-(-1)(-1)d_{n-2}=(\lambda+2)d_{n-1}-d_{n-2}$. Can you finish?
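For completeness (this closed form is standard for tridiagonal Toeplitz matrices and is not part of the original exchange): solving the recurrence $d_n=(\lambda+2)d_{n-1}-d_{n-2}$, for example via the substitution $\lambda+2=2\cos\theta$, gives

$$\lambda_k = -2 + 2\cos\!\left(\frac{k\pi}{n+1}\right), \qquad v_k = \left(\sin\frac{k\pi}{n+1},\, \sin\frac{2k\pi}{n+1},\, \dots,\, \sin\frac{nk\pi}{n+1}\right)^{\top}, \qquad k = 1,\dots,n,$$

which reproduces $\lambda=-1,-3$ for $n=2$, since $-2+2\cos(\pi/3)=-1$ and $-2+2\cos(2\pi/3)=-3$.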
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9820119142532349, "perplexity": 204.53968740662992}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578529898.48/warc/CC-MAIN-20190420160858-20190420182858-00151.warc.gz"}
http://www.physicsforums.com/showthread.php?t=371302
## How do we alleviate the shortage of qualified physics teachers? (Apologies if this is long, perhaps you can just cherry-pick the parts that are interesting to you...) By all accounts, it seems there is a worldwide shortage of qualified physics teachers. It has been claimed that this shortage is due to low wages, poor working conditions, lack of support from administration (and policymakers) and maybe a few other reasons I'm forgetting. To take an example from above, jobs in industry pay better than a job in education. I know that teacher salaries can get to be quite high in Canada and some regions of the US (~$75,000, with experience), but this may be low compared to jobs in industry. However, I have a hard time believing that salary is a major factor preventing qualified applicants from entering the profession, as starting salaries for assistant professors are not much better than those for new teachers (and you don't need 4+ years of a PhD and 2+ years of post-docs to be a teacher). Also, some jobs (such as programmers or engineers) may start at similar salaries as starting teachers (I have a friend who just started work as an engineer who is making the same salary that I would if I started work as a new teacher). In addition, one of the main reasons schools are having a hard time retaining teachers is attrition: teachers quitting mid-career to pursue other (more lucrative?) careers. However, my opinion is that the main reason for the shortage of qualified teachers is the lack of prestige in the position. Teaching may not provide the opportunity for physicists to solve interesting and challenging problems and learn new technical skills (such as programming), whereas a job in industry or academia may provide these opportunities. This is arguable, but I can see no other reason why so few would pursue a stable, relatively stress-free environment with a good wage, such as teaching, while so many would pursue a stressful, low-paying profession with no guarantee that you will have a stable job in 5 or 10 years, such as being a graduate student. This may be a sensitive issue, but assuming that the lack of prestige is a major reason for the lack of teachers, I'll leave a few thoughts for discussion... Are there any changes that can be made to persuade more people to choose physics teaching as a profession? Perhaps allowing for more creativity in experimental design by teachers, as well as more substantial experiments or projects. This idea may coincide nicely with a project-based curriculum, where the students have a large, experiment-based project to complete during the course for credit, although many curriculums pack so much material into a single term that it may be hard to work on anything for longer than a few days. I'm not proposing doing anything outside the capabilities of a high school; however, performing experiments that take 2 or 3 weeks to design, implement and analyze (as opposed to an hour or two) would give teachers the opportunity to guide the implementation of a cool problem, and give the students a better idea as to how science is actually done, instead of mindlessly following a series of steps to obtain the desired result.
I'm sorry, I got a bit rambly and disorganized, but I wanted to throw some ideas out for discussion. I am also kind of establishing my own educational philosophy, and I sometimes am very dismayed about what I would be getting myself into if I became a teacher (rote memorization! Undisciplined students! etc.) So, to summarize: What would you change to encourage qualified applicants to teach high school physics? I propose more opportunities for teachers to be involved in solving interesting problems (through student-based experiments). Maybe this isn't a novel idea at all, maybe everyone's been doing this for 30 years already...my high school physics education was abysmal, so I wouldn't know...if there are any current teachers out there, perhaps you can enlighten me? Recognitions: Homework Help Science Advisor Money isn't necessarily the problem - 'earn much more in industry' isn't always true. For most people working in manufacturing industry (rather than finance) it isn't that well paid or that secure. From my experience of the teaching profession: Inflexible teaching qualifications. Have a B-Ed from a community college with a minor in nutrition or sport and you are a specialist science teacher. Have a physics PhD and a lifetime's teaching experience and you aren't qualified to be a classroom assistant. http://news.bbc.co.uk/2/hi/uk_news/e...on/3736942.stm Or the head of a top UK private school that wanted to teach in a state school after retirement - but with a maths PhD and 20 years' experience he wasn't 'qualified'. http://www.timesonline.co.uk/tol/lif...icle493145.ece Meanwhile they are recruiting nightclub bouncers as supply teachers. Teaching is entirely based around national standardized curriculum and SATS - deviation from the approved lesson plan is almost a crime. Since the school's (and your) future depends on exam scores and league tables - hard science courses are the first to be cut. If they are run you are under pressure to teach to the test and be careful to only allow a few star students into the exams (can't wreck the curve) But unionized jobs and seniority mean it doesn't matter how good you are - all you can hope for is to find a school where everyone else is near retirement (or has dangerous hobbies!) Don't be a male teacher. Men are effectively banned in primary schools. Even in secondary schools you have to constantly be on your guard never to be alone in a class with a student. One accusation and you will be suspended until it is investigated (a year or two) - even when you are cleared the accusation will show up on your CRB check. As will any rumours - "Enhanced Disclosure" allows any suspicions, even unreported, to be counted against you. Recognitions: Science Advisor It's not clear, but it sounds like you mean (here in the US) K-12 science teaching, as opposed to college/university. There have been a few attempts to increase the number of *qualified* science teachers - most recently, there's some noise about increasing STEM (Science, Technology, Engineering and Mathematics) teaching, but it's not clear what is actually proposed. And a while ago, a few states decided that if you have a PhD and wanted to teach public-school science, you did not need a teacher's certificate prior to entering the classroom.
It's not clear how successful that has been. None of that changes the fact that K-12 teachers are not required to have any real grounding in science knowledge; that is, just as it's possible to matriculate through a Physics program without any grounding (for example) in history, K-12 teachers are trained to *teach*- they don't get trained to be 'science teachers' any more than they get trained to be 'social studies teachers'. And, with the increasing importance of standardized testing, more and more of the school year is spent preparing for the various tests, rather than teaching substantive material. Furthermore, by High School, it's too late- the changes you are talking about need to occur at the elementary school level (grades K-6, 5-12 year olds) in order to get the students able to undertake 2 or 3 week long science projects by the time they get to High School. So, what can be done? I advocate that you volunteer to host a 'science day'. Once a year, I spend an hour or so in a couple of classrooms (primary schools), and talk about science. I ran an 'experiment' where I illuminated colored paper with colored light and had the students predict what color the paper would appear: for example, yellow paper illuminated with red light. The teachers *love* it, the students have fun, and maybe a few of them get interested in becoming scientists. ## How do we alleviate the shortage of qualified physics teachers? Reasons I'm not planning on being a teacher when I get my BS in about 2 months: 1. low wages, low opportunity for advancement, in industry there is a small chance to obtain personal glory for such as contribution to important technologies, obtaining patents etc. 2. becoming part of the prison-like high school environment I hated so much 3. high school physics teacher is like being a nerd minus the nerd I guess this goes under lack of prestige 4. no discipline among students, no desire to learn physics 5. lack of respect for the sciences among faculty and administration What can be done? Man I really have no idea I'll try to think about that and come back to this thread. Honestly I feel like more experimentally based curriculum would have little effect, when constrained by the stifling high school environment. So, to summarize:What would you change to encourage qualified applicants to teach high school physics? I propose more opportunities for teachers to be involved in solving interesting problems (through student-based experiments). There was a fairly sizable group of high school (IIRC) teachers at the national lab where I did my internship last summer, and they both participated in research and worked on designing educational modules. From what I remember, they were mostly hands-on activities, but they weren't anywhere at the level of sophistication that it seems you're thinking of. Part of me wonders if even average high school students wouldn't rise to a challenge (it might take an extraordinary teacher to motivate them, though) As for what to change to get people interested in teaching high school physics, my school has what I think is a great program that was started by the CU Boulder (http://stem.colorado.edu/la-program) I know of several students who joined the LA program with no intention of teaching and decided they enjoyed it enough to pursue it as a career. The possibility of a scholarship is also helpful ;) Recognitions: Gold Member Science Advisor Staff Emeritus Quite frankly, being both really good at teaching and knowing your subject well is a rare trait. 
Someone who knows their subject well and is really a cruddy teacher can still get away with teaching at a university level, because they 1) probably don't teach an entire course but just a few lectures in a team-taught course, 2) can bring in research funds that allow administration to turn a blind eye to their lack of teaching skills and 3) have more mature students who can somehow manage to learn from their books and notes in spite of the professor's lack of teaching skills. You can't get away with being a cruddy teacher in a secondary school; it's easier to get away with less subject content knowledge, because you only teach the students as much as you know how to teach. I've been sitting through a bunch of faculty candidate seminars recently, and wanting to bang my head against the wall. They might be fine at research, but have no place teaching, and I think our search committee has a serious challenge ahead of them, since I'm really not sure anyone I've seen so far belongs in a faculty appointment yet. I would dread them training students and think they all need to do another post-doc before being employable. Recognitions: Science Advisor Our local paper just announced that my institution (I didn't know this previously) is one of the first involved with the 'UTeach' program started at UT-Austin: math and science majors can graduate in four years as a fully-accredited teacher. The goal of UTeach is specifically to increase the number of competent math and science public school teachers. http://www.uteach-institute.org/ Mentor Quote by Andy Resnick And a while ago, a few states decided that if you have a PhD and wanted to teach public-school science, you did not need a teacher's certificate prior to entering the classroom. It's not clear how successful that has been. Anecdotally, not very. I had a student who decided to teach high school after his PhD. He got an alternative certification, and started to teach in the Chicago public school system. The environment was unsupportive. He was viewed by much of the rest of the faculty as a threat: those with a B.Ed. and a minor in nutrition like the system just fine as it is, and don't want to see the boat rocked. They were glad to see him go after about two years. He's since moved on to a suburban school, and has received many prestigious awards for teaching excellence. Quote by Moonbear I would dread them training students and think they all need to do another post-doc before being employable. How would doing another post-doc train them to become better teachers? Unless there was a teaching component to the position (highly unlikely, especially if there is a shortage of funds), then this will only give the candidates a good boost on their research CV, while doing nothing to improve their teaching abilities. Recognitions: Science Advisor Quote by Vanadium 50 Anecdotally, not very. I had a student who decided to teach high school after his PhD. He got an alternative certification, and started to teach in the Chicago public school system. The environment was unsupportive. He was viewed by much of the rest of the faculty as a threat: those with a B.Ed. and a minor in nutrition like the system just fine as it is, and don't want to see the boat rocked. They were glad to see him go after about two years. He's since moved on to a suburban school, and has received many prestigious awards for teaching excellence. Then I'd argue that he is a success story! I'm not naive enough to think I can reform the school system. 
Mentor Quote by Andy Resnick Then I'd argue that he is a success story! I suppose it depends on your definition of success. A school district with many good science teachers got another one, so that's success at some level. But the school district that needed one most acutely chased theirs away. Recognitions: Science Advisor I'd agree with that. I think producing qualified science teachers should be considered separate from installing those qualified teachers in underperforming school districts. I only have control over one of those goals. In the US there is the PhysTech program that aims to increase the recruitment of students to physics teaching programs and also improve the quality of their training. They seem to do some aggressive marketing and get positive results in terms of the size of enrollment. Their website is: http://www.phystec.org/. However I don't think they advocate significant changes in current physics teaching methodologies. Check out my new blog http://www.physttr.org which discusses aspects of physics teacher training. I found some news today on this subject about a program at Cornell for continuous training for teachers to tackle the problem: High School Physics Teachers Train at Cornell Shai Quote by marcusesses However, I have a hard time believing that salary is a major factor preventing qualified applicants from entering the profession, as starting salaries for assistant professors are not much better than those for new teachers. I think you are comparing maximum salaries for teachers with minimum salaries for professors. Also you have to look at salary evolution over time. You are just not going to make $150K teaching high school, whereas senior engineers can make that much money. However, my opinion is that the main reason for the shortage of qualified teachers is the lack of prestige in the position. That's tied to salary. If you make large amounts of money, you can buy prestige. This is arguable, but I can see no other reason why so few would pursue a stable, relatively stress-free environment with a good wage, such as teaching, while so many would pursue a stressful, low-paying profession with no guarantee that you will have a stable job in 5 or 10 years, such as being a graduate student. This is nonsense. Teaching is *NOT* stress-free. Teaching is one of the highest-stress jobs that you can find, and it's also quite low paying. People become graduate students because you aren't going to be a low-paid graduate student for the rest of your life. Are there any changes that can be made to persuade more people to choose physics teaching as a profession? Pay teachers more money. People don't like to hear this, but that's what it boils down to. I'm not proposing doing anything outside the capabilities of a high school; Yes you are. The problem is that if you want to teach experimental design, you are looking at an extremely high level of teaching skill, and there aren't enough teachers with this level of skill to go around. Also, it's easy to be a teacher if you have good motivated students, but what do you do about the students that *aren't* motivated and don't have high skills? So, to summarize: What would you change to encourage qualified applicants to teach high school physics? Pay high school teachers more money. Now the problem is that you get into issues of taxes and administration, but that's another issue. I propose more opportunities for teachers to be involved in solving interesting problems (through student-based experiments).
And that's why it's a blast to teach when you can choose your students. But you can't. Quote by Andy Resnick I'd agree with that. I think producing qualified science teachers should be considered separate from installing those qualified teachers in underperforming school districts. I only have control over one of those goals. If you create lots of good science teachers, but you can't get them into under-performing school districts, then I'd argue that you've failed, and it may have been a waste since those people that you trained may have done more social good elsewhere. Educational administration is tough. Teaching is really tough. Teaching and educational administration are tough because you have to deal with other people, and people can be prickly and irrational. One trait that I've noticed in scientists is that they often define the problem in ways in which the human element is removed. OK, you have a dysfunctional school system. Let's firebomb it and start from scratch. Except that you aren't in a position to fire everyone, and if you did, you'd be left with students sitting around doing nothing while you are trying to hire new teachers that haven't been trained. Quote by mgb_phys Inflexible teaching qualifications. Have a B-Ed from a community college with a minor in nutrition or sport and you are a specialist science teacher. Have a physics PhD and a lifetime's teaching experience and you aren't qualified to be a classroom assistant. That's because most physics Ph.D.'s with university teaching experience are simply incompetent at teaching elementary and secondary school without extra training. There are some programs by which Ph.D.'s can get educational certificates very quickly (i.e. within a year). Teaching is entirely based around national standardized curriculum and SATS - deviation from the approved lesson plan is almost a crime. Yes, and there is a reason for that. Part of the reason physics Ph.D.'s can make incompetent high school teachers is that a lot of elementary and secondary school involves following very precisely defined rules without questioning them, and that's something that Ph.D.'s are pretty bad at. Other people have decided what is important to teach and what is not, and it is not your place to question those decisions. If the board of education has decided that it is a bad idea to mention that the universe may be older than 6000 years old, then that's the way it is. If you get someone with a bachelor's in education to read from a book and follow the lesson plan, they'll do that. If you get a Ph.D. to do that, and the lesson plan makes no sense to them, they won't. Also Ph.D.'s tend to get bored much more easily. Elementary and secondary teachers often do exactly the same thing day after day, year after year, and it doesn't bother them to do exactly the same thing. Most Ph.D.'s go insane if you try to get them to do that sort of thing.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2594466209411621, "perplexity": 1188.0872113827866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368708946676/warc/CC-MAIN-20130516125546-00048-ip-10-60-113-184.ec2.internal.warc.gz"}
https://pub.uni-bielefeld.de/record/2940088
### Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients Liu W, Röckner M, Sun X, Xie Y (2020) JOURNAL OF DIFFERENTIAL EQUATIONS 268(6): 2910-2948. Journal article | Published | English No files have been uploaded. Publication record only! Author(s) Liu, Wei; Röckner, Michael (UniBi); Sun, Xiaobin; Xie, Yingchao Department Abstract / Notes This paper is devoted to studying the averaging principle for stochastic differential equations with slow and fast time-scales, where the drift coefficients satisfy local Lipschitz conditions with respect to the slow and fast variables, and the coefficients in the slow equation depend on time t and omega. Making use of the techniques of time discretization and truncation, we prove that the slow component strongly converges to the solution of the corresponding averaged equation. (C) 2019 Elsevier Inc. All rights reserved. Keywords Averaging principle; Local Lipschitz; Time-dependent; Strong convergence; Stochastic differential equations Year of publication 2020 Journal title JOURNAL OF DIFFERENTIAL EQUATIONS Volume 268 Issue 6 Page(s) 2910-2948 ISSN 0022-0396 eISSN 1090-2732 Page URI https://pub.uni-bielefeld.de/record/2940088 ### Cite Liu W, Röckner M, Sun X, Xie Y. Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients. JOURNAL OF DIFFERENTIAL EQUATIONS. 2020;268(6):2910-2948. Liu, W., Röckner, M., Sun, X., & Xie, Y. (2020). Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients. JOURNAL OF DIFFERENTIAL EQUATIONS, 268(6), 2910-2948. doi:10.1016/j.jde.2019.09.047 Liu, W., Röckner, M., Sun, X., and Xie, Y. (2020). Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients. JOURNAL OF DIFFERENTIAL EQUATIONS 268, 2910-2948. Liu, W., et al., 2020. Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients. JOURNAL OF DIFFERENTIAL EQUATIONS, 268(6), p 2910-2948. W. Liu, et al., “Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients”, JOURNAL OF DIFFERENTIAL EQUATIONS, vol. 268, 2020, pp. 2910-2948. Liu, W., Röckner, M., Sun, X., Xie, Y.: Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients. JOURNAL OF DIFFERENTIAL EQUATIONS. 268, 2910-2948 (2020). Liu, Wei, Röckner, Michael, Sun, Xiaobin, and Xie, Yingchao. “Averaging principle for slow-fast stochastic differential equations with time dependent locally Lipschitz coefficients”. JOURNAL OF DIFFERENTIAL EQUATIONS 268.6 (2020): 2910-2948. Open Data PUB ### Web of Science This record in Web of Science®
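A rough numerical illustration of the averaging principle stated in the abstract (not taken from the paper; the toy coefficients, step sizes and the specific slow-fast system below are our own choices) can be obtained by simulating a slow-fast pair alongside its averaged equation driven by the same noise:

```python
# Toy slow-fast system: dX = (Y - X) dt + 0.1 dW1,
#                       dY = (1/eps)(sin(X) - Y) dt + (1/sqrt(eps)) dW2.
# The fast process equilibrates around sin(X), so the averaged equation is
#                       dXbar = (sin(Xbar) - Xbar) dt + 0.1 dW1.
import numpy as np

def slow_fast_vs_averaged(eps, T=5.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x, y, xbar = 1.0, 0.0, 1.0
    max_gap = 0.0
    for _ in range(n):
        dw1 = np.sqrt(dt) * rng.standard_normal()
        dw2 = np.sqrt(dt) * rng.standard_normal()
        x, y = (x + (y - x) * dt + 0.1 * dw1,
                y + (np.sin(x) - y) * dt / eps + dw2 / np.sqrt(eps))
        xbar = xbar + (np.sin(xbar) - xbar) * dt + 0.1 * dw1   # averaged equation
        max_gap = max(max_gap, abs(x - xbar))
    return max_gap

for eps in (0.1, 0.01, 0.001):
    print(eps, slow_fast_vs_averaged(eps))   # the gap should shrink as eps -> 0
```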
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8342356085777283, "perplexity": 5052.4513757886725}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571584.72/warc/CC-MAIN-20220812045352-20220812075352-00031.warc.gz"}
https://vetmed.oregonstate.edu/biblio/characterization-vibrio-cholerae-aerotaxis
Title: Characterization of Vibrio cholerae aerotaxis.
Publication Type: Journal Article
Year of Publication: 2007
Authors: Boin, MA; Häse, CC
Journal: FEMS Microbiology Letters
Volume: 276
Issue: 2
Pagination: 193-201
Date Published: 2007 Nov
Keywords: Vibrio cholerae
Abstract: The ability to move toward favorable environmental conditions, called chemotaxis, is common among motile bacteria. In particular, aerotaxis has been extensively studied in Escherichia coli and was shown to be dependent on the aer and tsr genes. Three putative aer gene homologs were identified in the Vibrio cholerae genome, designated aer-1 (VC0512), aer-2 (VCA0658), and aer-3 (VCA0988). Deletion analyses indicated that only one of them, aer-2, actively mediates an aerotaxis response, as assayed in succinate soft agar plates as well as a capillary assay. Complementation studies confirmed that Aer-2 is involved in aerotaxis in V. cholerae. In addition, overexpression of aer-2 resulted in a marked increase of the aerotactic response in soft agar plates. No observable phenotypes in V. cholerae mutants deleted in the aer-1 or aer-3 genes were detected under standard aerotaxis testing conditions. Furthermore, the V. cholerae aer-1 and aer-3 genes, even when expressed from a strong independent promoter, did not produce any observable phenotypes. As found in other bacterial species, the results presented in this study indicate the presence of a secondary aerotaxis transducer in V. cholerae.
DOI: 10.1111/j.1574-6968.2007.00931.x
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8028807044029236, "perplexity": 18887.735900082764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00627.warc.gz"}
http://math.stackexchange.com/questions/31492/solving-forall-x-in-mathbbr-fx-f-left-frac1x-right
# Solving $\forall x \in \mathbb{R}_+^*, f'(x) = f\left(\frac1{x}\right)$ I recently came across this equation : $$\forall x \in \mathbb{R}_+^*, f'(x) = f\left(\frac1{x}\right)$$where $f \in \mathcal{C}^1(\mathbb{R}, \mathbb{R})$. I've done the following, but I'm stuck at the end. Could you give me pointers? Thanks! Differentiating yields $$\forall x, f''(x) = -\frac1{x^2}f(x) \tag{S_0}$$Solutions in the form $$x \mapsto \frac1{x^\phi}$$ work iff $\phi(\phi+1) = -1$, ie. $\phi = \frac{-1 \pm i \sqrt{3}}{2} =e^{\pm 2i\pi/3} = j, \overline{j}$. Elements of the vector space generated by the free pair $(x^j, x^\overline{j})$ are therefore solutions of ($S_0$). I then feed $\lambda x^j + \mu x^\overline{j}$ in the original equation, which yields $-\lambda j\frac1{x^{j+1}}-\mu\overline{j}\frac1{x^{\overline{j} + 1}} = \frac{x^{j + \overline{j}}}{\lambda x^\overline{j} + \mu x^j}$, then $(-\lambda j x^{\overline{j}+1} - \mu \overline{j} x^{j+1})(\lambda x^{\overline{j}} + \mu x^j) = x^{1+j+\overline{j}} = x^0 = 1$, and $-\lambda^2 j x^{2\overline{j} + 1} - \mu^2 \overline{j} x^{2j+1} - \lambda\mu(j + \overline{j}) = 0$. Thus, $$\lambda^2 j x^{-2i\sin(2\pi/3)} + \mu^2 \overline{j} x^{2i\sin(2\pi/3)} = \lambda\mu$$ Does that mean that no solutions can be found to the original equation, except the trivial $x \mapsto 0$ one? Or that I didn't take the right approach? I can't figure out how to handle the last equality. - Sorry for the size of the equations; it seems that \displaystyle doesn't work. Can someone help me make this look better, or tell me how to do so? Thanks! –  Clément Apr 7 '11 at 8:21 @Clément: Is this what you wanted? All I did was change some of your single-dollar signs to double-dollar signs. –  TonyK Apr 7 '11 at 8:30 I think $(S_0)$ is wrong. The argument on the right-hand side should still be $1/x$. –  joriki Apr 7 '11 at 8:44 @TonyK: Thanks! –  Clément Apr 7 '11 at 8:48 Suggestion for a different solution: I think that if you put $g(x)=f(x)+f(1/x)$ and $h(x)=f(x)-f(1/x)$ then you get $g'(x)=g(x)$ and $h'(x)=-h(x)$. You can solve these two equations and obtain $f(x)$ from it. –  Martin Sleziak Apr 7 '11 at 8:51 There is a mistake in Clément's calculation: The Eulerian differential equation $y''+y/x^2=0$ has solutions of the form $y(x)=x^\lambda$ (resp. $=\exp(\lambda\log x)$ ) where $\lambda$ satisfies the "index equation" $\lambda(\lambda-1)+1=0$, so $\lambda={1\over2}\pm i{\sqrt3\over2}$. The general solution is $$f(x)=c_1\exp(\lambda_1\log x)+c_2\exp(\lambda_2\log x)\>.$$ If we confront this with the original functional equation $f'(x)=f(1/x)$ then we see that the latter even has real solutions, namely $$f(x)=C\>\sqrt{\mathstrut x}\>\cos\Bigl({\sqrt3\over2}\log x-{\pi\over6}\Bigr)\>,\qquad C\in{\mathbb R}.$$ Of course it is easy to check a posteriori that these are indeed solutions. - You're right, actually forgot a minus sign in my calculation. I calculated $\phi$ using the $1/x^\phi$, and plugged $x^\phi$ in the equation. Thanks for your answer! –  Clément Apr 7 '11 at 22:24 I don't understand how you got the equation after "in the original equation, which yields"; it seems there might be something wrong there but I'm not sure exactly what you did. 
I think this all gets a bit easier if you transform to $y=\ln x$ and $g(y)=f(x)$; then the condition reads $$g'(y)=\mathrm{e}^yg(-y)\;,$$ and differentiating as you did yields $$g''(y)=g'(y)-g(y)\;.$$ The solutions of the characteristic equation are the same $j,\overline{j}$ that you got, so the original equation becomes $$\left(c_1\mathrm{e}^{jy}+c_2\mathrm{e}^{\overline{j}y}\right)'=\mathrm{e}^y\left(c_1\mathrm{e}^{-jy}+c_2\mathrm{e}^{-\overline{j}y}\right)\;.$$ Since $1-j=\overline{j}$ and $1-\overline{j}=j$, this is satisfied if $jc_1=c_2$ and $\overline{j}c_2=c_1$, and these conditions are actually equivalent, since $j\overline{j}=1$. So the solution is $$c \left(\mathrm{e}^{jy}+j\mathrm{e}^{\overline{j}y}\right)=c\left(x^j+jx^{\overline{j}}\right)\;.$$ For this to be real, we must have $c=b/\sqrt{j}$ with $b\in\mathbb{R}$, and thus $$f(x)=a\Re\left(x^j/\sqrt{j}\right)$$ with $a\in\mathbb{R}$. Plugging this back into the original equation shows that this is indeed a solution. - By the way, the problem is a lot more interesting than it looks at first sight :-) –  joriki Apr 7 '11 at 10:00 Thanks for your thorough explanation and solution; it is a pretty approach indeed, one which I didn't think of at all. Nice work! As you may have seen in my comment to Christian's answer, my mistake is that I plugged $x^j$ instead of $x^{-j}$, which gave me the eventual impression that there were no solutions. Thanks again for your help! –  Clément Apr 7 '11 at 22:26 i don't know if i can comment on clement's question so i will write my answer here. what you have shown is that any solution to $\frac{df(x)}{dx} = f(\frac{1}{x})$ satisfies the Cauchy-Euler equation $\frac{d^2 f(x)}{dx^2}=-\frac{1}{x^2} f(x)$ whose solutions are $\sqrt x \cos(\sqrt 3 \ln x /2)$ and $\sqrt x \sin(\sqrt 3 \ln x /2)$. the trouble, i think, is that the converse statement, that every solution of the Cauchy-Euler equation satisfies $\frac{df(x)}{dx} = f(\frac{1}{x})$, is false. - Where did you read that $f''(x)=-f(x)$? –  Did Apr 7 '11 at 15:19 i meant to write $\frac{d^2 f}{dx^2} = -\frac{1}{x^2}f(x)$ making it a Cauchy-Euler equation. –  abel Apr 7 '11 at 16:56 Hi abel, thanks for your help! In fact, the reasoning only showed that the condition obtained was necessary, but not a priori sufficient. Hence my question, since it seemed to me that no combination of the Euler equation was a solution to the initial problem, and I wanted to make sure that there was no mistake; but the actual mistake was pretty trivial: I swapped a $x^j$ and $x^{-j}$ :/ –  Clément Apr 7 '11 at 22:32
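As a quick numerical sanity check on the closed form given in the answers (illustrative only: the constant is taken as $C=1$ and the finite-difference step is an arbitrary choice), one can verify that $f(x)=\sqrt{x}\,\cos\bigl(\tfrac{\sqrt3}{2}\ln x-\tfrac{\pi}{6}\bigr)$ does satisfy $f'(x)=f(1/x)$:

```python
import math

def f(x):
    # Candidate solution from the answers above, with C = 1
    return math.sqrt(x) * math.cos((math.sqrt(3) / 2) * math.log(x) - math.pi / 6)

def fprime(x, h=1e-6):
    # Central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# f'(x) and f(1/x) should agree up to finite-difference error
for x in [0.3, 1.0, 2.5, 7.0]:
    print(f"x = {x}: f'(x) ~ {fprime(x):.6f}, f(1/x) = {f(1 / x):.6f}")
```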
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9712395668029785, "perplexity": 242.83218460360027}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645378542.93/warc/CC-MAIN-20150827031618-00023-ip-10-171-96-226.ec2.internal.warc.gz"}
https://proj.org/operations/projections/sinu.html
# Sinusoidal (Sanson-Flamsteed)

Classification: Pseudocylindrical
Available forms: Forward and inverse, spherical and ellipsoidal
Defined area: Global
Alias: sinu
Domain: 2D
Input type: Geodetic coordinates
Output type: Projected coordinates

MacBryde and Thomas developed generalized formulas for several of the pseudocylindricals with sinusoidal meridians:

$x = C\lambda(m + \cos\theta) / (m + 1)$

$y = C\theta$

$C = \sqrt{(m + 1)/n}$

## Parameters

Note: All parameters are optional for the Sinusoidal projection.

+lon_0=<value>: Longitude of projection center. Defaults to 0.0.
+R=<value>: Radius of the sphere given in meters. If used in conjunction with +ellps, +R takes precedence.
+x_0=<value>: False easting. Defaults to 0.0.
+y_0=<value>: False northing. Defaults to 0.0.
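As an illustrative usage sketch (assuming the pyproj Python bindings to PROJ are available; the sample point and the sphere radius are arbitrary choices, not part of the documentation above):

```python
from pyproj import Proj

# Sinusoidal projection on a sphere of radius 6371000 m, central meridian at 0
p = Proj(proj="sinu", lon_0=0, R=6371000)

# Forward: geodetic (lon, lat) in degrees -> projected (x, y) in metres
x, y = p(12.0, 55.0)

# Inverse: projected (x, y) -> geodetic (lon, lat)
lon, lat = p(x, y, inverse=True)
print(x, y, lon, lat)
```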
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8985742330551147, "perplexity": 24061.507613059817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057424.99/warc/CC-MAIN-20210923135058-20210923165058-00346.warc.gz"}
https://vkusninzja.ru/en/stati/6oQLRZT.html
# Yandex Dzen

Anyone who has made even a little progress in studying chemistry runs into the concept of the "mole". True, most people immediately think of the moths that ate a fur coat in the closet over the summer (in Russian the same word means both the insect and the chemical unit), but the mole in chemistry is a completely different story. Let's figure it out.

So, let's look at a chemical reaction. For example, this one:

H2 + F2 = 2HF

Here 1 molecule of hydrogen H2 reacts with 1 molecule of fluorine F2, and 2 molecules of hydrogen fluoride are produced. As a reminder, the number of molecules or atoms that react or are produced in a reaction is given by the coefficient, that is, the number written in front of the formula of the substance. In our example there is nothing in front of the hydrogen, but we could in fact put a 1 there: we need 1 hydrogen molecule. There is also nothing in front of the fluorine, which means we need 1 fluorine molecule. But in front of hydrogen fluoride HF there is a two. This means we get 2 molecules of hydrogen fluoride. That is:

H2 + F2 = 2HF is the same as: 1 molecule of H2 + 1 molecule of F2 = 2 molecules of HF.

But you know that molecules are so small that we cannot see them. How, then, do we count the molecules that react? For this, the concept of the mole was introduced.

A mole is the amount of substance that contains as many particles as there are atoms in 12 grams of carbon with atomic mass 12. This is a rather clumsy definition, but it has to be remembered. There is a pleasant detail: one mole of any substance contains Avogadro's number of particles. Such a number is hard to picture: just think, a billion is 1,000,000,000, and one mole contains 6.02 * 10^23 particles! (To avoid nightmares at night, simply remember 6.02 * 10 to the twenty-third power.)

So: one mole of any substance contains 6.02 * 10^23 particles. But we know that atoms of different substances are built differently and therefore have different masses. That is why the mass of one mole differs from substance to substance.

To sort this out, let's go to the dacha and do an experiment. We remember that 1 mole is always the same number of particles (6.02 * 10^23). Such numbers do not occur in everyday life, so let's take a smaller number, say 100. That will be our makeshift experimental "mole". Now we put 100 cherries in one pile, 100 pears in another, and 100 watermelons in a third. Each pile is 1 "mole". We conscientiously put the same number of items in each pile, right? But the items are of different kinds: cherries in one pile, pears in another, watermelons in the third. Now let's weigh them. Do you think 100 cherries, 100 pears and 100 watermelons will have the same mass? Of course not. Note: the number of items in each pile is the same, but the piles weigh differently. Why? Because the items are different!

In chemistry everything is the same. If you take 1 mole of hydrogen, 1 mole of oxygen and 1 mole of sodium, their masses will be different (remember the trip to the dacha). And that matters.
But now a legitimate question arises: how do we find out what the mass of 1 mole of hydrogen, 1 mole of oxygen, 1 mole of sodium, or indeed of any substance is? For this the concept of molar mass was introduced.

The molar mass is simply the mass of 1 mole of a substance. How do we determine it? Easily: it is the atomic or molecular mass of the substance, which we look up using the Mendeleev (periodic) table. The molar mass is denoted by the letter M and is expressed in g/mol (simply because it shows how many grams 1 mole weighs).

Examples from a chemistry textbook.

### Example 1. Find the mass of one mole (that is, the molar mass) of aluminium.

We open the periodic table and see that the atomic mass of aluminium is 27. The formula of the simple substance aluminium is just Al, that is, there is one atom. Consequently, the molar mass of aluminium coincides with its atomic mass and equals 27 g/mol.

### Example 2. Find the molar mass of fluorine.

Under normal conditions fluorine is a gas, so the fluorine molecule consists of two atoms and looks like this: F2. In the periodic table we find fluorine and see that its atomic mass is 19. Therefore the molar mass of fluorine is 2 * 19 = 38 g/mol.

### Example 3. Find the molar mass of calcium oxide.

The formula of calcium oxide is CaO. We look at the table again: the atomic mass of calcium is 40, the atomic mass of oxygen is 16. The molar mass of calcium oxide is 40 + 16 = 56 g/mol.

### Example 4. Find the molar mass of silicon oxide.

The formula of silicon oxide is SiO2. The Mendeleev table tells us that the atomic mass of silicon is 28 and that of oxygen is 16. Be careful, there is a trick in this problem! The oxide formula contains two oxygen atoms; be sure to take this into account so that the answer comes out right. It goes like this: the molar mass of silicon oxide is 28 + 16 * 2 = 60 g/mol. (16 is the mass of one oxygen atom; the formula has two of them, so we multiplied 16 by 2!)

### Example 5. A harder example, from a chemistry tutor.

I recommend working through it carefully so that everything becomes finally clear. So, answer this: what is the molar mass of sulfuric acid?

Here you have to concentrate and not get confused. The formula of sulfuric acid is H2SO4, that is, we have:

· 2 hydrogen atoms
· 1 sulfur atom
· 4 oxygen atoms.

We look at the periodic table and find the atomic masses:

· atomic mass of hydrogen: 1
· atomic mass of sulfur: 32
· atomic mass of oxygen: 16.

Now the calculation:

2 hydrogen atoms + 1 sulfur atom + 4 oxygen atoms
2 * 1 + 1 * 32 + 4 * 16

In this expression each term has the number of atoms of an element as the first factor and that element's atomic mass as the second. Then it is just arithmetic: 2 * 1 + 1 * 32 + 4 * 16 = 98. And yes, the molar mass of sulfuric acid is 98 g/mol.

I am sure you can now tell the moth in the closet apart from the mole in chemistry. Next we will work out how to weigh these "moles" on ordinary scales. Please write in the comments what remained unclear, and I will definitely give additional explanations. Feel free to complain about difficulties in learning the school course and tell me what scared you in the chemistry textbook; then the next article will be about exactly that problem.

## Molar mass

Atoms and molecules are the smallest particles of a substance, so as a unit of measurement one can take the mass of one chosen atom and express the masses of the other atoms relative to it. So what is molar mass, and what is its dimension?

## What is a molar mass?
The founder of the theory of atomic masses was the scientist Dalton, who compiled a table of atomic masses and took the mass of the hydrogen atom as the unit.

The molar mass is the mass of one mole of a substance. A mole, in turn, is the amount of substance that contains a definite number of the smallest particles taking part in chemical processes. The number of molecules contained in one mole is called Avogadro's number. This value is a constant and does not change.

Fig. 1. Avogadro's number.

Thus, the molar mass of a substance is the mass of one mole, which contains 6.02 * 10^23 elementary particles. Avogadro's number received its name in honour of the Italian scientist Amedeo Avogadro, who proved that equal volumes of gases always contain the same number of molecules.

In the international SI system the molar mass is measured in kg/mol, although in practice this quantity is usually expressed in g/mol. It is denoted by the letter M, and the formula for the molar mass looks like this: M = m / n, where m is the mass of the substance and n is the amount of substance (in moles).

Fig. 2. Calculation of the molar mass.

## How to find the molar mass of a substance?

The table of D. I. Mendeleev will help us calculate the molar mass of a substance. Take any substance, for example sulfuric acid. Its formula is H2SO4. Now we turn to the table and see what the atomic mass of each of the elements in the acid is. Sulfuric acid consists of three elements: hydrogen, sulfur and oxygen. Their atomic masses are respectively 1, 32 and 16. It turns out that the total molecular mass is 98 atomic mass units (1 * 2 + 32 + 16 * 4). Thus we have found out that one mole of sulfuric acid weighs 98 grams.

The molar mass of a substance is numerically equal to its relative molecular mass if the structural units of the substance are molecules. The molar mass may also be equal to the relative atomic mass if the structural units of the substance are atoms.

Until 1961 the oxygen atom was taken as the basis of the atomic mass unit, though not the whole atom but 1/16 of it. At that time the chemical and physical units of mass were not the same: the chemical one was 0.03% larger than the physical one. Currently a unified measurement system is used in physics and chemistry. As the standard atomic mass unit, 1/12 of the mass of the carbon atom has been chosen.

Fig. 3. Formula of the unit of the atomic mass of carbon.

The molar mass of any gas or vapour is very easy to measure. It is enough to use Avogadro's law: equal volumes of gases at the same temperature (and pressure) contain the same amount of substance. A well-known way of measuring the volume of a vapour is to determine the amount of displaced air; such a measurement is carried out using a side tube leading to the measuring device.

The concept of molar mass is very important for chemistry. Calculating it is necessary for creating polymer complexes and for many other reactions. In pharmaceuticals the molar mass is used to determine the concentration of a given substance in a preparation. The molar mass is also important in biochemical studies (of metabolic processes). Nowadays, thanks to the development of science, the molecular masses of almost all components of blood, including haemoglobin, are known.

## What have we learned?

"Molar mass of a substance" is an important topic in the 8th-grade chemistry course. Molar mass is an important physical and chemical concept.
The molar mass is a characteristic of a substance: the ratio of the mass of the substance to the number of moles of that substance, that is, the mass of one mole. It is measured in kg/mol or g/mol.
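The molar-mass bookkeeping described above is easy to mirror in a few lines of code. This is only a sketch: the element symbols, the helper function and the rounded atomic masses are the ones used in the article's examples, not a general-purpose chemistry library.

```python
# Rounded atomic masses from the article's examples, in g/mol
ATOMIC_MASS = {"H": 1, "O": 16, "S": 32, "Si": 28, "Ca": 40, "Al": 27, "F": 19}

def molar_mass(formula):
    """Molar mass of a compound given as an {element: atom count} mapping."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

print(molar_mass({"H": 2, "S": 1, "O": 4}))  # H2SO4 -> 98
print(molar_mass({"Si": 1, "O": 2}))         # SiO2  -> 60
print(molar_mass({"Ca": 1, "O": 1}))         # CaO   -> 56
```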
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8602705001831055, "perplexity": 877.3550596283784}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487607143.30/warc/CC-MAIN-20210613071347-20210613101347-00418.warc.gz"}
https://learn.redhat.com/t5/General/How-do-you-remember-commands/m-p/16225
## How do you remember commands?

So while preparing for the RHCSA: to configure Stratis you need the following in /etc/fstab: x-systemd.requires=stratisd.service. Yet if you do a man on stratisd you won't get that option. On the exam you don't have internet access, so if you forget it how would you proceed? Just forfeit the task and get 0 points for it?

## Re: How do you remember commands?

This is how I do it. In general, for each command, option or task, I try to find the related manual pages for guidance or HINTS. You may remember the command name but not ALL the options or the syntax order. For the case you mention, I think like this:

1) The Stratis file system cannot be mounted unless the stratisd daemon is active. So it REQUIRES setting a condition in the fstab for the FS to be mounted only when stratisd is active. What is the syntax of this condition? That takes me to step 2.

2) stratis is managed by the systemd daemon (like all services). So it is something related to systemd. And fstab is about mounting. So the man page for this should be related to systemd and mounting. This is # man 5 systemd.mount (attention: there is also # man 1 systemd-mount). To get a hint about the man page name, you can run man -k mount.

3) I open man 5 systemd.mount, press / for search, type "requires" (from step 1), Enter. I find "x-systemd.requires". Then I ask myself: what does it require? We mentioned this at step 1, the stratisd service.

4) If I cannot remember "stratisd.service", I run the command below (the string appears two times). We still need to run this command before # stratis create ... as a prerequisite anyway: I have to check that stratisd is active and enabled at boot, just like you check that NetworkManager is active before you operate on network connections.

[root@server-base ~]# systemctl status stratisd
● stratisd.service - Stratis daemon
   Active: active (running) since Thu 2020-12-17 17:44:53 EET; 3s ago
     Docs: man:stratisd(8)
 Main PID: 953 (stratisd)
   Memory: 5.6M
   CGroup: /system.slice/stratisd.service
           └─953 /usr/libexec/stratisd --debug

5) In the end it gives: x-systemd.requires=stratisd.service. I hope it helps.

## Re: How do you remember commands?

Well, it's all about continuous practice, my friend. Sometimes you can use configuration files and man pages to help remember the commands, but sometimes it comes down to your practice and memory.
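For context, the kind of /etc/fstab line being discussed would look roughly like the sketch below. The UUID and mount point are made-up placeholders; a Stratis filesystem is mounted with type xfs, and the x-systemd.requires option makes systemd start stratisd before attempting the mount.

```
# /etc/fstab (illustrative entry only)
UUID=0f21b8c1-5f52-4b7a-9d2e-3a4b5c6d7e8f  /mnt/stratisfs  xfs  defaults,x-systemd.requires=stratisd.service  0 0
```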
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8467743396759033, "perplexity": 5136.89010327384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153971.20/warc/CC-MAIN-20210730154005-20210730184005-00116.warc.gz"}
http://spark.rstudio.com/reference/sparklyr/latest/ml_model.html
# Create an ML Model Object

## Usage

ml_model(class, model, ..., .call = sys.call(sys.parent()))

## Arguments

class: The name of the machine learning routine used in the encompassing model. Note that the generated model name will be of the form ml_model_<class>; that is, ml_model_ is prefixed to the supplied class.

model: The underlying Spark model object.

...: Additional model information; typically supplied as named values.

.call: The R call used in generating this model object (i.e., the top-level R routine that wraps over the associated Spark ML routine). Typically used for print output in e.g. print and summary methods.

## Description

Create an ML model object, wrapping the result of a Spark ML routine call. The generated object will be an R list with S3 classes c("ml_model_<class>", "ml_model").
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1983373761177063, "perplexity": 7857.428590911196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123530.18/warc/CC-MAIN-20170423031203-00047-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.burrata.co.za/ctqtk8gx/a5e8f3-insincerity-meaning-in-urdu
# insincerity meaning in urdu meaning in different languages. Here you can check all definitions and meanings of In the age of digital communication, any person should learn and understand multiple languages for better communication. Search meanings in Urdu to get the better understanding of the context. Definition of insincerity in the Definitions.net dictionary. The synonyms of Insincerity include are Deceitfulness, Deception, Dishonesty, Falsity, Hypocrisy, Lies and Pretense. Insincerity ظاہر داری - bunk / claptrap / hypocrisy / insincerity / ostentation / plausibility / show - Find meaning and translation in Urdu to English to Urdu dictionary having thousands of Words - ہزاروں الفاظ کی انگریزی سے اردو اور اردو سے انگریزی ڈکشنری Shifty eyes. Learn more. Insincerity منافقت. times till Jane detected the … Characterized by insincerity or deceit; evasive. The page not only provides Urdu meaning of Insight but also gives extensive definition in English language. Insincerity is a noun, plural insincerities for 2 by form. synonym words The definition of Insincere is followed by practically usable example sentences which allow you to construct your own sentences based on it. You can get more than one meaning for one word in Urdu. / ˌɪn.sɪnˈser.ə.ti / the action or practice of pretending to feel something that you do not really feel, or not meaning what you say: There’s often a bit of insincerity in these speeches. “The great enemy of clear language is insincerity. Learn more. You can get more than one meaning for one word in Urdu. Hypocritical : منافقانہ Munafqana : professing feelings or virtues one does not have. There are many synonyms of Be Insincere which include Abide, Act, Breathe, Continue, Do, Endure, Hold, Inhabit, Last, Live, Move, Obtain, Persist, Remain, Rest, Stand, Stay, Subsist, Survive, Prevail, Go On, Be Alive, Have Being, etc. Insincerity Meaning In Hindi. The Urdu Word منافقت Meaning in English is Insincerity. Insincerity Urdu meaning of Insight is بصیرت, it can be written as Baseerat in Roman Urdu. insincerity \insin*cer"i*ty\ (? You have searched the English word Find more French words at wordhippo.com! On the other side, you can also make Insipid sentence in Urdu as several English words are also used in the English language. translation in both Urdu and Roman Urdu language. Translation is "Munafiqat" Insincerity There are always several meanings of each word in Urdu, the correct meaning of Sincerity in Urdu is سچائی, and in roman we write it Sachai. meaning in Urdu has been searched Check out Insincerity similar words like ; Insincerity Urdu Translation is النفاق. Another word for insincerity: deceitfulness, hypocrisy, pretence, dishonesty, lip service | Collins English Thesaurus The page not only provides Urdu meaning of Insinuate but also gives extensive definition in English language. In addition to it, the knowledge about the origin, pronunciation, and synonyms of a word allows them to find similar words or phrases. It helps you understand the word Insincerity with comprehensive detail, no other web page in our knowledge can explain Insincerity better than this page. Access other dictionaries such as English to Arabic, English to French, and English to Hindi to check the ٹیڑھا: 2) devious. Sincerity Urdu Meaning - Find the correct meaning of Sincerity in Urdu, it is important to understand the word properly when we translate it from English to Urdu. Insinuate : introduce or insert (oneself) in a subtle manner. ), n. [cf. 
29756 (twenty-nine thousand seven hundred and fifty-six) Utilize the online English to Urdu dictionary to check the Urdu meaning of English word. "Mary opened the car door", Character Lineament Quality : خوبی Khobi : a characteristic property that defines the apparent individual nature of something. Search meanings in Urdu to get the better understanding of the context. Munafiqat How to say insincerity in English? People often want to translate English words or phrases into Urdu. "What quality does it possess ? The example sentences play a good role in this regard. The definition of Insincerity is followed by practically usable example sentences which allow you to construct your own sentences based on it. Meaning of enchanted. A devious character. See more. In Taylor swifts song Enchanted she says I was Echanted to meet you. The page not only provides Urdu meaning of Insincerity but also gives extensive definition in English language. The page not only provides Urdu meaning of Insincere but also gives extensive definition in English language. ", True Truthful : صادق Sadaq : expressing or given to expressing the truth. Find more ways to say insincerity, along with related words, antonyms and example phrases at Thesaurus.com, the world's most trusted free thesaurus. دھوکے بازی Dhuky Bazi : Falseness Hollowness Insincerity : (noun) the quality of not being open or truthful; deceitful or hypocritical. Similar words of Insincerity includes as Insincerity, where Munafiqat translation in Urdu is منافقت. the quality of being insincere; want of sincerity, or of being in reality what one appears to be; dissimulation; hypocritical; deceitfulness; hollowness; untrustworthiness; as, the insincerity of a professed friend; the insincerity of professions of regard. insincerity definition: 1. the action or practice of pretending to feel something that you do not really feel, or not…. What does insincerity mean? insincérité.] enchant definition: 1. to attract or please someone very much: 2. to have a magical effect on someone or something 3…. You can also find multiple synonyms or similar words of Insincerity. Jan 19, 2021. which means “منافقت” Falseness. The definition of Insight is followed by practically usable example sentences which allow you to construct your own sentences based on it. Another word for Opposite of Meaning of Rhymes with Sentences with Find word forms Translate from English Translate to English Words With Friends Scrabble Crossword / Codeword Words starting with Words ending with Words containing exactly Words containing letters Pronounce Find conjugations Find names "Hypocritical praise", Non Not : نہیں Nahi : negation of a word or group of words. and Explore 155 Sincerity Quotes by authors including Charles Spurgeon, Douglas Adams, and Mencius at BrainyQuote. "Will not go like that", Open Open Up : کہولنا Kholna : cause to open or to become open. The page not only provides Urdu meaning of Enchantment but also gives extensive definition in English language. Insincerity meaning in Urdu is Munafiqat - Synonyms and related Insincerity meaning is Falseness. The synonyms and antonyms of Insincerity are listed below. All of this may seem less if you are unable to learn exact pronunciation of Insincerity, so we have embedded mp3 recording of native Englishman, simply click on speaker icon and listen how English speaking people pronounce Insincerity. Urdu meanings, examples and pronunciation of devious. 
Here, you can check in Urdu.Insincerity In the modern world, there is a dire need for people who can communicate in different languages. You have searched the English word Sincerity which means “خلوص” Khalos in Urdu.Sincerity meaning in Urdu has been searched 23755 (twenty-three thousand seven hundred and fifty-five) times till Dec 19, 2020. f. Pronunciation of insincerity with 1 audio pronunciation, 13 synonyms, 1 antonym, 12 translations, 2 sentences and more for insincerity. in Urdu writing script is Check out Insincerity similar words like ; Insincerity Urdu Translation is Munafiqat منافقت. Insincerity. insincere definition: 1. pretending to feel something that you do not really feel, or not meaning what you say: 2…. It is written as Dhokhebāzī in Roman Hindi. We hope this page has helped you understand Insincerity in detail, if you find any mistake on this page, please keep in mind that no human being can be perfect. 1. Meaning of insincerity. Insincerity meaning in Arabic is النفاق - Synonyms and related Insincerity is Falseness. Information and translations of insincerity in the most comprehensive dictionary definitions resource on the web. INSINCERITY MEANING IN ARABIC. Insincerity Insincerity definition, the quality of being insincere; lack of sincerity; hypocrisy; deceitfulness. Pronunciation roman Urdu is "Munafiqat" and Translation of دھوکے بازی Dhuky Bazi : Falseness Hollowness Insincerity : (noun) the quality of not being open or truthful; deceitful or hypocritical. You can get more than one meaning for one word in Urdu. "A true statement". Another word for insincerity. The Insipid meaning in Urdu will surely enhance your vocabulary. When there is a gap between one’s real and one’s declared aims, one turns as it were instinctively to long words and exhausted idioms, like a cuttlefish spurting out ink.” Search meanings in Urdu to get the better understanding of the context. nishthaaheenata insincerity Find more words! However, it will allow you to learn the appropriate use of Insincerity in a sentence. The other similar words are Zahirdari and Munafqat. You … Be Insincere Meaning in English to Urdu is ریاکار ہونا, as written in Urdu and , as written in Roman Urdu. Insincerity Insincere Insignificant Insight Insidiously Insidious Inside Inshore Insinuate Insipid Insipidly Insist Insistence Insistency Insistent Insobriety Insolate Insolation Insole Insolence. Insincerity Meaning in Hindi is धोखेबाज़ी. However, a person feels better to communicate if he/she has sufficient vocabulary. devious meaning in Urdu (Pronunciation -تلفظ سنیۓ ) US: 1) devious ... Oblique political maneuvers. Munafiqat meaning in English is Insincerity and Munafiqat or Insincerity synonym is Falseness. Antonyms for insincerity include sincerity, honesty, genuineness, sincereness, truthfulness, directness, faithfulness, frankness, openness and truth. Munafiqat Meanings of Insincerity النفاق - synonyms and related Insincerity is Falseness Insincere ; of... Being Insincere ; lack of sincerity ; Hypocrisy ; Deceitfulness you to learn the use... Expressing or given to expressing the truth the page not only provides Urdu of... The context meaning is Falseness the Insipid meaning in Arabic is النفاق synonyms!, open open Up: کہولنا Kholna: cause to open or truthful ; or... A word or group of words Insistence Insistency Insistent Insobriety Insolate Insolation Insole Insolence your own sentences based it... Dishonesty, Falsity, Hypocrisy, Lies and Pretense meaning of Insincerity in the language... 
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3742605745792389, "perplexity": 12085.590300215708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989820.78/warc/CC-MAIN-20210518033148-20210518063148-00430.warc.gz"}
https://electronics.stackexchange.com/questions/404808/3-phase-380-v-to-3-phase-230-v/404884
# 3-phase 380 V to 3-phase 230 V I have a portable bearing heater which works with 3-phase 230 V power supply. My power supply is 3-phase 380 V. Is there any way to convert the 3-phase 380 V to 3-phase 230 V? Please note that since the equipment is portable, it is important that the solution be portable too. I added the picture of wiring diagram of the equipment. The manual indicates: The equipment is designed for 3 phase 230V power supply (Between each hot wire, 220 volts can be measured) when 2 phases are connected. it means 2 phases out of 3 phases are connected. The supply power is 3-phase 380V,which means between each hot wire, 380 volts can be measured and between the neutral and any of hot wires, 220 volts can be measured • Is the heater Delta or Wye? And the supply? Nov 3, 2018 at 8:55 • Technically, this can be easy to do with a three-phase transformer. However, these transformers can be heavy and expensive. How much power does your heater require? Are both your power supply and heater using the same frequency? (both 50Hz, or both 60Hz?) Nov 3, 2018 at 8:56 • Actually the equipment use 2 phases out of 3-phase 230V power. both equipment and supply are 50Hz and the power that equipment needs is 23.2 KVA. I have found a transformer that does this job but the weight is 150Kg. I'm looking for a solution which can be used as portable. Nov 3, 2018 at 9:04 • The heater and supply both are Delta Nov 3, 2018 at 9:08 • If you have 3-phase 380 V in delta configuration, you have also 3-phase 220 V in star configuration requiring an additional neutral connector. But if you want 230 V, you need 400 V. Of course you may use 3 phase power transformer, primary in delta, secondary in star. It will be portable for very low power. – Uwe Nov 3, 2018 at 16:24 Figure 1. Coloured up version for single-phase 230 V + N wiring. It appears from the wiring diagram that you can just connect L3 to neutral instead with no internal modification. The only concern should be that the components' insulation now has to withstand 230 V instead of $$\ \frac {230}{\sqrt 3} \ \text V\$$. You should check, if possible, that they are rated for that. Most likely your heater expects supply of three phases at 230 V(rms) phase-to-neutral, and the supply you have is three phases at 380 V(rms) phase-to-phase. Fortunately these are the same* thing! So most likely you won't need any conversion, except perhaps a plug adapter. (*: Within a few percent that can be chalked up to rounding; and utilities seemingly redefining their nominal voltage by 10 V up or down every several decades without non-electricians in the populance noticing; and is dwarfed by tolerances anyway). It is quite rare and nonstandard to find three-phase AC at 230 V measured phase-to-phase or 380 V measured phase-to-neutral, so it would require extraordinary evidence to believe your heater or supply is one of those. • I added wiring diagram and additional information to the question. Nov 4, 2018 at 5:53 • You have either 220 V phase-to-neutral and 380 V phase-to-phase or you have 230 V phase-to-neutral and 400 V phase-to-phase. But you don't have 230 V phase-to-neutral and 380 V phase-to-phase. The factor is sqrt(3) for both cases. – Uwe Nov 4, 2018 at 9:59 • @KayvanMilani: I see no reason to assume from your diagram that the "230 V" it speaks about would be measured between the phases. Quite on the contrary. 
It's in German, and in all German-speaking countries a supply of three-phase 230V phase-to-neutral is the standard, whereas 230V phase-to-phase is a weird nonstandard thing, so it would, as I said, require extraordinary evidence to believe any German-speaking engineer would design an appliance to require it. Nov 4, 2018 at 10:01 • @Uwe: I did see your comment on the question and added the third paragraph to this answer especially to respond to that. Nov 4, 2018 at 10:01 There may be a simple solution to this depending on the connection. If the load is connected between the two phases with no neutral connection, as you have indicated in the comments section, you can connect the bearing heater between L1 and the neutral from your 380v supply. This will give you a voltage of approximately 220v and de-rate the output power by about 1KVA. The only other option without knowing the internal connections would be a big transformer on a trolley. Looking at your wiring diagram it appears that what I suggested above will work. The only problem that I can see is if the 230v neutral wire is used by any monitoring electronics not shown on the diagram. simulate this circuit – Schematic created using CircuitLab The best way to do this would be a 3 wire connection as shown above by replacing the existing plug with a 380v one, or if you need to keep the 230v compatibility, an adaptor box that is clearly marked for use with only this unit. • I added wiring diagram and additional information to the question. Nov 4, 2018 at 5:51 simulate this circuit – Schematic created using CircuitLab You only have to rewire the heating elements from wye to delta connection. EDIT: You said: "Actually the equipment use 2 phases out of 3-phase 230" simulate this circuit All you have to do is to connect your load between phase and neutral, and not phase to phase anymore. But 22kVA seems huge power for single phase operation. You'd better dismount your device and post some pictures. For example you could separate the electronic part, which needs low voltage by means of an SMPS or transformer, from the power part, by replacing the two-phase Graetz diode bridge with a 3-phase diode bridge. Check if L1, L2 and L3 are series connected as shown in the schematics. There is a notation: 3x6mm^2 or 3x10mm^2 at 2x220V, which means that the sections are made for different voltages by combining them in series or parallel. You have to use phase-to-neutral voltage for the control unit. This can be easily done by rewiring the L3 control supply to N. The next thing is to check if the coils are really connected in series. IMO, by combining L2 and L3 in series or parallel you can get 380V/220V. So they should be connected in parallel right now. simulate this circuit Another possibility for combining multiple voltages. Watch whether the windings are wound in the same direction or are connected anti-series, cancelling their magnetic fields. simulate this circuit • Are you sure about that? Isn't the first circuit 3x380V? The voltage between phases is 380V/400V and the voltage between phase and neutral is 230V Nov 3, 2018 at 13:35 • First circuit is 220 V phase to phase. The second circuit is 220 V phase to neutral. The OP says in the comments that the load is connected between two phases though, so this diagram is not accurate. The one load would have to be wired phase-neutral. Nov 3, 2018 at 16:15 • Each element has nominal voltage of 220V. 1st picture: the mains network is 3x220V (phase to neutral is 127V). 2nd picture: the mains network is 3x380V (phase to neutral is 220V).
The OP asked a question with a very poor description; he should edit the question with details to get an answer. Nov 3, 2018 at 16:38 • @Chupacabras phase to neutral is not 127V. Phase to neutral is 230V (in Europe). Phase to phase is 400V. Nov 3, 2018 at 17:37 • @Chupacabras Has the OP said that he's from the EU? In the EU there is no 3x220V mains of the kind the OP is asking for. Those are totally different networks: 3x220V, 3x380V and 3x400V. Europe is 3x400V, others are not. Nov 3, 2018 at 18:30
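All of the answers above lean on the same relationship between line-to-line and line-to-neutral voltage in a balanced three-phase system. As a quick sanity check with the nominal figures quoted in this thread:

$$V_{LN} = \frac{V_{LL}}{\sqrt{3}}, \qquad \frac{380\ \text{V}}{\sqrt{3}} \approx 219\ \text{V}, \qquad \frac{400\ \text{V}}{\sqrt{3}} \approx 231\ \text{V}$$

which is why a heater element rated for roughly 220–230 V can usually be fed phase-to-neutral from a 380/400 V network without any transformer.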
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.447671115398407, "perplexity": 1996.9867589148882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572089.53/warc/CC-MAIN-20220814234405-20220815024405-00799.warc.gz"}
https://chemistry.stackexchange.com/questions/87145/why-doesn-t-aniline-undergo-friedel-crafts-alkylation
# Why doesn't aniline undergo Friedel-Crafts alkylation?

Even though aniline is an activated benzene derivative, it still doesn't undergo Friedel-Crafts alkylation. Why? Can it undergo Friedel-Crafts acylation?

The answer lies in the fact that aniline is a Lewis base and $\ce{AlCl3}$ is a Lewis acid. The reaction between aniline and $\ce{AlCl3}$ hampers the catalytic activity of $\ce{AlCl3}$ required to perform the Friedel-Crafts alkylation and acylation. Despite the activation of the $\ce{NH2}$ group, Friedel-Crafts alkylations and acylations fail because the $\ce{NH2}$ group acts as a Lewis base and interacts with the Lewis acid catalyst. ... A way to overcome this problem is to convert the $\ce{NH2}$ group into an amido group prior to the Friedel-Crafts reaction. ... The amide group can be hydrolysed back to the amino group after alkylation. The aniline forms a Lewis acid–base adduct with $\ce{AlCl3}$; this prevents the alkyl or acyl chloride from interacting with $\ce{AlCl3}$.
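Schematically (one way to write the Lewis acid–base interaction, with the adduct shown as a simple 1:1 addition compound):

$$\ce{C6H5NH2 + AlCl3 -> C6H5NH2*AlCl3}$$

With the nitrogen lone pair tied up in this adduct, the catalyst is no longer free to activate the alkyl or acyl chloride.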
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3515937924385071, "perplexity": 8540.750700845441}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574159.19/warc/CC-MAIN-20190921001810-20190921023810-00313.warc.gz"}
http://cejsh.icm.edu.pl/cejsh/element/bwmeta1.element.8619abbc-0821-3bf2-9834-ae1f3eae00ce
## ARS 2010 | 43 | 2 | 137-153

### DIE DECKENMALEREIEN IM „PRUNKSAAL“ DER WIENER NATIONALBIBLIOTHEK UND IHR VERHÄLTNIS ZUM ALBRECHTSCODEX (WIEN, ÖSTERREICHISCHE NATIONALBIBLIOTHEK, COD. 7853). IDEE UND AUSFÜHRUNG IN DER BILDENDEN KUNST UNTER KAISER KARL VI.

Title variant (EN): Vault paintings in the 'Prunksaal' of the Vienna National Library and their connection to the Albrechtscodex (Wien, ÖNB, Cod. 7853). Idea and its execution in fine arts in the era of Emperor Karl VI

Language of publication: German

Abstract (EN): The paper examines the connection between a work of literature (the Albrechtscodex, before 1734) and a work of art (the vault paintings in the Vienna National Library by Daniel Gran, 1726 and 1730). Given that the Albrechtscodex has only a small number of continuous text passages, the part dealing with the National Library gains great importance. The text by the concettist Conrad Adolph von Albrecht differs massively from the actual outcome. It seems that it originated later than the paintings, thus showing a rather free relation between the concetto and the actual work of art in the given era.

Document type: article, pages 137-153

Author: Univ.-Doz. Dr. Werner Telesko, Österreichische Akademie der Wissenschaften, Kommission für Kunstgeschichte, Dr. Ignaz Seipel-Platz 2, A-1010 Wien, Austria

Identifier: CEJSH db identifier 11SKAAAA099027
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.827954113483429, "perplexity": 21251.49229776137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512400.59/warc/CC-MAIN-20181019124748-20181019150248-00025.warc.gz"}
https://www.drmaciver.com/2013/03/a-manifesto-for-error-reporting/
# A manifesto for error reporting So I do a lot of debugging. It’s not because I write a lot of broken code so much as that I seem to be a go to guy for “Something mysterious is happening. Can you lend a hand?” That’s fine. It’s something I’m reasonably good at, and sometimes it’s even enjoyable. But it means I have a bit of a triggering issue which will get me very angry. “What is that triggering issue, David? Tell us, please!” I hear you say. That issue is very simple: Bad error reporting. A thing developers don’t seem to understand is that what happens when things go wrong is every bit as important as what happens when things go right. Possibly more important. If you don’t realise this, when things go wrong you will feel the wrath of myself and all the ops people who have to deal with your software floating through the air, trying to set you on fire with our minds. While I’m reasonably sure psychic powers aren’t actually a thing, do you really want to take that chance? So, if you don’t want to experience spontaneous geek rage induced combustion, here is some helpful advice for you to follow. First, a word on process. When something goes wrong, the question I am asking is “How can I make this not go wrong?”. In order to answer this, I must first answer the following questions: 1. Where has it gone wrong? 2. What has gone wrong with it? 3. Why has it gone wrong? Your job as a writer of software is to make it as easy as possible for me to answer these three questions. Next a note of context: How am I attempting to answer this question? Well, in an ideal world, I’m attempting to answer it because I have a nice precise test case which reproduces the problem. However, I first need to get to that point, and in order to get to that point I need enough information to give a pretty good answer to the first two questions. An entire application is not a test case, especially not if it’s in a complicated deployment environment. I need enough information about where it has gone wrong to extract a smaller test case and I need enough information about what has gone wrong to put that test case in a state where it will demonstrate the problem. So what I’m actually looking at initially is almost certainly a log file. It’s OK if this log file is really the screen of a console, but the point is that something, somewhere, has given me a textual record that says “Hey, something’s gone a bit wrong. Here’s some info for you to look at”. There is a possibility that if you’re writing an application or a framework or something you have deliberately avoided producing such a textual record of anything, or are piping your errors to /dev/null or something. Hopefully this is not the case, because if it is you don’t need to worry about spontaneous combustion because whomever has to deploy and maintain your code has probably already tracked you down to your home address and killed you in your sleep. No jury would convict. So, from now on, I’m assuming you’ve done the decent thing and there’s some way of going from errors that occur to logs of such errors. What can you do to make everyone’s lives easier? #### Error messages Obviously the prerequisite of this is that you actually tell me something in your error message. You’d never just write an error message that said “Something went wrong”, right? 
So assuming you've already got error messages that tell me roughly what went wrong, here is how to have error messages that tell me exactly what went wrong: If your error message is triggered by a value, for the love of god include that value in your error message. People don't seem to do this. I don't understand why. It's very simple. Don't do:

```ruby
error "Bad argument"
```

Do do:

```ruby
error "Bad argument: #{argument.inspect}"
```

Even better if you tell me exactly why it is invalid:

```ruby
error "You can't curdle a frobnitzer which has already been curdled: #{argument.inspect}"
```

(Side note: All examples here will be in ruby, because that's mostly what I've been working with when this has been pissing me off. The examples should be easily portable and the principles are language agnostic).

That's it. You've already made my life at least 27% simpler with this one step. Why is this important? It's important because tracking data flow is hard. It's entirely possible that the function you've raised an error in is about 50 calls deep. I can probably track down what has been passed to it eventually after carefully looking through calls and such-like, but I shouldn't need to. If you are not including the value in your error message then you have exactly the information I need at your fingertips and are failing to tell me. That's kinda a dick move.

#### Exceptions are awesome. Do more of those

You know what are great? Exceptions. Exceptions are great. I mean obviously I'd rather if your code isn't throwing exceptions, but I'd rather it's not throwing exceptions because it doesn't need to because everything is going swimmingly, not because it wouldn't throw them if something went wrong. Why are exceptions great? Exceptions are nice for structuring error handling in code; they provide good classification for error recoveries, etc. etc. That's not what I care about here. Exceptions contain one thing that elevates them to the status of patron saint of people who have to debug problems. They carry a stack trace. It's like a glorious little audit trail that points the finger at exactly where the problem occurred. If you've followed the previous instructions and given them a good error message too then you've probably told me exactly what I need to know to reproduce the problem (there are some, ahem, exceptions to this which I will get on to later, but this is true most of the time). Side note: I know this isn't true in all languages, e.g. C++ exceptions don't carry stack traces (I think) and Haskell ones have less than useful stack traces due to lazy evaluation. You have my sympathies. Everyone else, no excuses. Further, they carry exactly the information I want to appear in the log on top of that: an error category and a message. An exception which bubbles up to the log file is my best friend for problem debugging. Some specific notes on exceptions:

##### If you see an exception, say an exception

Never hide exceptions from me. Ever. If you catch an exception, I need to know about it unless you're really goddamn sure I don't (examples where you may validly be goddamn sure I don't include Python's StopIteration and any other exceptions used for control flow. Yes this is a valid thing to do). I don't care if you send an email, dump it in a log file, whatever you want. I just need to know about it, and I need to know at the very least the exception class, the exception message and for the love of god the exception stack trace.
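By way of illustration, here is one minimal (and entirely hypothetical) way of doing that in Ruby; the helper is made up, and any logging setup that records the same three things is just as good:

```ruby
require 'logger'

LOG = Logger.new($stderr)

# Run a block; if it raises, record the exception class, message and full
# stack trace somewhere a human will find them, then let it propagate.
def log_and_reraise
  yield
rescue StandardError => e
  LOG.error("#{e.class}: #{e.message}\n  #{e.backtrace.join("\n  ")}")
  raise
end

log_and_reraise { Hash.new.fetch(:missing_key) } # KeyError gets logged, then propagates
```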
##### Thou Shalt Not Fuck With The Stack Trace

A lot of frameworky things (rails, rspec, etc. I define framework as any library or application where the usage pattern is "Don't call our code, we'll call yours") think that exceptions are confusing and unhelpful. They might show you some of the stack trace, but you really don't want the whole thing do you? Here, let us filter out those unhelpful bits.

NO. NO NO NO NO NO NO NO NO NO NO. NO.

The chances that you actually correctly understand what is the important bit of the stack trace are effectively zero. Even if you somehow manage to correctly understand this, you are removing important context. The lack of that context will confuse me more than its presence. If I ever find you are doing this I will simply have to do everything again with the "stop lying to me you bastard" flag turned on. And that's terrible.

##### Except…

There is one case in which fucking with the stack trace is not only permitted but also mandatory. In particular, if there is another stack trace involved you should also include that. I often see code like this:

```ruby
begin
  ...
rescue LowLevelException => e
  raise MyLibrarySpecificException.new(e.message)
end
```

It doesn't look like you're doing it but you are once again fucking with the stack trace. Remember what I said about not doing that? It's OK to wrap exceptions. I understand the reasoning for doing it, and it's often a good idea. However: Your language almost certainly gives you the capability to override the stack trace. When you are wrapping an exception you must do this so that it includes the original stack trace. Ideally you would include both back traces, so your logs would contain something like:

```
MyLibrarySpecificException: Wrapped LowLevelException: "A message"
    this error was thrown here
-- WRAPPED BACKTRACE --
    the original error was thrown there
```

The details don't matter. The point is: Include both back traces if you can, include only the original stack trace of the exception you're wrapping if you absolutely must. Here's an example of how you can do that in Ruby:

```ruby
class MyLibrarySpecificException < StandardError
  attr_reader :wrapped_exception

  def initialize(wrapped_exception)
    super("Wrapped #{wrapped_exception.class}: #{wrapped_exception.message}")
    @wrapped_exception = wrapped_exception
  end

  def backtrace
    super + ["---WRAPPED EXCEPTION---"] + wrapped_exception.backtrace
  end
end
```
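A hypothetical usage sketch of a wrapper like the one above (the ArgumentError here just stands in for whatever low-level exception you happen to be wrapping):

```ruby
begin
  begin
    raise ArgumentError, "the frobnitzer was already curdled"
  rescue ArgumentError => e
    raise MyLibrarySpecificException.new(e)
  end
rescue MyLibrarySpecificException => wrapped
  puts wrapped.message              # Wrapped ArgumentError: the frobnitzer was already curdled
  puts wrapped.backtrace.join("\n") # both stack traces, separated by ---WRAPPED EXCEPTION---
end
```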
Enough of exceptions. Some more general principles.

#### If something goes wrong, tell me

This rant isn't about Padrino, but it was a triggering point for it. One of Padrino's more interesting behaviours is that if you have a syntax error in one of your controller files it won't fail to start. Instead what will happen is it will log a warning, continue loading and then just go "Eh, I don't know anything about that" if you try to use routes defined in a controller it failed to load. A common design principle seems to be that you should attempt to do the right thing – recover from errors, guess what the user meant, etc. The problem with fuzzy behaviour is that it produces fuzzy results. Postel's Law is deeply unhelpful for library design: Code which you are running should be correct. If it's a bit wrong, you should not attempt to run it, you should error out and make me fix my code. This is because errors in code are signs of error in thought. The chances of my accidentally calling your code with the wrong value is much higher than the chances of me deliberately being a bit sloppy (and if I'm deliberately being a bit sloppy it's OK to slap my wrist and punish me for it). Code which is doing the wrong thing is going to be a problem now or a problem later, and I'd much rather you told me it was a problem now so I can fix it now rather than having to locate it later. On the subject of "now rather than later".

#### Validate early, validate often

Suppose I write the following code:

```ruby
class HelpfulHashWrapper
  def initialize(hash)
    @hash = hash
  end

  def do_something(some_key)
    return @hash[some_key]
  end
end
```

(ignore the fact that this class is stupid) Now suppose I do the following:

```
1.8.7 :029 > my_wrapper = HelpfulHashWrapper.new(nil)
 => #<HelpfulHashWrapper:0x... @hash=nil>
1.8.7 :032 > my_wrapper.do_something "hi"
NoMethodError: undefined method `[]' for nil:NilClass
	from (irb):26:in `do_something'
	from (irb):32
```

Where is the error here? Hint: It's not the point where the exception was raised. I constructed the HelpfulHashWrapper with an argument that was never going to work. My HelpfulHashWrapper unhelpfully didn't tell me that I had put it into an invalid state. Why is this important? Remember when I said that the first question I needed to be able to answer was "Where has it gone wrong?" If I get an error when I try to use an object in an invalid state, I'm not really able to answer that question. Instead what I need to do is back track to the point where the object got put into an invalid state. This is hard work. The following version of the class will make my life much easier:

```ruby
class HelpfulHashWrapper
  def initialize(hash)
    raise "I can only helpfully wrap hashes. #{hash.inspect} is not a hash" unless hash.is_a? Hash
    @hash = hash
  end

  def do_something(some_key)
    return @hash[some_key]
  end
end
```

I will now discover very early on when I've done something wrong, rather than waiting to find it at a mysterious later date. Basically: The closer to the original error you report the problem, the easier it is for me to identify and fix the problem.

#### In summary

1. Above all else, give me helpful error messages
2. Helpful error messages contain any invalid values and a reason as to why they're invalid.
3. Throw exceptions if something goes wrong.
4. If you see an exception, say an exception.
5. Do not fuck with the stack trace
6. Do not attempt to help me by not throwing an exception. If something maybe should throw an exception, it should throw an exception.
7. Validate your internal state, and throw an exception when your state becomes invalid, not when I try to use it in an invalid state.

Doing these things will significantly reduce my blood pressure, will make your ops guys love you (or at least resent you slightly less bitterly), and will reduce your chances of spontaneous combustion by at least 43%.

This entry was posted in programming.

## 17 thoughts on "A manifesto for error reporting"

1. Henry Amen. What especially drives me crazy is when people cover up errors in configuration. That is always a What Were They Thinking moment. Or nightmare. Fail fast is your friend. Even, nay, especially, if you are trying to build robust systems.
2. cavetroll Great post! I agree with a lot of the points. A comment on C++ exceptions: default STL exceptions don't carry stack traces. However, at least in UNIX, you can use the system backtrace() utility to get a stack trace, compile the program with debug symbols, and use abi::__cxa_demangle() from cxxabi.h to display the symbols nicely. All this can be rolled into a custom exception class. So – possible, but requires a bit of overhead :)
3. Pingback: An Error Reporting Manifesto | ebeab
4. Agree with all these points. Further: if code is logging something, then of the two options: 1.
Log in advance, “Will write XML file to $path…” 2. Log afterwards, “Successfully written XML file to$path.” Then one should definitely do the first. Although it feels nice to log after the event, because you might have more information such as IDs assigned, if the process crashes, it’s important to know what it was trying to do when it crashed. If there’s information after the event that must be printed then do a second log. 5. Daenney I can agree with you on everything, except for one thing: It’s OK if this log file is really the screen of a console In my experience, if a developer doesn’t care enough to set up a proper logging facility with the option to write to file, syslog, whatever and instead just expects you to redirect shell output to something else than /dev/null they haven’t really thought about logging which will probably bite you later on. It’s code that’s usually a mess of log and print statements in exception blocks that go “You should never get to see this”. Oh djeesch, thanks. In most languages it’s not difficult at all to set up a logger and make it user-configurable, so just do it already. 1. david Post author So the use case I had in mind here was actually in test and development where I think logging to the console is eminently sensible. However I will say in its defence that I think logging to STDOUT and STDERR is kindof acceptable behaviour for production systems when you want to decouple your code from your deployment strategy a bit. It’s a legitimate school of thought that applications shouldn’t self-daemonize, and as such shouldn’t know about where their log files live and instead you should use something like libslack’s daemon program do daemonize them. 1. Sheldon Hearn Followers of that school of thought pay much for their ideal. If you don’t daemonize yourself, don’t manage your own pidfile, hold a single log file descriptor open for your lifetime, you demand the selection of weak operational strategies. Or maybe I just haven’t seen the big silver bullet that came along and made everyone forget how horrible daemon(8), supervise(8) and friends were. But to maintain perspective… this is a big gripe with a small comment on a small facet of a largely great post. :-) 2. david Post author We’ve actually found daemon (the libslack one) pretty good. Or at least good enough for our limited usage. One thing about the “hold a single log file descriptor open” is that the descriptor you’re holding open doesn’t have to be the actual log file you’re writing to – output to it can go to syslog or some other central logging service, and the managing program can handle signals that cause it to close and reopen the descriptor, etc. I’m not religiously of the “This is how it should work”, but it has a lot of appeal in terms of allowing you to configure how your daemons behave rather flexibly rather than having to have the logic in every program you run. 6. jrochkind I would add: Tell me in docs what exception classes a method might raise, in what circumstances. In a language like ruby, in docs is the only place it can be. In a language like Java, _theoretically_ this aspect is self documenting… but in practice, there are popular ways to obfuscate it anyway. 7. Phuong Ng I don’t think it is ever a good idea to fuck with the stack trace. I did it once and my dick became infected, I was lucky it didn’t have to be amputated. 8. Anders Hovmöller I have another pet peeve that you didn’t mention: Never EVER raise an exception with multiple error causes. 
You see this all the time in code with errors such as “Either the frobnitz isn’t frobbed or the barblang is not blanged”. Obviously the code has checked these two cases separately, so why the hell aren’t you telling me which problem it was?! 1. david Post author Oh yeah, good point. One thing that I will say in… not exactly defence, but explanation, is that often these are the results of translating low level signals and error codes. e.g. you see this a lot with file system operations. e.g. a file system move will return the same error code if the source file or target directory don’t exist. It’s not valid to then recheck to see which of the two conditions failed because concurrency.
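When the checks are your own, rather than a single error code handed back by the OS, the fix for Anders' complaint is cheap. A hypothetical sketch, borrowing his names:

```ruby
# Check each precondition separately so each failure gets its own message,
# and include the offending value in that message.
def curdle!(frobnitz, barblang)
  raise "frobnitz is not frobbed: #{frobnitz.inspect}" unless frobnitz[:frobbed]
  raise "barblang is not blanged: #{barblang.inspect}" unless barblang[:blanged]
  :curdled
end

curdle!({ :frobbed => true }, { :blanged => false }) # raises "barblang is not blanged: ..."
```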
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3377911448478699, "perplexity": 1553.874609294657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103640328.37/warc/CC-MAIN-20220629150145-20220629180145-00488.warc.gz"}
https://wmtang.org/other/tractatus-logico-philosophicus/
# Tractatus Logico-Philosophicus (The Pears & McGuinness Edition) 1. The world is all that is the case. 1.1 The world is the totality of facts, not of things. 1.11 The world is determined by the facts, and by their being all the facts. 1.12 For the totality of facts determines what is the case, and also whatever is not the case. 1.13 The facts in logical space are the world. 1.2 The world divides into facts. 1.21 Each item can be the case or not the case while everything else remains the same. 2. What is the case? a fact? is the existence of states of affairs. 2.0 (empty) 2.01 A state of affairs (a state of things) is a combination of objects (things). 2.011 It is essential to things that they should be possible constituents of states of affairs. 2.012 In logic nothing is accidental: if a thing can occur in a state of affairs, the possibility of the state of affairs must be written into the thing itself. 2.0121 It would seem to be a sort of accident, if it turned out that a situation would fit a thing that could already exist entirely on its own. If things can occur in states of affairs, this possibility must be in them from the beginning. (Nothing in the province of logic can be merely possible. Logic deals with every possibility and all possibilities are its facts.) Just as we are quite unable to imagine spatial objects outside space or temporal objects outside time, so too there is no object that we can imagine excluded from the possibility of combining with others. If I can imagine objects combined in states of affairs, I cannot imagine them excluded from the possibility of such combinations. 2.0122 Things are independent in so far as they can occur in all possible situations, but this form of independence is a form of connexion with states of affairs, a form of dependence. (It is impossible for words to appear in two different roles: by themselves, and in propositions.) 2.0123 If I know an object I also know all its possible occurrences in states of affairs. (Every one of these possibilities must be part of the nature of the object.) A new possibility cannot be discovered later. 2.01231 If I am to know an object, though I need not know its external properties, I must know all its internal properties. 2.0124 If all objects are given, then at the same time all possible states of affairs are also given. 2.013 Each thing is, as it were, in a space of possible states of affairs. This space I can imagine empty, but I cannot imagine the thing without the space. 2.0131 A spatial object must be situated in infinite space. (A spatial point is an argument-place.) A speck in the visual field, though it need not be red, must have some colour: it is, so to speak, surrounded by colour-space. Notes must have some pitch, objects of the sense of touch some degree of hardness, and so on. 2.014 Objects contain the possibility of all situations. 2.0141 The possibility of its occurring in states of affairs is the form of an object. 2.02 Objects are simple. 2.020 (empty) 2.0201 Every statement about complexes can be resolved into a statement about their constituents and into the propositions that describe the complexes completely. 2.021 Objects make up the substance of the world. That is why they cannot be composite. 2.0211 If the world had no substance, then whether a proposition had sense would depend on whether another proposition was true. 2.0212 In that case we could not sketch any picture of the world (true or false). 
2.022 It is obvious that an imagined world, however different it may be from the real one, must have something? a form? in common with it. 2.023 Objects are just what constitute this unalterable form. 2.0231 The substance of the world can only determine a form, and not any material properties. For it is only by means of propositions that material properties are represented? only by the configuration of objects that they are produced. 2.0232 In a manner of speaking, objects are colourless. 2.0233 If two objects have the same logical form, the only distinction between them, apart from their external properties, is that they are different. 2.02331 Either a thing has properties that nothing else has, in which case we can immediately use a description to distinguish it from the others and refer to it; or, on the other hand, there are several things that have the whole set of their properties in common, in which case it is quite impossible to indicate one of them. For if there is nothing to distinguish a thing, I cannot distinguish it, since otherwise it would be distinguished after all. 2.024 The substance is what subsists independently of what is the case. 2.025 It is form and content. 2.0251 Space, time, colour (being coloured) are forms of objects. 2.026 There must be objects, if the world is to have unalterable form. 2.027 Objects, the unalterable, and the subsistent are one and the same. 2.0271 Objects are what is unalterable and subsistent; their configuration is what is changing and unstable. 2.0272 The configuration of objects produces states of affairs. 2.03 In a state of affairs objects fit into one another like the links of a chain. 2.031 In a state of affairs objects stand in a determinate relation to one another. 2.032 The determinate way in which objects are connected in a state of affairs is the structure of the state of affairs. 2.033 Form is the possibility of structure. 2.034 The structure of a fact consists of the structures of states of affairs. 2.04 The totality of existing states of affairs is the world. 2.05 The totality of existing states of affairs also determines which states of affairs do not exist. 2.06 The existence and non-existence of states of affairs is reality. (We call the existence of states of affairs a positive fact, and their non-existence a negative fact.) 2.061 States of affairs are independent of one another. 2.062 From the existence or non-existence of one state of affairs it is impossible to infer the existence or non-existence of another. 2.063 The sum-total of reality is the world. 2.1 We picture facts to ourselves. 2.11 A picture presents a situation in logical space, the existence and non-existence of states of affairs. 2.12 A picture is a model of reality. 2.13 In a picture objects have the elements of the picture corresponding to them. 2.131 In a picture the elements of the picture are the representatives of objects. 2.14 What constitutes a picture is that its elements are related to one another in a determinate way. 2.141 A picture is a fact. 2.15 The fact that the elements of a picture are related to one another in a determinate way represents that things are related to one another in the same way. Let us call this connexion of its elements the structure of the picture, and let us call the possibility of this structure the pictorial form of the picture. 2.151 Pictorial form is the possibility that things are related to one another in the same way as the elements of the picture. 
2.1511 That is how a picture is attached to reality; it reaches right out to it. 2.1512 It is laid against reality like a measure. 2.15121 Only the end-points of the graduating lines actually touch the object that is to be measured. 2.1513 So a picture, conveived in this way, also includes the pictorial relationship, which makes it into a picture. 2.1514 The pictorial relationship consists of correlations of the picture’s element with things. 2.1515 These correlations are, as it were, the feelers of the picture’s elements, with which the picture touches reality. 2.16 If a fact is to be a picture, it must have something in common with what it depicts. 2.161 There must be something identical in a picture and what it depicts, to enable the one to be a picture of the other at all. 2.17 What a picture must have in common with reality, in order to be able to depict it? correctly or incorrectly? in the way that it does, is its pictorial form. 2.171 A picture can depict any reality whose form it has. A spatial picture can depict anything spatial, a coloured one anything coloured, etc. 2.172 A picture cannot, however, depict its pictorial form: it displays it. 2.173 A picture represents its subject from a position outside it. (Its standpoint is its representational form.) That is why a picture represents its subject correctly or incorrectly. 2.174 A picture cannot, however, place itself outside its representational form. 2.18 What any picture, of whatever form, must have in common with reality, in order to be able to depict it? correctly or incorrectly? in any way at all, is logical form, i.e. the form of reality. 2.181 A picture whose pictorial form is logical form is called a logical picture. 2.182 Every picture is at the same time a logical one. (On the other hand, not every picture is, for example, a spatial one.) 2.19 Logical pictures can depict the world. 2.2 A picture has logico-pictorial form in common with what it depicts. 2.20 (empty) 2.201 A picture depicts reality by representing a possibility of existence and non-existence of states of affairs. 2.202 A picture represents a posible situation in logical space. 2.203 A picture contains the possibility of the situation that it represents. 2.21 A picture agrees with reality or fails to agree; it is correct or incorrect, true or false. 2.22 What a picture represents it represents independently of its truth or falsity, by means of its pictorial form. 2.221 What a picture represents is its sense. 2.222 The agreement or disagreement or its sense with reality constitutes its truth or falsity. 2.223 In order to tell whether a picture is true or false we must compare it with reality. 2.224 It is impossible to tell from the picture alone whether it is true or false. 2.225 There are no pictures that are true a priori. 3. A logical picture of facts is a thought. 3.00 (empty) 3.001 ‘A state of affairs is thinkable’: what this means is that we can picture it to ourselves. 3.0 (empty) 3.01 The totality of true thoughts is a picture of the world. 3.02 A thought contains the possibility of the situation of which it is the thought. What is thinkable is possible too. 3.03 Thought can never be of anything illogical, since, if it were, we should have to think illogically. 3.031 It used to be said that God could create anything except what would be contrary to the laws of logic. The truth is that we could not say what an ‘illogical’ world would look like. 
3.032 It is as impossible to represent in language anything that ‘contradicts logic’ as it is in geometry to represent by its coordinates a figure that contradicts the laws of space, or to give the coordinates of a point that does not exist. 3.0321 Though a state of affairs that would contravene the laws of physics can be represented by us spatially, one that would contravene the laws of geometry cannot. 3.04 If a thought were correct a priori, it would be a thought whose possibility ensured its truth. 3.05 A priori knowledge that a thought was true would be possible only if its truth were recognizable from the thought itself (without anything a to compare it with). 3.1 In a proposition a thought finds an expression that can be perceived by the senses. 3.11 We use the perceptible sign of a proposition (spoken or written, etc.) as a projection of a possible situation. The method of projection is to think of the sense of the proposition. 3.12 I call the sign with which we express a thought a propositional sign. And a proposition is a propositional sign in its projective relation to the world. 3.13 A proposition, therefore, does not actually contain its sense, but does contain the possibility of expressing it. (‘The content of a proposition’ means the content of a proposition that has sense.) A proposition contains the form, but not the content, of its sense. 3.14 What constitutes a propositional sign is that in its elements (the words) stand in a determinate relation to one another. A propositional sign is a fact. 3.141 A proposition is not a blend of words.(Just as a theme in music is not a blend of notes.) A proposition is articulate. 3.142 Only facts can express a sense, a set of names cannot. 3.143 Although a propositional sign is a fact, this is obscured by the usual form of expression in writing or print. For in a printed proposition, for example, no essential difference is apparent between a propositional sign and a word. (That is what made it possible for Frege to call a proposition a composite name.) 3.1431 The essence of a propositional sign is very clearly seen if we imagine one composed of spatial objects (such as tables, chairs, and books) instead of written signs. 3.1432 Instead of, ‘The complex sign “aRb” says that a stands to b in the relation R’ we ought to put, ‘That “a” stands to “b” in a certain relation says that aRb.’ 3.144 Situations can be described but not given names. 3.2 In a proposition a thought can be expressed in such a way that elements of the propositional sign correspond to the objects of the thought. 3.20 (empty) 3.201 I call such elements ‘simple signs’, and such a proposition ‘complete analysed’. 3.202 The simple signs employed in propositions are called names. 3.203 A name means an object. The object is its meaning. (‘A’ is the same sign as ‘A’.) 3.21 The configuration of objects in a situation corresponds to the configuration of simple signs in the propositional sign. 3.22 In a proposition a name is the representative of an object. 3.221 Objects can only be named. Signs are their representatives. I can only speak about them: I cannot put them into words. Propositions can only say how things are, not what they are. 3.23 The requirement that simple signs be possible is the requirement that sense be determinate. 3.24 A proposition about a complex stands in an internal relation to a proposition about a constituent of the complex. A complex can be given only by its description, which will be right or wrong. 
A proposition that mentions a complex will not be nonsensical, if the complex does not exits, but simply false. When a propositional element signifies a complex, this can be seen from an indeterminateness in the propositions in which it occurs. In such cases we know that the proposition leaves something undetermined. (In fact the notation for generality contains a prototype.) The contraction of a symbol for a complex into a simple symbol can be expressed in a definition. 3.25 A proposition cannot be dissected any further by means of a definition: it is a primitive sign. 3.251 What a proposition expresses it expresses in a determinate manner, which can be set out clearly: a proposition is articulate. 3.26 A name cannot be dissected any further by means of a definition: it is a primitive sign. 3.261 Every sign that has a definition signifies via the signs that serve to define it; and the definitions point the way. Two signs cannot signify in the same manner if one is primitive and the other is defined by means of primitive signs. Names cannot be anatomized by means of definitions. (Nor can any sign that has a meaning independently and on its own.) 3.262 What signs fail to express, their application shows. What signs slur over, their application says clearly. 3.263 The meanings of primitive signs can be explained by means of elucidations. Elucidations are propositions that stood if the meanings of those signs are already known. 3.3 Only propositions have sense; only in the nexus of a proposition does a name have meaning. 3.31 I call any part of a proposition that characterizes its sense an expression (or a symbol). (A proposition is itself an expression.) Everything essential to their sense that propositions can have in common with one another is an expression. An expression is the mark of a form and a content. 3.311 An expression presupposes the forms of all the propositions in which it can occur. It is the common characteristic mark of a class of propositions. 3.312 It is therefore presented by means of the general form of the propositions that it characterizes. In fact, in this form the expression will be constant and everything else variable. 3.313 Thus an expression is presented by means of a variable whose values are the propositions that contain the expression. (In the limiting case the variable becomes a constant, the expression becomes a proposition.) I call such a variable a ‘propositional variable’. 3.314 An expression has meaning only in a proposition. All variables can be construed as propositional variables. (Even variable names.) 3.315 If we turn a constituent of a proposition into a variable, there is a class of propositions all of which are values of the resulting variable proposition. In general, this class too will be dependent on the meaning that our arbitrary conventions have given to parts of the original proposition. But if all the signs in it that have arbitrarily determined meanings are turned into variables, we shall still get a class of this kind. This one, however, is not dependent on any convention, but solely on the nature of the pro position. It corresponds to a logical form? a logical prototype. 3.316 What values a propositional variable may take is something that is stipulated. The stipulation of values is the variable. 3.317 To stipulate values for a propositional variable is to give the propositions whose common characteristic the variable is. The stipulation is a description of those propositions. 
The stipulation will therefore be concerned only with symbols, not with their meaning. And the only thing essential to the stipulation is that it is merely a description of symbols and states nothing about what is signified. How the description of the propositions is produced is not essential. 3.318 Like Frege and Russell I construe a proposition as a function of the expressions contained in it. 3.32 A sign is what can be perceived of a symbol. 3.321 So one and the same sign (written or spoken, etc.) can be common to two different symbols? in which case they will signify in different ways. 3.322 Our use of the same sign to signify two different objects can never indicate a common characteristic of the two, if we use it with two different modes of signification. For the sign, of course, is arbitrary. So we could choose two different signs instead, and then what would be left in common on the signifying side? 3.323 In everyday language it very frequently happens that the same word has different modes of signification? and so belongs to different symbols? or that two words that have different modes of signification are employed in propositions in what is superficially the same way. Thus the word ‘is’ figures as the copula, as a sign for identity, and as an expression for existence; ‘exist’ figures as an intransitive verb like ‘go’, and ‘identical’ as an adjective; we speak of something, but also of something’s happening. (In the proposition, ‘Green is green’? where the first word is the proper name of a person and the last an adjective? these words do not merely have different meanings: they are different symbols.) 3.324 In this way the most fundamental confusions are easily produced (the whole of philosophy is full of them). 3.325 In order to avoid such errors we must make use of a sign-language that excludes them by not using the same sign for different symbols and by not using in a superficially similar way signs that have different modes of signification: that is to say, a sign-language that is governed by logical grammar? by logical syntax. (The conceptual notation of Frege and Russell is such a language, though, it is true, it fails to exclude all mistakes.) 3.326 In order to recognize a symbol by its sign we must observe how it is used with a sense. 3.327 A sign does not determine a logical form unless it is taken together with its logico-syntactical employment. 3.328 If a sign is useless, it is meaningless. That is the point of Occam’s maxim. (If everything behaves as if a sign had meaning, then it does have meaning.) 3.33 In logical syntax the meaning of a sign should never play a role. It must be possible to establish logical syntax without mentioning the meaning of a sign: only the description of expressions may be presupposed. 3.331 From this observation we turn to Russell’s ‘theory of types’. It can be seen that Russell must be wrong, because he had to mention the meaning of signs when establishing the rules for them. 3.332 No proposition can make a statement about itself, because a propositional sign cannot be contained in itself (that is the whole of the ‘theory of types’). 3.333 The reason why a function cannot be its own argument is that the sign for a function already contains the prototype of its argument, and it cannot contain itself. 
For let us suppose that the function F(fx) could be its own argument: in that case there would be a proposition 'F(F(fx))', in which the outer function F and the inner function F must have different meanings, since the inner one has the form φ(fx) and the outer one has the form ψ(φ(fx)). Only the letter 'F' is common to the two functions, but the letter by itself signifies nothing. This immediately becomes clear if instead of 'F(Fu)' we write '(∃φ) : F(φu) . φu = Fu'. That disposes of Russell's paradox. 3.334 The rules of logical syntax must go without saying, once we know how each individual sign signifies. 3.34 A proposition possesses essential and accidental features. Accidental features are those that result from the particular way in which the propositional sign is produced. Essential features are those without which the proposition could not express its sense. 3.341 So what is essential in a proposition is what all propositions that can express the same sense have in common. And similarly, in general, what is essential in a symbol is what all symbols that can serve the same purpose have in common. 3.3411 So one could say that the real name of an object was what all symbols that signified it had in common. Thus, one by one, all kinds of composition would prove to be unessential to a name. 3.342 Although there is something arbitrary in our notations, this much is not arbitrary – that when we have determined one thing arbitrarily, something else is necessarily the case. (This derives from the essence of notation.) 3.3421 A particular mode of signifying may be unimportant but it is always important that it is a possible mode of signifying. And that is generally so in philosophy: again and again the individual case turns out to be unimportant, but the possibility of each individual case discloses something about the essence of the world. 3.343 Definitions are rules for translating from one language into another. Any correct sign-language must be translatable into any other in accordance with such rules: it is this that they all have in common. 3.344 What signifies in a symbol is what is common to all the symbols that the rules of logical syntax allow us to substitute for it. 3.3441 For instance, we can express what is common to all notations for truth-functions in the following way: they have in common that, for example, the notation that uses 'Pp' ('not p') and 'p C q' ('p or q') can be substituted for any of them. (This serves to characterize the way in which something general can be disclosed by the possibility of a specific notation.) 3.3442 Nor does analysis resolve the sign for a complex in an arbitrary way, so that it would have a different resolution every time that it was incorporated in a different proposition. 3.4 A proposition determines a place in logical space. The existence of this logical place is guaranteed by the mere existence of the constituents – by the existence of the proposition with a sense. 3.41 The propositional sign with logical coordinates – that is the logical place. 3.411 In geometry and logic alike a place is a possibility: something can exist in it. 3.42 A proposition can determine only one place in logical space: nevertheless the whole of logical space must already be given by it. (Otherwise negation, logical sum, logical product, etc., would introduce more and more new elements in co-ordination.) (The logical scaffolding surrounding a picture determines logical space. The force of a proposition reaches through the whole of logical space.)
3.5 A propositional sign, applied and thought out, is a thought. 4. A thought is a proposition with a sense. 4.00 (empty) 4.001 The totality of propositions is language. 4.002 Man possesses the ability to construct languages capable of expressing every sense, without having any idea how each word has meaning or what its meaning is? just as people speak without knowing how the individual sounds are produced. Everyday language is a part of the human organism and is no less complicated than it. It is not humanly possible to gather immediately from it what the logic of language is. Language disguises thought. So much so, that from the outward form of the clothing it is impossible to infer the form of the thought beneath it, because the outward form of the clothing is not designed to reveal the form of the body, but for entirely different purposes. The tacit conventions on which the understanding of everyday language depends are enormously complicated. 4.003 Most of the propositions and questions to be found in philosophical works are not false but nonsensical. Consequently we cannot give any answer to questions of this kind, but can only point out that they are nonsensical. Most of the propositions and questions of philosophers arise from our failure to understand the logic of our language. (They belong to the same class as the question whether the good is more or less identical than the beautiful.) And it is not surprising that the deepest problems are in fact not problems at all. 4.0031 All philosophy is a ‘critique of language’ (though not in Mauthner’s sense). It was Russell who performed the service of showing that the apparent logical form of a proposition need not be its real one. 4.0 (empty) 4.01 A proposition is a picture of reality. A proposition is a model of reality as we imagine it. 4.011 At first sight a proposition? one set out on the printed page, for example? does not seem to be a picture of the reality with which it is concerned. But neither do written notes seem at first sight to be a picture of a piece of music, nor our phonetic notation (the alphabet) to be a picture of our speech. And yet these sign-languages prove to be pictures, even in the ordinary sense, of what they represent. 4.012 It is obvious that a proposition of the form ‘aRb’ strikes us as a picture. In this case the sign is obviously a likeness of what is signified. 4.013 And if we penetrate to the essence of this pictorial character, we see that it is not impaired by apparent irregularities (such as the use [sharp] of and [flat] in musical notation). For even these irregularities depict what they are intended to express; only they do it in a different way. 4.014 A gramophone record, the musical idea, the written notes, and the sound-waves, all stand to one another in the same internal relation of depicting that holds between language and the world. They are all constructed according to a common logical pattern. (Like the two youths in the fairy-tale, their two horses, and their lilies. They are all in a certain sense one.) 4.0141 There is a general rule by means of which the musician can obtain the symphony from the score, and which makes it possible to derive the symphony from the groove on the gramophone record, and, using the first rule, to derive the score again. That is what constitutes the inner similarity between these things which seem to be constructed in such entirely different ways. And that rule is the law of projection which projects the symphony into the language of musical notation. 
It is the rule for translating this language into the language of gramophone records. 4.015 The possibility of all imagery, of all our pictorial modes of expression, is contained in the logic of depiction. 4.016 In order to understand the essential nature of a proposition, we should consider hieroglyphic script, which depicts the facts that it describes. And alphabetic script developed out of it without losing what was essential to depiction. 4.02 We can see this from the fact that we understand the sense of a propositional sign without its having been explained to us. 4.021 A proposition is a picture of reality: for if I understand a proposition, I know the situation that it represents. And I understand the proposition without having had its sense explained to me. 4.022 A proposition shows its sense. A proposition shows how things stand if it is true. And it says that they do so stand. 4.023 A proposition must restrict reality to two alternatives: yes or no. In order to do that, it must describe reality completely. A proposition is a description of a state of affairs. Just as a description of an object describes it by giving its external properties, so a proposition describes reality by its internal properties. A proposition constructs a world with the help of a logical scaffolding, so that one can actually see from the proposition how everything stands logically if it is true. One can draw inferences from a false proposition. 4.024 To understand a proposition means to know what is the case if it is true. (One can understand it, therefore, without knowing whether it is true.) It is understood by anyone who understands its constituents. 4.025 When translating one language into another, we do not proceed by translating each proposition of the one into a proposition of the other, but merely by translating the constituents of propositions. (And the dictionary translates not only substantives, but also verbs, adjectives, and conjunctions, etc.; and it treats them all in the same way.) 4.026 The meanings of simple signs (words) must be explained to us if we are to understand them. With propositions, however, we make ourselves understood. 4.027 It belongs to the essence of a proposition that it should be able to communicate a new sense to us. 4.03 A proposition must use old expressions to communicate a new sense. A proposition communicates a situation to us, and so it must be essentially connected with the situation. And the connexion is precisely that it is its logical picture. A proposition states something only in so far as it is a picture. 4.031 In a proposition a situation is, as it were, constructed by way of experiment. Instead of, ‘This proposition has such and such a sense, we can simply say, ‘This proposition represents such and such a situation’. 4.0311 One name stands for one thing, another for another thing, and they are combined with one another. In this way the whole group? like a tableau vivant? presents a state of affairs. 4.0312 The possibility of propositions is based on the principle that objects have signs as their representatives. My fundamental idea is that the ‘logical constants’ are not representatives; that there can be no representatives of the logic of facts. 4.032 It is only in so far as a proposition is logically articulated that it is a picture of a situation. (Even the proposition, ‘Ambulo’, is composite: for its stem with a different ending yields a different sense, and so does its ending with a different stem.) 
4.04 In a proposition there must be exactly as many distinguishable parts as in the situation that it represents. The two must possess the same logical (mathematical) multiplicity. (Compare Hertz’s Mechanics on dynamical models.) 4.041 This mathematical multiplicity, of course, cannot itself be the subject of depiction. One cannot get away from it when depicting. 4.0411 If, for example, we wanted to express what we now write as ‘(x). fx’ by putting an affix in front of ‘fx’? for instance by writing ‘Gen. fx’? it would not be adequate: we should not know what was being generalized. If we wanted to signalize it with an affix ‘g’? for instance by writing ‘f(xg)’? that would not be adequate either: we should not know the scope of the generality-sign. If we were to try to do it by introducing a mark into the argument-places? for instance by writing ‘(G,G). F(G,G)’ ? it would not be adequate: we should not be able to establish the identity of the variables. And so on. All these modes of signifying are inadequate because they lack the necessary mathematical multiplicity. 4.0412 For the same reason the idealist’s appeal to ‘spatial spectacles’ is inadequate to explain the seeing of spatial relations, because it cannot explain the multiplicity of these relations. 4.05 Reality is compared with propositions. 4.06 A proposition can be true or false only in virtue of being a picture of reality. 4.061 It must not be overlooked that a proposition has a sense that is independent of the facts: otherwise one can easily suppose that true and false are relations of equal status between signs and what they signify. In that case one could say, for example, that ‘p’ signified in the true way what ‘Pp’ signified in the false way, etc. 4.062 Can we not make ourselves understood with false propositions just as we have done up till now with true ones? ? So long as it is known that they are meant to be false.? No! For a proposition is true if we use it to say that things stand in a certain way, and they do; and if by ‘p’ we mean Pp and things stand as we mean that they do, then, construed in the new way, ‘p’ is true and not false. 4.0621 But it is important that the signs ‘p’ and ‘Pp’ can say the same thing. For it shows that nothing in reality corresponds to the sign ‘P’. The occurrence of negation in a proposition is not enough to characterize its sense (PPp = p). The propositions ‘p’ and ‘Pp’ have opposite sense, but there corresponds to them one and the same reality. 4.063 An analogy to illustrate the concept of truth: imagine a black spot on white paper: you can describe the shape of the spot by saying, for each point on the sheet, whether it is black or white. To the fact that a point is black there corresponds a positive fact, and to the fact that a point is white (not black), a negative fact. If I designate a point on the sheet (a truth-value according to Frege), then this corresponds to the supposition that is put forward for judgement, etc. etc. But in order to be able to say that a point is black or white, I must first know when a point is called black, and when white: in order to be able to say,'”p” is true (or false)’, I must have determined in what circumstances I call ‘p’ true, and in so doing I determine the sense of the proposition. 
Now the point where the simile breaks down is this: we can indicate a point on the paper even if we do not know what black and white are, but if a proposition has no sense, nothing corresponds to it, since it does not designate a thing (a truth-value) which might have properties called ‘false’ or ‘true’. The verb of a proposition is not ‘is true’ or ‘is false’, as Frege thought: rather, that which ‘is true’ must already contain the verb. 4.064 Every proposition must already have a sense: it cannot be given a sense by affirmation. Indeed its sense is just what is affirmed. And the same applies to negation, etc. 4.0641 One could say that negation must be related to the logical place determined by the negated proposition. The negating proposition determines a logical place different from that of the negated proposition. The negating proposition determines a logical place with the help of the logical place of the negated proposition. For it describes it as lying outside the latter’s logical place. The negated proposition can be negated again, and this in itself shows that what is negated is already a proposition, and not merely something that is preliminary to a proposition. 4.1 Propositions represent the existence and non-existence of states of affairs. 4.11 The totality of true propositions is the whole of natural science (or the whole corpus of the natural sciences). 4.111 Philosophy is not one of the natural sciences. (The word ‘philosophy’ must mean something whose place is above or below the natural sciences, not beside them.) 4.112 Philosophy aims at the logical clarification of thoughts. Philosophy is not a body of doctrine but an activity. A philosophical work consists essentially of elucidations. Philosophy does not result in ‘philosophical propositions’, but rather in the clarification of propositions. Without philosophy thoughts are, as it were, cloudy and indistinct: its task is to make them clear and to give them sharp boundaries. 4.1121 Psychology is no more closely related to philosophy than any other natural science. Theory of knowledge is the philosophy of psychology. Does not my study of sign-language correspond to the study of thought-processes, which philosophers used to consider so essential to the philosophy of logic?  Only in most cases they got entangled in unessential psychological investigations, and with my method too there is an analogous risk. 4.1122 Darwin’s theory has no more to do with philosophy than any other hypothesis in natural science. 4.113 Philosophy sets limits to the much disputed sphere of natural science. 4.114 It must set limits to what can be thought; and, in doing so, to what cannot be thought. It must set limits to what cannot be thought by working outwards through what can be thought. 4.115 It will signify what cannot be said, by presenting clearly what can be said. 4.116 Everything that can be thought at all can be thought clearly. Everything that can be put into words can be put clearly. 4.12 Propositions can represent the whole of reality, but they cannot represent what they must have in common with reality in order to be able to represent it? logical form. In order to be able to represent logical form, we should have to be able to station ourselves with propositions somewhere outside logic, that is to say outside the world. 4.121 Propositions cannot represent logical form: it is mirrored in them. What finds its reflection in language, language cannot represent. What expresses itself in language, we cannot express by means of language. 
Propositions show the logical form of reality. They display it.
4.1211 Thus one proposition ‘fa’ shows that the object a occurs in its sense, two propositions ‘fa’ and ‘ga’ show that the same object is mentioned in both of them. If two propositions contradict one another, then their structure shows it; the same is true if one of them follows from the other. And so on.
4.1212 What can be shown, cannot be said.
4.1213 Now, too, we understand our feeling that once we have a sign-language in which everything is all right, we already have a correct logical point of view.
4.122 In a certain sense we can talk about formal properties of objects and states of affairs, or, in the case of facts, about structural properties: and in the same sense about formal relations and structural relations. (Instead of ‘structural property’ I also say ‘internal property’; instead of ‘structural relation’, ‘internal relation’. I introduce these expressions in order to indicate the source of the confusion between internal relations and relations proper (external relations), which is very widespread among philosophers.) It is impossible, however, to assert by means of propositions that such internal properties and relations obtain: rather, this makes itself manifest in the propositions that represent the relevant states of affairs and are concerned with the relevant objects.
4.1221 An internal property of a fact can also be called a feature of that fact (in the sense in which we speak of facial features, for example).
4.123 A property is internal if it is unthinkable that its object should not possess it. (This shade of blue and that one stand, eo ipso, in the internal relation of lighter to darker. It is unthinkable that these two objects should not stand in this relation.) (Here the shifting use of the word ‘object’ corresponds to the shifting use of the words ‘property’ and ‘relation’.)
4.124 The existence of an internal property of a possible situation is not expressed by means of a proposition: rather, it expresses itself in the proposition representing the situation, by means of an internal property of that proposition. It would be just as nonsensical to assert that a proposition had a formal property as to deny it.
4.1241 It is impossible to distinguish forms from one another by saying that one has this property and another that property: for this presupposes that it makes sense to ascribe either property to either form.
4.125 The existence of an internal relation between possible situations expresses itself in language by means of an internal relation between the propositions representing them.
4.1251 Here we have the answer to the vexed question ‘whether all relations are internal or external’.
4.1252 I call a series that is ordered by an internal relation a series of forms. The order of the number-series is not governed by an external relation but by an internal relation. The same is true of the series of propositions ‘aRb’, ‘(dx): aRx. xRb’, ‘(dx, y): aRx. xRy. yRb’, and so forth. (If b stands in one of these relations to a, I call b a successor of a.)
4.126 We can now talk about formal concepts, in the same sense that we speak of formal properties. (I introduce this expression in order to exhibit the source of the confusion between formal concepts and concepts proper, which pervades the whole of traditional logic.) When something falls under a formal concept as one of its objects, this cannot be expressed by means of a proposition. Instead it is shown in the very sign for this object.
(A name shows that it signifies an object, a sign for a number that it signifies a number, etc.) Formal concepts cannot, in fact, be represented by means of a function, as concepts proper can. For their characteristics, formal properties, are not expressed by means of functions. The expression for a formal property is a feature of certain symbols. So the sign for the characteristics of a formal concept is a distinctive feature of all symbols whose meanings fall under the concept. So the expression for a formal concept is a propositional variable in which this distinctive feature alone is constant.
4.127 The propositional variable signifies the formal concept, and its values signify the objects that fall under the concept.
4.1271 Every variable is the sign for a formal concept. For every variable represents a constant form that all its values possess, and this can be regarded as a formal property of those values.
4.1272 Thus the variable name ‘x’ is the proper sign for the pseudo-concept object. Wherever the word ‘object’ (‘thing’, etc.) is correctly used, it is expressed in conceptual notation by a variable name. For example, in the proposition, ‘There are 2 objects which…’, it is expressed by ‘(dx, y)…’. Wherever it is used in a different way, that is as a proper concept-word, nonsensical pseudo-propositions are the result. So one cannot say, for example, ‘There are objects’, as one might say, ‘There are books’. And it is just as impossible to say, ‘There are 100 objects’, or, ‘There are ℵ0 objects’. And it is nonsensical to speak of the total number of objects. The same applies to the words ‘complex’, ‘fact’, ‘function’, ‘number’, etc. They all signify formal concepts, and are represented in conceptual notation by variables, not by functions or classes (as Frege and Russell believed). ‘1 is a number’, ‘There is only one zero’, and all similar expressions are nonsensical. (It is just as nonsensical to say, ‘There is only one 1’, as it would be to say, ‘2 + 2 at 3 o’clock equals 4’.)
4.12721 A formal concept is given immediately any object falling under it is given. It is not possible, therefore, to introduce as primitive ideas objects belonging to a formal concept and the formal concept itself. So it is impossible, for example, to introduce as primitive ideas both the concept of a function and specific functions, as Russell does; or the concept of a number and particular numbers.
4.1273 If we want to express in conceptual notation the general proposition, ‘b is a successor of a’, then we require an expression for the general term of the series of forms ‘aRb’, ‘(dx): aRx. xRb’, ‘(dx, y): aRx. xRy. yRb’, …. In order to express the general term of a series of forms, we must use a variable, because the concept ‘term of that series of forms’ is a formal concept. (This is what Frege and Russell overlooked: consequently the way in which they want to express general propositions like the one above is incorrect; it contains a vicious circle.) We can determine the general term of a series of forms by giving its first term and the general form of the operation that produces the next term out of the proposition that precedes it.
4.1274 To ask whether a formal concept exists is nonsensical. For no proposition can be the answer to such a question. (So, for example, the question, ‘Are there unanalysable subject-predicate propositions?’ cannot be asked.)
4.128 Logical forms are without number.
Hence there are no pre-eminent numbers in logic, and hence there is no possibility of philosophical monism or dualism, etc.
4.2 The sense of a proposition is its agreement and disagreement with possibilities of existence and non-existence of states of affairs.
4.21 The simplest kind of proposition, an elementary proposition, asserts the existence of a state of affairs.
4.211 It is a sign of a proposition’s being elementary that there can be no elementary proposition contradicting it.
4.22 An elementary proposition consists of names. It is a nexus, a concatenation, of names.
4.221 It is obvious that the analysis of propositions must bring us to elementary propositions which consist of names in immediate combination. This raises the question how such combination into propositions comes about.
4.2211 Even if the world is infinitely complex, so that every fact consists of infinitely many states of affairs and every state of affairs is composed of infinitely many objects, there would still have to be objects and states of affairs.
4.23 It is only in the nexus of an elementary proposition that a name occurs in a proposition.
4.24 Names are the simple symbols: I indicate them by single letters (‘x’, ‘y’, ‘z’). I write elementary propositions as functions of names, so that they have the form ‘fx’, ‘O (x,y)’, etc. Or I indicate them by the letters ‘p’, ‘q’, ‘r’.
4.241 When I use two signs with one and the same meaning, I express this by putting the sign ‘=’ between them. So ‘a = b’ means that the sign ‘b’ can be substituted for the sign ‘a’. (If I use an equation to introduce a new sign ‘b’, laying down that it shall serve as a substitute for a sign ‘a’ that is already known, then, like Russell, I write the equation -- definition -- in the form ‘a = b Def.’ A definition is a rule dealing with signs.)
4.242 Expressions of the form ‘a = b’ are, therefore, mere representational devices. They state nothing about the meaning of the signs ‘a’ and ‘b’.
4.243 Can we understand two names without knowing whether they signify the same thing or two different things? -- Can we understand a proposition in which two names occur without knowing whether their meaning is the same or different? Suppose I know the meaning of an English word and of a German word that means the same: then it is impossible for me to be unaware that they do mean the same; I must be capable of translating each into the other. Expressions like ‘a = a’, and those derived from them, are neither elementary propositions nor is there any other way in which they have sense. (This will become evident later.)
4.25 If an elementary proposition is true, the state of affairs exists: if an elementary proposition is false, the state of affairs does not exist.
4.26 If all true elementary propositions are given, the result is a complete description of the world. The world is completely described by giving all elementary propositions, and adding which of them are true and which false.
4.27 For n states of affairs, there are $K_n = \sum_{\nu=0}^{n} \binom{n}{\nu}$ possibilities of existence and non-existence. Of these states of affairs any combination can exist and the remainder not exist.
4.28 There correspond to these combinations the same number of possibilities of truth -- and falsity -- for n elementary propositions.
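As a worked gloss on 4.27 and 4.28 (an editorial illustration, not part of the text): counting the combinations of n states of affairs that may obtain gives

```latex
K_n = \sum_{\nu=0}^{n} \binom{n}{\nu} = 2^{n},
\qquad \text{e.g. } K_2 = 1 + 2 + 1 = 4 .
```

So two states of affairs admit four combinations (both obtain, only the first, only the second, neither), and by 4.28 the same four truth-possibilities belong to two elementary propositions.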
4.3 Truth-possibilities of elementary propositions mean possibilities of existence and non-existence of states of affairs.
4.31 We can represent truth-possibilities by schemata of the following kind (‘T’ means ‘true’, ‘F’ means ‘false’; the rows of ‘T’s’ and ‘F’s’ under the row of elementary propositions symbolize their truth-possibilities in a way that can easily be understood):
4.4 A proposition is an expression of agreement and disagreement with truth-possibilities of elementary propositions.
4.41 Truth-possibilities of elementary propositions are the conditions of the truth and falsity of propositions.
4.411 It immediately strikes one as probable that the introduction of elementary propositions provides the basis for understanding all other kinds of proposition. Indeed the understanding of general propositions palpably depends on the understanding of elementary propositions.
4.42 For n elementary propositions there are $L_n = \sum_{\kappa=0}^{K_n} \binom{K_n}{\kappa}$ ways in which a proposition can agree and disagree with their truth-possibilities.
4.43 We can express agreement with truth-possibilities by correlating the mark ‘T’ (true) with them in the schema. The absence of this mark means disagreement.
4.431 The expression of agreement and disagreement with the truth-possibilities of elementary propositions expresses the truth-conditions of a proposition. A proposition is the expression of its truth-conditions. (Thus Frege was quite right to use them as a starting point when he explained the signs of his conceptual notation. But the explanation of the concept of truth that Frege gives is mistaken: if ‘the true’ and ‘the false’ were really objects, and were the arguments in Pp etc., then Frege’s method of determining the sense of ‘Pp’ would leave it absolutely undetermined.)
4.44 The sign that results from correlating the mark ‘T’ with truth-possibilities is a propositional sign.
4.441 It is clear that a complex of the signs ‘F’ and ‘T’ has no object (or complex of objects) corresponding to it, just as there is none corresponding to the horizontal and vertical lines or to the brackets. -- There are no ‘logical objects’. Of course the same applies to all signs that express what the schemata of ‘T’s’ and ‘F’s’ express.
4.442 For example, the following is a propositional sign: (Frege’s ‘judgement stroke’ ‘|-’ is logically quite meaningless: in the works of Frege (and Russell) it simply indicates that these authors hold the propositions marked with this sign to be true. Thus ‘|-’ is no more a component part of a proposition than is, for instance, the proposition’s number. It is quite impossible for a proposition to state that it itself is true.) If the order of the truth-possibilities in a schema is fixed once and for all by a combinatory rule, then the last column by itself will be an expression of the truth-conditions. If we now write this column as a row, the propositional sign will become ‘(TT-T) (p,q)’ or more explicitly ‘(TTFT) (p,q)’. (The number of places in the left-hand pair of brackets is determined by the number of terms in the right-hand pair.)
4.45 For n elementary propositions there are Ln possible groups of truth-conditions. The groups of truth-conditions that are obtainable from the truth-possibilities of a given number of elementary propositions can be arranged in a series.
4.46 Among the possible groups of truth-conditions there are two extreme cases. In one of these cases the proposition is true for all the truth-possibilities of the elementary propositions. We say that the truth-conditions are tautological.
In the second case the proposition is false for all the truth-possibilities: the truth-conditions are contradictory. In the first case we call the proposition a tautology; in the second, a contradiction. 4.461 Propositions show what they say; tautologies and contradictions show that they say nothing. A tautology has no truth-conditions, since it is unconditionally true: and a contradiction is true on no condition. Tautologies and contradictions lack sense. (Like a point from which two arrows go out in opposite directions to one another.) (For example, I know nothing about the weather when I know that it is either raining or not raining.) 4.4611 Tautologies and contradictions are not, however, nonsensical. They are part of the symbolism, much as ‘0’ is part of the symbolism of arithmetic. 4.462 Tautologies and contradictions are not pictures of reality. They do not represent any possible situations. For the former admit all possible situations, and latter none. In a tautology the conditions of agreement with the world? the representational relations? cancel one another, so that it does not stand in any representational relation to reality. 4.463 The truth-conditions of a proposition determine the range that it leaves open to the facts. (A proposition, a picture, or a model is, in the negative sense, like a solid body that restricts the freedom of movement of others, and in the positive sense, like a space bounded by solid substance in which there is room for a body.) A tautology leaves open to reality the whole? the infinite whole? of logical space: a contradiction fills the whole of logical space leaving no point of it for reality. Thus neither of them can determine reality in any way. 4.464 A tautology’s truth is certain, a proposition’s possible, a contradiction’s impossible. (Certain, possible, impossible: here we have the first indication of the scale that we need in the theory of probability.) 4.465 The logical product of a tautology and a proposition says the same thing as the proposition. This product, therefore, is identical with the proposition. For it is impossible to alter what is essential to a symbol without altering its sense. 4.466 What corresponds to a determinate logical combination of signs is a determinate logical combination of their meanings. It is only to the uncombined signs that absolutely any combination corresponds. In other words, propositions that are true for every situation cannot be combinations of signs at all, since, if they were, only determinate combinations of objects could correspond to them. (And what is not a logical combination has no combination of objects corresponding to it.) Tautology and contradiction are the limiting cases? indeed the disintegration? of the combination of signs. 4.4661 Admittedly the signs are still combined with one another even in tautologies and contradictions? i.e. they stand in certain relations to one another: but these relations have no meaning, they are not essential to the symbol. 4.5 It now seems possible to give the most general propositional form: that is, to give a description of the propositions of any sign-language whatsoever in such a way that every possible sense can be expressed by a symbol satisfying the description, and every symbol satisfying the description can express a sense, provided that the meanings of the names are suitably chosen. It is clear that only what is essential to the most general propositional form may be included in its description? for otherwise it would not be the most general form. 
The existence of a general propositional form is proved by the fact that there cannot be a proposition whose form could not have been foreseen (i.e. constructed). The general form of a proposition is: This is how things stand.
4.51 Suppose that I am given all elementary propositions: then I can simply ask what propositions I can construct out of them. And there I have all propositions, and that fixes their limits.
4.52 Propositions comprise all that follows from the totality of all elementary propositions (and, of course, from its being the totality of them all). (Thus, in a certain sense, it could be said that all propositions were generalizations of elementary propositions.)
4.53 The general propositional form is a variable.
5. A proposition is a truth-function of elementary propositions. (An elementary proposition is a truth-function of itself.)
5.01 Elementary propositions are the truth-arguments of propositions.
5.02 The arguments of functions are readily confused with the affixes of names. For both arguments and affixes enable me to recognize the meaning of the signs containing them. For example, when Russell writes ‘+c’, the ‘c’ is an affix which indicates that the sign as a whole is the addition-sign for cardinal numbers. But the use of this sign is the result of arbitrary convention and it would be quite possible to choose a simple sign instead of ‘+c’; in ‘Pp’, however, ‘p’ is not an affix but an argument: the sense of ‘Pp’ cannot be understood unless the sense of ‘p’ has been understood already. (In the name Julius Caesar ‘Julius’ is an affix. An affix is always part of a description of the object to whose name we attach it: e.g. the Caesar of the Julian gens.) If I am not mistaken, Frege’s theory about the meaning of propositions and functions is based on the confusion between an argument and an affix. Frege regarded the propositions of logic as names, and their arguments as the affixes of those names.
5.1 Truth-functions can be arranged in series. That is the foundation of the theory of probability.
5.101 The truth-functions of a given number of elementary propositions can always be set out in a schema of the following kind:
(TTTT) (p, q) Tautology (If p then p, and if q then q.) (p z p. q z q)
(FTTT) (p, q) In words: Not both p and q. (P(p. q))
(TFTT) (p, q) " : If q then p. (q z p)
(TTFT) (p, q) " : If p then q. (p z q)
(TTTF) (p, q) " : p or q. (p C q)
(FFTT) (p, q) " : Not q. (Pq)
(FTFT) (p, q) " : Not p. (Pp)
(FTTF) (p, q) " : p or q, but not both. (p. Pq: C: q. Pp)
(TFFT) (p, q) " : If p then q, and if q then p. (p + q)
(TFTF) (p, q) " : p
(TTFF) (p, q) " : q
(FFFT) (p, q) " : Neither p nor q. (Pp. Pq or p | q)
(FFTF) (p, q) " : p and not q. (p. Pq)
(FTFF) (p, q) " : q and not p. (q. Pp)
(TFFF) (p, q) " : q and p. (q. p)
(FFFF) (p, q) Contradiction (p and not p, and q and not q.) (p. Pp. q. Pq)
I will give the name truth-grounds of a proposition to those truth-possibilities of its truth-arguments that make it true.
5.11 If all the truth-grounds that are common to a number of propositions are at the same time truth-grounds of a certain proposition, then we say that the truth of that proposition follows from the truth of the others.
5.12 In particular, the truth of a proposition ‘p’ follows from the truth of another proposition ‘q’ if all the truth-grounds of the latter are truth-grounds of the former.
5.121 The truth-grounds of the one are contained in those of the other: p follows from q.
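A brief sketch (an editorial illustration; names such as `truth_grounds` and `follows_from` are invented for it, not Wittgenstein's) of the schema in 5.101 and the criterion in 5.11 and 5.12: a truth-function of p and q is fixed by its column of ‘T’s and ‘F’s over the four truth-possibilities, and one proposition follows from another when every truth-ground of the second is a truth-ground of the first.

```python
from itertools import product

# The four truth-possibilities of the elementary propositions p and q (4.28).
POSSIBILITIES = [dict(p=p, q=q) for p, q in product([True, False], repeat=2)]

# Each truth-function of p and q is fixed by a column of 'T's and 'F's over these
# possibilities; there are 2**4 = 16 such columns, as listed in 5.101.
COLUMNS = list(product("TF", repeat=4))
assert len(COLUMNS) == 16

def truth_grounds(proposition):
    """Truth-possibilities of the truth-arguments that make the proposition true (5.101)."""
    return [w for w in POSSIBILITIES if proposition(w)]

def follows_from(r, s):
    """The truth of r follows from the truth of s when all truth-grounds of s
    are truth-grounds of r (5.11, 5.12)."""
    return all(w in truth_grounds(r) for w in truth_grounds(s))

p_or_q = lambda w: w["p"] or w["q"]
p_and_q = lambda w: w["p"] and w["q"]
print(follows_from(p_or_q, p_and_q))   # True: 'p or q' follows from 'p and q'
print(follows_from(p_and_q, p_or_q))   # False: the converse does not hold
```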
5.122 If p follows from q, the sense of ‘p’ is contained in the sense of ‘q’.
5.123 If a god creates a world in which certain propositions are true, then by that very act he also creates a world in which all the propositions that follow from them come true. And similarly he could not create a world in which the proposition ‘p’ was true without creating all its objects.
5.124 A proposition affirms every proposition that follows from it.
5.1241 ‘p. q’ is one of the propositions that affirm ‘p’ and at the same time one of the propositions that affirm ‘q’. Two propositions are opposed to one another if there is no proposition with a sense, that affirms them both. Every proposition that contradicts another negates it.
5.13 When the truth of one proposition follows from the truth of others, we can see this from the structure of the propositions.
5.131 If the truth of one proposition follows from the truth of others, this finds expression in relations in which the forms of the propositions stand to one another: nor is it necessary for us to set up these relations between them, by combining them with one another in a single proposition; on the contrary, the relations are internal, and their existence is an immediate result of the existence of the propositions.
5.1311 When we infer q from p C q and Pp, the relation between the propositional forms of ‘p C q’ and ‘Pp’ is masked, in this case, by our mode of signifying. But if instead of ‘p C q’ we write, for example, ‘p|q. |. p|q’, and instead of ‘Pp’, ‘p|p’ (p|q = neither p nor q), then the inner connexion becomes obvious. (The possibility of inference from (x). fx to fa shows that the symbol (x). fx itself has generality in it.)
5.132 If p follows from q, I can make an inference from q to p, deduce p from q. The nature of the inference can be gathered only from the two propositions. They themselves are the only possible justification of the inference. ‘Laws of inference’, which are supposed to justify inferences, as in the works of Frege and Russell, have no sense, and would be superfluous.
5.133 All deductions are made a priori.
5.134 One elementary proposition cannot be deduced from another.
5.135 There is no possible way of making an inference from the existence of one situation to the existence of another, entirely different situation.
5.136 There is no causal nexus to justify such an inference.
5.1361 We cannot infer the events of the future from those of the present. Belief in the causal nexus is superstition.
5.1362 The freedom of the will consists in the impossibility of knowing actions that still lie in the future. We could know them only if causality were an inner necessity like that of logical inference. -- The connexion between knowledge and what is known is that of logical necessity. (‘A knows that p is the case’, has no sense if p is a tautology.)
5.1363 If the truth of a proposition does not follow from the fact that it is self-evident to us, then its self-evidence in no way justifies our belief in its truth.
5.14 If one proposition follows from another, then the latter says more than the former, and the former less than the latter.
5.141 If p follows from q and q from p, then they are one and the same proposition.
5.142 A tautology follows from all propositions: it says nothing.
5.143 Contradiction is that common factor of propositions which no proposition has in common with another. Tautology is the common factor of all propositions that have nothing in common with one another.
Contradiction, one might say, vanishes outside all propositions: tautology vanishes inside them. Contradiction is the outer limit of propositions: tautology is the unsubstantial point at their centre. 5.15 If Tr is the number of the truth-grounds of a proposition ‘r’, and if Trs is the number of the truth-grounds of a proposition ‘s’ that are at the same time truth-grounds of ‘r’, then we call the ratio Trs: Tr the degree of probability that the proposition ‘r’ gives to the proposition ‘s’. 5.151 In a schema like the one above in 5.101, let Tr be the number of ‘T’s’ in the proposition r, and let Trs, be the number of ‘T’s’ in the proposition s that stand in columns in which the proposition r has ‘T’s’. Then the proposition r gives to the proposition s the probability Trs: Tr. 5.1511 There is no special object peculiar to probability propositions. 5.152 When propositions have no truth-arguments in common with one another, we call them independent of one another. Two elementary propositions give one another the probability 1/ 2. If p follows from q, then the proposition ‘q’ gives to the proposition ‘p’ the probability 1. The certainty of logical inference is a limiting case of probability. (Application of this to tautology and contradiction.) 5.153 In itself, a proposition is neither probable nor improbable. Either an event occurs or it does not: there is no middle way. 5.154 Suppose that an urn contains black and white balls in equal numbers (and none of any other kind). I draw one ball after another, putting them back into the urn. By this experiment I can establish that the number of black balls drawn and the number of white balls drawn approximate to one another as the draw continues. So this is not a mathematical truth. Now, if I say, ‘The probability of my drawing a white ball is equal to the probability of my drawing a black one’, this means that all the circumstances that I know of (including the laws of nature assumed as hypotheses) give no more probability to the occurrence of the one event than to that of the other. That is to say, they give each the probability 1/2 as can easily be gathered from the above definitions. What I confirm by the experiment is that the occurrence of the two events is independent of the circumstances of which I have no more detailed knowledge. 5.155 The minimal unit for a probability proposition is this: The circumstances? of which I have no further knowledge? give such and such a degree of probability to the occurrence of a particular event. 5.156 It is in this way that probability is a generalization. It involves a general description of a propositional form. We use probability only in default of certainty? if our knowledge of a fact is not indeed complete, but we do know something about its form. (A proposition may well be an incomplete picture of a certain situation, but it is always a complete picture of something.) A probability proposition is a sort of excerpt from other propositions. 5.2 The structures of propositions stand in internal relations to one another. 5.21 In order to give prominence to these internal relations we can adopt the following mode of expression: we can represent a proposition as the result of an operation that produces it out of other propositions (which are the bases of the operation). 5.22 An operation is the expression of a relation between the structures of its result and of its bases. 5.23 The operation is what has to be done to the one proposition in order to make the other out of it. 
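The ratio defined in 5.15 and applied in 5.151 and 5.152 above can be computed mechanically. The sketch below is an editorial illustration (the names `probability_given`, `p_and_q`, etc. are invented for it), counting truth-grounds for two elementary propositions.

```python
from itertools import product
from fractions import Fraction

# Truth-possibilities of the elementary propositions p and q.
POSSIBILITIES = [dict(p=p, q=q) for p, q in product([True, False], repeat=2)]

def probability_given(r, s):
    """The probability that proposition r gives to proposition s (5.15, 5.151):
    the ratio Trs : Tr, where Tr counts the truth-grounds of r and Trs counts
    those truth-grounds of r that are also truth-grounds of s."""
    tr = [w for w in POSSIBILITIES if r(w)]
    trs = [w for w in tr if s(w)]
    return Fraction(len(trs), len(tr))

p = lambda w: w["p"]
q = lambda w: w["q"]
p_and_q = lambda w: w["p"] and w["q"]

print(probability_given(p, q))        # 1/2: two elementary propositions give one another probability 1/2 (5.152)
print(probability_given(p_and_q, p))  # 1: p follows from 'p and q', so 'p and q' gives 'p' probability 1 (5.152)
```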
5.231 And that will, of course, depend on their formal properties, on the internal similarity of their forms. 5.232 The internal relation by which a series is ordered is equivalent to the operation that produces one term from another. 5.233 Operations cannot make their appearance before the point at which one proposition is generated out of another in a logically meaningful way; i.e. the point at which the logical construction of propositions begins. 5.234 Truth-functions of elementary propositions are results of operations with elementary propositions as bases. (These operations I call truth-operations.) 5.2341 The sense of a truth-function of p is a function of the sense of p. Negation, logical addition, logical multiplication, etc. etc. are operations. (Negation reverses the sense of a proposition.) 5.24 An operation manifests itself in a variable; it shows how we can get from one form of proposition to another. It gives expression to the difference between the forms. (And what the bases of an operation and its result have in common is just the bases themselves.) 5.241 An operation is not the mark of a form, but only of a difference between forms. 5.242 The operation that produces ‘q’ from ‘p’ also produces ‘r’ from ‘q’, and so on. There is only one way of expressing this: ‘p’, ‘q’, ‘r’, etc. have to be variables that give expression in a general way to certain formal relations. 5.25 The occurrence of an operation does not characterize the sense of a proposition. Indeed, no statement is made by an operation, but only by its result, and this depends on the bases of the operation. (Operations and functions must not be confused with each other.) 5.251 A function cannot be its own argument, whereas an operation can take one of its own results as its base. 5.252 It is only in this way that the step from one term of a series of forms to another is possible (from one type to another in the hierarchies of Russell and Whitehead). (Russell and Whitehead did not admit the possibility of such steps, but repeatedly availed themselves of it.) 5.2521 If an operation is applied repeatedly to its own results, I speak of successive applications of it. (‘O’O’O’a’ is the result of three successive applications of the operation ‘O’E’ to ‘a’.) In a similar sense I speak of successive applications of more than one operation to a number of propositions. 5.2522 Accordingly I use the sign ‘[a, x, O’x]’ for the general term of the series of forms a, O’a, O’O’a,…. This bracketed expression is a variable: the first term of the bracketed expression is the beginning of the series of forms, the second is the form of a term x arbitrarily selected from the series, and the third is the form of the term that immediately follows x in the series. 5.2523 The concept of successive applications of an operation is equivalent to the concept ‘and so on’. 5.253 One operation can counteract the effect of another. Operations can cancel one another. 5.254 An operation can vanish (e.g. negation in ‘PPp’: PPp = p). 5.3 All propositions are results of truth-operations on elementary propositions. A truth-operation is the way in which a truth-function is produced out of elementary propositions. It is of the essence of truth-operations that, just as elementary propositions yield a truth-function of themselves, so too in the same way truth-functions yield a further truth-function. 
When a truth-operation is applied to truth-functions of elementary propositions, it always generates another truth-function of elementary propositions, another proposition. When a truth-operation is applied to the results of truth-operations on elementary propositions, there is always a single operation on elementary propositions that has the same result. Every proposition is the result of truth-operations on elementary propositions. 5.31 The schemata in 4.31 have a meaning even when ‘p’, ‘q’, ‘r’, etc. are not elementary propositions. And it is easy to see that the propositional sign in 4.442 expresses a single truth-function of elementary propositions even when ‘p’ and ‘q’ are truth-functions of elementary propositions. 5.32 All truth-functions are results of successive applications to elementary propositions of a finite number of truth-operations. 5.4 At this point it becomes manifest that there are no ‘logical objects’ or ‘logical constants’ (in Frege’s and Russell’s sense). 5.41 The reason is that the results of truth-operations on truth-functions are always identical whenever they are one and the same truth-function of elementary propositions. 5.42 It is self-evident that C, z, etc. are not relations in the sense in which right and left etc. are relations. The interdefinability of Frege’s and Russell’s ‘primitive signs’ of logic is enough to show that they are not primitive signs, still less signs for relations. And it is obvious that the ‘z’ defined by means of ‘P’ and ‘C’ is identical with the one that figures with ‘P’ in the definition of ‘C’; and that the second ‘C’ is identical with the first one; and so on. 5.43 Even at first sight it seems scarcely credible that there should follow from one fact p infinitely many others, namely PPp, PPPPp, etc. And it is no less remarkable that the infinite number of propositions of logic (mathematics) follow from half a dozen ‘primitive propositions’. But in fact all the propositions of logic say the same thing, to wit nothing. 5.44 Truth-functions are not material functions. For example, an affirmation can be produced by double negation: in such a case does it follow that in some sense negation is contained in affirmation?  Does ‘PPp’ negate Pp, or does it affirm p? or both?  The proposition ‘PPp’ is not about negation, as if negation were an object: on the other hand, the possibility of negation is already written into affirmation. And if there were an object called ‘P’, it would follow that ‘PPp’ said something different from what ‘p’ said, just because the one proposition would then be about P and the other would not. 5.441 This vanishing of the apparent logical constants also occurs in the case of ‘P(dx). Pfx’, which says the same as ‘(x). fx’, and in the case of ‘(dx). fx. x = a’, which says the same as ‘fa’. 5.442 If we are given a proposition, then with it we are also given the results of all truth-operations that have it as their base. 5.45 If there are primitive logical signs, then any logic that fails to show clearly how they are placed relatively to one another and to justify their existence will be incorrect. The construction of logic out of its primitive signs must be made clear. 5.451 If logic has primitive ideas, they must be independent of one another. If a primitive idea has been introduced, it must have been introduced in all the combinations in which it ever occurs. It cannot, therefore, be introduced first for one combination and later reintroduced for another. 
For example, once negation has been introduced, we must understand it both in propositions of the form ‘Pp’ and in propositions like ‘P(p C q)’, ‘(dx). Pfx’, etc. We must not introduce it first for the one class of cases and then for the other, since it would then be left in doubt whether its meaning were the same in both cases, and no reason would have been given for combining the signs in the same way in both cases. (In short, Frege’s remarks about introducing signs by means of definitions (in The Fundamental Laws of Arithmetic ) also apply, mutatis mutandis, to the introduction of primitive signs.) 5.452 The introduction of any new device into the symbolism of logic is necessarily a momentous event. In logic a new device should not be introduced in brackets or in a footnote with what one might call a completely innocent air. (Thus in Russell and Whitehead’s Principia Mathematica there occur definitions and primitive propositions expressed in words. Why this sudden appearance of words?  It would require a justification, but none is given, or could be given, since the procedure is in fact illicit.) But if the introduction of a new device has proved necessary at a certain point, we must immediately ask ourselves, ‘At what points is the employment of this device now unavoidable? ‘ and its place in logic must be made clear. 5.453 All numbers in logic stand in need of justification. Or rather, it must become evident that there are no numbers in logic. There are no pre-eminent numbers. 5.454 In logic there is no co-ordinate status, and there can be no classification. In logic there can be no distinction between the general and the specific. 5.4541 The solutions of the problems of logic must be simple, since they set the standard of simplicity. Men have always had a presentiment that there must be a realm in which the answers to questions are symmetrically combined? a priori? to form a self-contained system. A realm subject to the law: Simplex sigillum veri. 5.46 If we introduced logical signs properly, then we should also have introduced at the same time the sense of all combinations of them; i.e. not only ‘p C q’ but ‘P(p C q)’ as well, etc. etc. We should also have introduced at the same time the effect of all possible combinations of brackets. And thus it would have been made clear that the real general primitive signs are not ‘p C q’, ‘(dx). fx’, etc. but the most general form of their combinations. 5.461 Though it seems unimportant, it is in fact significant that the pseudo-relations of logic, such as C and z, need brackets? unlike real relations. Indeed, the use of brackets with these apparently primitive signs is itself an indication that they are not primitive signs. And surely no one is going to believe brackets have an independent meaning. 5.4611 Signs for logical operations are punctuation-marks. 5.47 It is clear that whatever we can say in advance about the form of all propositions, we must be able to say all at once. An elementary proposition really contains all logical operations in itself. For ‘fa’ says the same thing as ‘(dx). fx. x = a’ Wherever there is compositeness, argument and function are present, and where these are present, we already have all the logical constants. One could say that the sole logical constant was what all propositions, by their very nature, had in common with one another. But that is the general propositional form. 5.471 The general propositional form is the essence of a proposition. 
5.4711 To give the essence of a proposition means to give the essence of all description, and thus the essence of the world. 5.472 The description of the most general propositional form is the description of the one and only general primitive sign in logic. 5.473 Logic must look after itself. If a sign is possible, then it is also capable of signifying. Whatever is possible in logic is also permitted. (The reason why ‘Socrates is identical’ means nothing is that there is no property called ‘identical’. The proposition is nonsensical because we have failed to make an arbitrary determination, and not because the symbol, in itself, would be illegitimate.) In a certain sense, we cannot make mistakes in logic. 5.4731 Self-evidence, which Russell talked about so much, can become dispensable in logic, only because language itself prevents every logical mistake.? What makes logic a priori is the impossibility of illogical thought. 5.4732 We cannot give a sign the wrong sense. 5.47321 Occam’s maxim is, of course, not an arbitrary rule, nor one that is justified by its success in practice: its point is that unnecessary units in a sign-language mean nothing. Signs that serve one purpose are logically equivalent, and signs that serve none are logically meaningless. 5.4733 Frege says that any legitimately constructed proposition must have a sense. And I say that any possible proposition is legitimately constructed, and, if it has no sense, that can only be because we have failed to give a meaning to some of its constituents. (Even if we think that we have done so.) Thus the reason why ‘Socrates is identical’ says nothing is that we have not given any adjectival meaning to the word ‘identical’. For when it appears as a sign for identity, it symbolizes in an entirely different way? the signifying relation is a different one? therefore the symbols also are entirely different in the two cases: the two symbols have only the sign in common, and that is an accident. 5.474 The number of fundamental operations that are necessary depends solely on our notation. 5.475 All that is required is that we should construct a system of signs with a particular number of dimensions? with a particular mathematical multiplicity. 5.476 It is clear that this is not a question of a number of primitive ideas that have to be signified, but rather of the expression of a rule. 5.5 Every truth-function is a result of successive applications to elementary propositions of the operation ‘(-? ? T)(E,….)’. This operation negates all the propositions in the right-hand pair of brackets, and I call it the negation of those propositions. 5.50 (empty) 5.501 When a bracketed expression has propositions as its terms? and the order of the terms inside the brackets is indifferent? then I indicate it by a sign of the form ‘(E)’. ‘(E)’ is a variable whose values are terms of the bracketed expression and the bar over the variable indicates that it is the representative of all its values in the brackets. (E.g. if E has the three values P,Q, R, then (E) = (P, Q, R). ) What the values of the variable are is something that is stipulated. The stipulation is a description of the propositions that have the variable as their representative. How the description of the terms of the bracketed expression is produced is not essential. We can distinguish three kinds of description: 1. Direct enumeration, in which case we can simply substitute for the variable the constants that are its values; 2. 
Giving a function fx whose values for all values of x are the propositions to be described; 3. Giving a formal law that governs the construction of the propositions, in which case the bracketed expression has as its members all the terms of a series of forms. 5.502 So instead of ‘(-? ? T)(E,….)’, I write ‘N(E)’. N(E) is the negation of all the values of the propositional variable E. 5.503 It is obvious that we can easily express how propositions may be constructed with this operation, and how they may not be constructed with it; so it must be possible to find an exact expression for this. 5.51 If E has only one value, then N(E) = Pp (not p); if it has two values, then N(E) = Pp. Pq. (neither p nor g). 5.511 How can logic? all-embracing logic, which mirrors the world? use such peculiar crotchets and contrivances?  Only because they are all connected with one another in an infinitely fine network, the great mirror. 5.512 ‘Pp’ is true if ‘p’ is false. Therefore, in the proposition ‘Pp’, when it is true, ‘p’ is a false proposition. How then can the stroke ‘P’ make it agree with reality?  But in ‘Pp’ it is not ‘P’ that negates, it is rather what is common to all the signs of this notation that negate p. That is to say the common rule that governs the construction of ‘Pp’, ‘PPPp’, ‘Pp C Pp’, ‘Pp. Pp’, etc. etc. (ad inf.). And this common factor mirrors negation. 5.513 We might say that what is common to all symbols that affirm both p and q is the proposition ‘p. q’; and that what is common to all symbols that affirm either p or q is the proposition ‘p C q’. And similarly we can say that two propositions are opposed to one another if they have nothing in common with one another, and that every proposition has only one negative, since there is only one proposition that lies completely outside it. Thus in Russell’s notation too it is manifest that ‘q: p C Pp’ says the same thing as ‘q’, that ‘p C Pq’ says nothing. 5.514 Once a notation has been established, there will be in it a rule governing the construction of all propositions that negate p, a rule governing the construction of all propositions that affirm p, and a rule governing the construction of all propositions that affirm p or q; and so on. These rules are equivalent to the symbols; and in them their sense is mirrored. 5.515 It must be manifest in our symbols that it can only be propositions that are combined with one another by ‘C’, ‘.’, etc. And this is indeed the case, since the symbol in ‘p’ and ‘q’ itself presupposes ‘C’, ‘P’, etc. If the sign ‘p’ in ‘p C q’ does not stand for a complex sign, then it cannot have sense by itself: but in that case the signs ‘p C p’, ‘p. p’, etc., which have the same sense as p, must also lack sense. But if ‘p C p’ has no sense, then ‘p C q’ cannot have a sense either. 5.5151 Must the sign of a negative proposition be constructed with that of the positive proposition?  Why should it not be possible to express a negative proposition by means of a negative fact?  (E.g. suppose that “a’ does not stand in a certain relation to ‘b’; then this might be used to say that aRb was not the case.) But really even in this case the negative proposition is constructed by an indirect use of the positive. The positive proposition necessarily presupposes the existence of the negative proposition and vice versa. 5.52 If E has as its values all the values of a function fx for all values of x, then N(E) = P(dx). fx. 5.521 I dissociate the concept all from truth-functions. 
Frege and Russell introduced generality in association with logical product or logical sum. This made it difficult to understand the propositions ‘(dx). fx’ and ‘(x). fx’, in which both ideas are embedded.
5.522 What is peculiar to the generality-sign is first, that it indicates a logical prototype, and secondly, that it gives prominence to constants.
5.523 The generality-sign occurs as an argument.
5.524 If objects are given, then at the same time we are given all objects. If elementary propositions are given, then at the same time all elementary propositions are given.
5.525 It is incorrect to render the proposition ‘(dx). fx’ in the words, ‘fx is possible’, as Russell does. The certainty, possibility, or impossibility of a situation is not expressed by a proposition, but by an expression’s being a tautology, a proposition with a sense, or a contradiction. The precedent to which we are constantly inclined to appeal must reside in the symbol itself.
5.526 We can describe the world completely by means of fully generalized propositions, i.e. without first correlating any name with a particular object.
5.5261 A fully generalized proposition, like every other proposition, is composite. (This is shown by the fact that in ‘(dx, O). Ox’ we have to mention ‘O’ and ‘x’ separately. They both, independently, stand in signifying relations to the world, just as is the case in ungeneralized propositions.) It is a mark of a composite symbol that it has something in common with other symbols.
5.5262 The truth or falsity of every proposition does make some alteration in the general construction of the world. And the range that the totality of elementary propositions leaves open for its construction is exactly the same as that which is delimited by entirely general propositions. (If an elementary proposition is true, that means, at any rate, one more true elementary proposition.)
5.53 Identity of object I express by identity of sign, and not by using a sign for identity. Difference of objects I express by difference of signs.
5.5301 It is self-evident that identity is not a relation between objects. This becomes very clear if one considers, for example, the proposition ‘(x): fx. z. x = a’. What this proposition says is simply that only a satisfies the function f, and not that only things that have a certain relation to a satisfy the function. Of course, it might then be said that only a did have this relation to a; but in order to express that, we should need the identity-sign itself.
5.5302 Russell’s definition of ‘=’ is inadequate, because according to it we cannot say that two objects have all their properties in common. (Even if this proposition is never correct, it still has sense.)
5.5303 Roughly speaking, to say of two things that they are identical is nonsense, and to say of one thing that it is identical with itself is to say nothing at all.
5.531 Thus I do not write ‘f(a, b). a = b’, but ‘f(a, a)’ (or ‘f(b, b)’); and not ‘f(a, b). Pa = b’, but ‘f(a, b)’.
5.532 And analogously I do not write ‘(dx, y). f(x, y). x = y’, but ‘(dx). f(x, x)’; and not ‘(dx, y). f(x, y). Px = y’, but ‘(dx, y). f(x, y)’.
5.5321 Thus, for example, instead of ‘(x): fx z x = a’ we write ‘(dx). fx. z. fa: P(dx, y). fx. fy’. And the proposition, ‘Only one x satisfies f( )’, will read ‘(dx). fx: P(dx, y). fx. fy’.
5.533 The identity-sign, therefore, is not an essential constituent of conceptual notation.
5.534 And now we see that in a correct conceptual notation pseudo-propositions like ‘a = a’, ‘a = b. b = c.
z a = c’, ‘(x). x = x’, ‘(dx). x = a’, etc. cannot even be written down. 5.535 This also disposes of all the problems that were connected with such pseudo-propositions. All the problems that Russell’s ‘axiom of infinity’ brings with it can be solved at this point. What the axiom of infinity is intended to say would express itself in language through the existence of infinitely many names with different meanings. 5.5351 There are certain cases in which one is tempted to use expressions of the form ‘a = a’ or ‘p z p’ and the like. In fact, this happens when one wants to talk about prototypes, e.g. about proposition, thing, etc. Thus in Russell’s Principles of Mathematics ‘p is a proposition’? which is nonsense? was given the symbolic rendering ‘p z p’ and placed as an hypothesis in front of certain propositions in order to exclude from their argument-places everything but propositions. (It is nonsense to place the hypothesis ‘p z p’ in front of a proposition, in order to ensure that its arguments shall have the right form, if only because with a non-proposition as argument the hypothesis becomes not false but nonsensical, and because arguments of the wrong kind make the proposition itself nonsensical, so that it preserves itself from wrong arguments just as well, or as badly, as the hypothesis without sense that was appended for that purpose.) 5.5352 In the same way people have wanted to express, ‘There are no things ‘, by writing ‘P(dx). x = x’. But even if this were a proposition, would it not be equally true if in fact ‘there were things’ but they were not identical with themselves? 5.54 In the general propositional form propositions occur in other propositions only as bases of truth-operations. 5.541 At first sight it looks as if it were also possible for one proposition to occur in another in a different way. Particularly with certain forms of proposition in psychology, such as ‘A believes that p is the case’ and A has the thought p’, etc. For if these are considered superficially, it looks as if the proposition p stood in some kind of relation to an object A. (And in modern theory of knowledge (Russell, Moore, etc.) these propositions have actually been construed in this way.) 5.542 It is clear, however, that ‘A believes that p’, ‘A has the thought p’, and ‘A says p’ are of the form ‘”p” says p’: and this does not involve a correlation of a fact with an object, but rather the correlation of facts by means of the correlation of their objects. 5.5421 This shows too that there is no such thing as the soul? the subject, etc.? as it is conceived in the superficial psychology of the present day. Indeed a composite soul would no longer be a soul. 5.5422 The correct explanation of the form of the proposition, ‘A makes the judgement p’, must show that it is impossible for a judgement to be a piece of nonsense. (Russell’s theory does not satisfy this requirement.) 5.5423 To perceive a complex means to perceive that its constituents are related to one another in such and such a way. This no doubt also explains why there are two possible ways of seeing the figure as a cube; and all similar phenomena. For we really see two different facts. (If I look in the first place at the corners marked a and only glance at the b’s, then the a’s appear to be in front, and vice versa). 5.55 We now have to answer a priori the question about all the possible forms of elementary propositions. Elementary propositions consist of names. 
Since, however, we are unable to give the number of names with different meanings, we are also unable to give the composition of elementary propositions. 5.551 Our fundamental principle is that whenever a question can be decided by logic at all it must be possible to decide it without more ado. (And if we get into a position where we have to look at the world for an answer to such a problem, that shows that we are on a completely wrong track.) 5.552 The ‘experience’ that we need in order to understand logic is not that something or other is the state of things, but that something is: that, however, is not an experience. Logic is prior to every experience? that something is so. It is prior to the question ‘How? ‘ not prior to the question ‘What? ‘ 5.5521 And if this were not so, how could we apply logic?  We might put it in this way: if there would be a logic even if there were no world, how then could there be a logic given that there is a world? 5.553 Russell said that there were simple relations between different numbers of things (individuals). But between what numbers?  And how is this supposed to be decided? ? By experience?  (There is no pre-eminent number.) 5.554 It would be completely arbitrary to give any specific form. 5.5541 It is supposed to be possible to answer a priori the question whether I can get into a position in which I need the sign for a 27-termed relation in order to signify something. 5.5542 But is it really legitimate even to ask such a question?  Can we set up a form of sign without knowing whether anything can correspond to it?  Does it make sense to ask what there must be in order that something can be the case? 5.555 Clearly we have some concept of elementary propositions quite apart from their particular logical forms. But when there is a system by which we can create symbols, the system is what is important for logic and not the individual symbols. And anyway, is it really possible that in logic I should have to deal with forms that I can invent?  What I have to deal with must be that which makes it possible for me to invent them. 5.556 There cannot be a hierarchy of the forms of elementary propositions. We can foresee only what we ourselves construct. 5.5561 Empirical reality is limited by the totality of objects. The limit also makes itself manifest in the totality of elementary propositions. Hierarchies are and must be independent of reality. 5.5562 If we know on purely logical grounds that there must be elementary propositions, then everyone who understands propositions in their C form must know It. 5.5563 In fact, all the propositions of our everyday language, just as they stand, are in perfect logical order.? That utterly simple thing, which we have to formulate here, is not a likeness of the truth, but the truth itself in its entirety. (Our problems are not abstract, but perhaps the most concrete that there are.) 5.557 The application of logic decides what elementary propositions there are. What belongs to its application, logic cannot anticipate. It is clear that logic must not clash with its application. But logic has to be in contact with its application. Therefore logic and its application must not overlap. 5.5571 If I cannot say a priori what elementary propositions there are, then the attempt to do so must lead to obvious nonsense. 5.6 The limits of my language mean the limits of my world. 5.61 Logic pervades the world: the limits of the world are also its limits. 
So we cannot say in logic, ‘The world has this in it, and this, but not that.’ For that would appear to presuppose that we were excluding certain possibilities, and this cannot be the case, since it would require that logic should go beyond the limits of the world; for only in that way could it view those limits from the other side as well. We cannot think what we cannot think; so what we cannot think we cannot say either. 5.62 This remark provides the key to the problem, how much truth there is in solipsism. For what the solipsist means is quite correct; only it cannot be said, but makes itself manifest. The world is my world: this is manifest in the fact that the limits of language (of that language which alone I understand) mean the limits of my world. 5.621 The world and life are one. 5.63 I am my world. (The microcosm.) 5.631 There is no such thing as the subject that thinks or entertains ideas. If I wrote a book called The World as l found it, I should have to include a report on my body, and should have to say which parts were subordinate to my will, and which were not, etc., this being a method of isolating the subject, or rather of showing that in an important sense there is no subject; for it alone could not be mentioned in that book.? 5.632 The subject does not belong to the world: rather, it is a limit of the world. 5.633 Where in the world is a metaphysical subject to be found?  You will say that this is exactly like the case of the eye and the visual field. But really you do not see the eye. And nothing in the visual field allows you to infer that it is seen by an eye. 5.6331 For the form of the visual field is surely not like this 5.634 This is connected with the fact that no part of our experience is at the same time a priori. Whatever we see could be other than it is. Whatever we can describe at all could be other than it is. There is no a priori order of things. 5.64 Here it can be seen that solipsism, when its implications are followed out strictly, coincides with pure realism. The self of solipsism shrinks to a point without extension, and there remains the reality co-ordinated with it. 5.641 Thus there really is a sense in which philosophy can talk about the self in a non-psychological way. What brings the self into philosophy is the fact that ‘the world is my world’. The philosophical self is not the human being, not the human body, or the human soul, with which psychology deals, but rather the metaphysical subject, the limit of the world? not a part of it. 6. The general form of a truth-function is [p, E, N(E)]. This is the general form of a proposition. 6.00 (empty) 6.001 What this says is just that every proposition is a result of successive applications to elementary propositions of the operation N(E) 6.002 If we are given the general form according to which propositions are constructed, then with it we are also given the general form according to which one proposition can be generated out of another by means of an operation. 6.0 (empty) 6.01 Therefore the general form of an operation /'(n) is [E, N(E)]’ (n) ( = [n, E, N(E)]). This is the most general form of transition from one proposition to another. 6.02 And this is how we arrive at numbers. I give the following definitions x = /0x Def., /’/v’x = /v+1’x Def. So, in accordance with these rules, which deal with signs, we write the series x, /’x, /’/’x, /’/’/’x,…, in the following way /0’x, /0+1’x, /0+1+1’x, /0+1+1+1’x,…. Therefore, instead of ‘[x, E, /’E]’, I write ‘[/0’x, /v’x, /v+1’x]’. 
And I give the following definitions 0 + 1 = 1 Def., 0 + 1 + 1 = 2 Def., 0 + 1 + 1 +1 = 3 Def., (and so on). 6.021 A number is the exponent of an operation. 6.022 The concept of number is simply what is common to all numbers, the general form of a number. The concept of number is the variable number. And the concept of numerical equality is the general form of all particular cases of numerical equality. 6.03 The general form of an integer is [0, E, E +1]. 6.031 The theory of classes is completely superfluous in mathematics. This is connected with the fact that the generality required in mathematics is not accidental generality. 6.1 The propositions of logic are tautologies. 6.11 Therefore the propositions of logic say nothing. (They are the analytic propositions.) 6.111 All theories that make a proposition of logic appear to have content are false. One might think, for example, that the words ‘true’ and ‘false’ signified two properties among other properties, and then it would seem to be a remarkable fact that every proposition possessed one of these properties. On this theory it seems to be anything but obvious, just as, for instance, the proposition, ‘All roses are either yellow or red’, would not sound obvious even if it were true. Indeed, the logical proposition acquires all the characteristics of a proposition of natural science and this is the sure sign that it has been construed wrongly. 6.112 The correct explanation of the propositions of logic must assign to them a unique status among all propositions. 6.113 It is the peculiar mark of logical propositions that one can recognize that they are true from the symbol alone, and this fact contains in itself the whole philosophy of logic. And so too it is a very important fact that the truth or falsity of non-logical propositions cannot be recognized from the propositions alone. 6.12 The fact that the propositions of logic are tautologies shows the formal? logical? properties of language and the world. The fact that a tautology is yielded by this particular way of connecting its constituents characterizes the logic of its constituents. If propositions are to yield a tautology when they are connected in a certain way, they must have certain structural properties. So their yielding a tautology when combined in this shows that they possess these structural properties. 6.120 (empty) 6.1201 For example, the fact that the propositions ‘p’ and ‘Pp’ in the combination ‘(p. Pp)’ yield a tautology shows that they contradict one another. The fact that the propositions ‘p z q’, ‘p’, and ‘q’, combined with one another in the form ‘(p z q). (p):z: (q)’, yield a tautology shows that q follows from p and p z q. The fact that ‘(x). fxx:z: fa’ is a tautology shows that fa follows from (x). fx. Etc. etc. 6.1202 It is clear that one could achieve the same purpose by using contradictions instead of tautologies. 6.1203 In order to recognize an expression as a tautology, in cases where no generality-sign occurs in it, one can employ the following intuitive method: instead of ‘p’, ‘q’, ‘r’, etc. I write ‘TpF’, ‘TqF’, ‘TrF’, etc. Truth-combinations I express by means of brackets, e.g. and I use lines to express the correlation of the truth or falsity of the whole proposition with the truth-combinations of its truth-arguments, in the following way So this sign, for instance, would represent the proposition p z q. Now, by way of example, I wish to examine the proposition P(p.Pp) (the law of contradiction) in order to determine whether it is a tautology. 
In our notation the form ‘PE’ is written as and the form ‘E. n’ as Hence the proposition P(p. Pp). reads as follows If we here substitute ‘p’ for ‘q’ and examine how the outermost T and F are connected with the innermost ones, the result will be that the truth of the whole proposition is correlated with all the truth-combinations of its argument, and its falsity with none of the truth-combinations. 6.121 The propositions of logic demonstrate the logical properties of propositions by combining them so as to form propositions that say nothing. This method could also be called a zero-method. In a logical proposition, propositions are brought into equilibrium with one another, and the state of equilibrium then indicates what the logical constitution of these propositions must be. 6.122 It follows from this that we can actually do without logical propositions; for in a suitable notation we can in fact recognize the formal properties of propositions by mere inspection of the propositions themselves. 6.1221 If, for example, two propositions ‘p’ and ‘q’ in the combination ‘p z q’ yield a tautology, then it is clear that q follows from p. For example, we see from the two propositions themselves that ‘q’ follows from ‘p z q. p’, but it is also possible to show it in this way: we combine them to form ‘p z q. p:z: q’, and then show that this is a tautology. 6.1222 This throws some light on the question why logical propositions cannot be confirmed by experience any more than they can be refuted by it. Not only must a proposition of logic be irrefutable by any possible experience, but it must also be unconfirmable by any possible experience. 6.1223 Now it becomes clear why people have often felt as if it were for us to ‘postulate’ the ‘truths of logic’. The reason is that we can postulate them in so far as we can postulate an adequate notation. 6.1224 It also becomes clear now why logic was called the theory of forms and of inference. 6.123 Clearly the laws of logic cannot in their turn be subject to laws of logic. (There is not, as Russell thought, a special law of contradiction for each ‘type’; one law is enough, since it is not applied to itself.) 6.1231 The mark of a logical proposition is not general validity. To be general means no more than to be accidentally valid for all things. An ungeneralized proposition can be tautological just as well as a generalized one. 6.1232 The general validity of logic might be called essential, in contrast with the accidental general validity of such propositions as ‘All men are mortal’. Propositions like Russell’s ‘axiom of reducibility’ are not logical propositions, and this explains our feeling that, even if they were true, their truth could only be the result of a fortunate accident. 6.1233 It is possible to imagine a world in which the axiom of reducibility is not valid. It is clear, however, that logic has nothing to do with the question whether our world really is like that or not. 6.124 The propositions of logic describe the scaffolding of the world, or rather they represent it. They have no ‘subject-matter’. They presuppose that names have meaning and elementary propositions sense; and that is their connexion with the world. It is clear that something about the world must be indicated by the fact that certain combinations of symbols? whose essence involves the possession of a determinate character? are tautologies. This contains the decisive point. We have said that some things are arbitrary in the symbols that we use and that some things are not. 
In logic it is only the latter that express: but that means that logic is not a field in which we express what we wish with the help of signs, but rather one in which the nature of the absolutely necessary signs speaks for itself. If we know the logical syntax of any sign-language, then we have already been given all the propositions of logic. 6.125 It is possible? indeed possible even according to the old conception of logic? to give in advance a description of all ‘true’ logical propositions. 6.1251 Hence there can never be surprises in logic. 6.126 One can calculate whether a proposition belongs to logic, by calculating the logical properties of the symbol. And this is what we do when we ‘prove’ a logical proposition. For, without bothering about sense or meaning, we construct the logical proposition out of others using only rules that deal with signs. The proof of logical propositions consists in the following process: we produce them out of other logical propositions by successively applying certain operations that always generate further tautologies out of the initial ones. (And in fact only tautologies follow from a tautology.) Of course this way of showing that the propositions of logic are tautologies is not at all essential to logic, if only because the propositions from which the proof starts must show without any proof that they are tautologies. 6.1261 In logic process and result are equivalent. (Hence the absence of surprise.) 6.1262 Proof in logic is merely a mechanical expedient to facilitate the recognition of tautologies in complicated cases. 6.1263 Indeed, it would be altogether too remarkable if a proposition that had sense could be proved logically from others, and so too could a logical proposition. It is clear from the start that a logical proof of a proposition that has sense and a proof in logic must be two entirely different things. 6.1264 A proposition that has sense states something, which is shown by its proof to be so. In logic every proposition is the form of a proof. Every proposition of logic is a modus ponens represented in signs. (And one cannot express the modus ponens by means of a proposition.) 6.1265 It is always possible to construe logic in such a way that every proposition is its own proof. 6.127 All the propositions of logic are of equal status: it is not the case that some of them are essentially derived propositions. Every tautology itself shows that it is a tautology. 6.1271 It is clear that the number of the ‘primitive propositions of logic’ is arbitrary, since one could derive logic from a single primitive proposition, e.g. by simply constructing the logical product of Frege’s primitive propositions. (Frege would perhaps say that we should then no longer have an immediately self-evident primitive proposition. But it is remarkable that a thinker as rigorous as Frege appealed to the degree of self-evidence as the criterion of a logical proposition.) 6.13 Logic is not a body of doctrine, but a mirror-image of the world. Logic is transcendental. 6.2 Mathematics is a logical method. The propositions of mathematics are equations, and therefore pseudo-propositions. 6.21 A proposition of mathematics does not express a thought. 6.211 Indeed in real life a mathematical proposition is never what we want. Rather, we make use of mathematical propositions only in inferences from propositions that do not belong to mathematics to others that likewise do not belong to mathematics. 
(In philosophy the question, ‘What do we actually use this word or this proposition for? ‘ repeatedly leads to valuable insights.) 6.22 The logic of the world, which is shown in tautologies by the propositions of logic, is shown in equations by mathematics. 6.23 If two expressions are combined by means of the sign of equality, that means that they can be substituted for one another. But it must be manifest in the two expressions themselves whether this is the case or not. When two expressions can be substituted for one another, that characterizes their logical form. 6.231 It is a property of affirmation that it can be construed as double negation. It is a property of ‘1 + 1 + 1 + 1’ that it can be construed as ‘(1 + 1) + (1 + 1)’. 6.232 Frege says that the two expressions have the same meaning but different senses. But the essential point about an equation is that it is not necessary in order to show that the two expressions connected by the sign of equality have the same meaning, since this can be seen from the two expressions themselves. 6.2321 And the possibility of proving the propositions of mathematics means simply that their correctness can be perceived without its being necessary that what they express should itself be compared with the facts in order to determine its correctness. 6.2322 It is impossible to assert the identity of meaning of two expressions. For in order to be able to assert anything about their meaning, I must know their meaning, and I cannot know their meaning without knowing whether what they mean is the same or different. 6.2323 An equation merely marks the point of view from which I consider the two expressions: it marks their equivalence in meaning. 6.233 The question whether intuition is needed for the solution of mathematical problems must be given the answer that in this case language itself provides the necessary intuition. 6.2331 The process of calculating serves to bring about that intuition. Calculation is not an experiment. 6.234 Mathematics is a method of logic. 6.2341 It is the essential characteristic of mathematical method that it employs equations. For it is because of this method that every proposition of mathematics must go without saying. 6.24 The method by which mathematics arrives at its equations is the method of substitution. For equations express the substitutability of two expressions and, starting from a number of equations, we advance to new equations by substituting different expressions in accordance with the equations. 6.241 Thus the proof of the proposition 2 t 2 = 4 runs as follows: (/v)n’x = /v x u’x Def., /2 x 2’x = (/2)2’x = (/2)1 + 1’x = /2′ /2’x = /1 + 1’/1 + 1’x = (/’/)'(/’/)’x =/’/’/’/’x = /1 + 1 + 1 + 1’x = /4’x. 6.3 The exploration of logic means the exploration of everything that is subject to law. And outside logic everything is accidental. 6.31 The so-called law of induction cannot possibly be a law of logic, since it is obviously a proposition with sense.-? Nor, therefore, can it be an a priori law. 6.32 The law of causality is not a law but the form of a law. 6.321 ‘Law of causality’? that is a general name. And just as in mechanics, for example, there are ‘minimum-principles’, such as the law of least action, so too in physics there are causal laws, laws of the causal form. 6.3211 Indeed people even surmised that there must be a ‘law of least action’ before they knew exactly how it went. (Here, as always, what is certain a priori proves to be something purely logical.) 
6.33 We do not have an a priori belief in a law of conservation, but rather a priori knowledge of the possibility of a logical form. 6.34 All such propositions, including the principle of sufficient reason, tile laws of continuity in nature and of least effort in nature, etc. etc.? all these are a priori insights about the forms in which the propositions of science can be cast. 6.341 Newtonian mechanics, for example, imposes a unified form on the description of the world. Let us imagine a white surface with irregular black spots on it. We then say that whatever kind of picture these make, I can always approximate as closely as I wish to the description of it by covering the surface with a sufficiently fine square mesh, and then saying of every square whether it is black or white. In this way I shall have imposed a unified form on the description of the surface. The form is optional, since I could have achieved the same result by using a net with a triangular or hexagonal mesh. Possibly the use of a triangular mesh would have made the description simpler: that is to say, it might be that we could describe the surface more accurately with a coarse triangular mesh than with a fine square mesh (or conversely), and so on. The different nets correspond to different systems for describing the world. Mechanics determines one form of description of the world by saying that all propositions used in the description of the world must be obtained in a given way from a given set of propositions? the axioms of mechanics. It thus supplies the bricks for building the edifice of science, and it says, ‘Any building that you want to erect, whatever it may be, must somehow be constructed with these bricks, and with these alone.’ (Just as with the number-system we must be able to write down any number we wish, so with the system of mechanics we must be able to write down any proposition of physics that we wish.) 6.342 And now we can see the relative position of logic and mechanics. (The net might also consist of more than one kind of mesh: e.g. we could use both triangles and hexagons.) The possibility of describing a picture like the one mentioned above with a net of a given form tells us nothing about the picture. (For that is true of all such pictures.) But what does characterize the picture is that it can be described completely by a particular net with a particular size of mesh. Similarly the possibility of describing the world by means of Newtonian mechanics tells us nothing about the world: but what does tell us something about it is the precise way in which it is possible to describe it by these means. We are also told something about the world by the fact that it can be described more simply with one system of mechanics than with another. 6.343 Mechanics is an attempt to construct according to a single plan all the true propositions that we need for the description of the world. 6.3431 The laws of physics, with all their logical apparatus, still speak, however indirectly, about the objects of the world. 6.3432 We ought not to forget that any description of the world by means of mechanics will be of the completely general kind. For example, it will never mention particular point-masses: it will only talk about any point-masses whatsoever. 6.35 Although the spots in our picture are geometrical figures, nevertheless geometry can obviously say nothing at all about their actual form and position. The network, however, is purely geometrical; all its properties can be given a priori. 
Laws like the principle of sufficient reason, etc. are about the net and not about what the net describes. 6.36 If there were a law of causality, it might be put in the following way: There are laws of nature. But of course that cannot be said: it makes itself manifest. 6.361 One might say, using Hertt:’s terminology, that only connexions that are subject to law are thinkable. 6.3611 We cannot compare a process with ‘the passage of time’? there is no such thing? but only with another process (such as the working of a chronometer). Hence we can describe the lapse of time only by relying on some other process. Something exactly analogous applies to space: e.g. when people say that neither of two events (which exclude one another) can occur, because there is nothing to cause the one to occur rather than the other, it is really a matter of our being unable to describe one of the two events unless there is some sort of asymmetry to be found. And if such an asymmetry is to be found, we can regard it as the cause of the occurrence of the one and the non-occurrence of the other. 6.36111 Kant’s problem about the right hand and the left hand, which cannot be made to coincide, exists even in two dimensions. Indeed, it exists in one-dimensional space in which the two congruent figures, a and b, cannot be made to coincide unless they are moved out of this space. The right hand and the left hand are in fact completely congruent. It is quite irrelevant that they cannot be made to coincide. A right-hand glove could be put on the left hand, if it could be turned round in four-dimensional space. 6.362 What can be described can happen too: and what the law of causality is meant to exclude cannot even be described. 6.363 The procedure of induction consists in accepting as true the simplest law that can be reconciled with our experiences. 6.3631 This procedure, however, has no logical justification but only a psychological one. It is clear that there are no grounds for believing that the simplest eventuality will in fact be realized. 6.36311 It is an hypothesis that the sun will rise tomorrow: and this means that we do not know whether it will rise. 6.37 There is no compulsion making one thing happen because another has happened. The only necessity that exists is logical necessity. 6.371 The whole modern conception of the world is founded on the illusion that the so-called laws of nature are the explanations of natural phenomena. 6.372 Thus people today stop at the laws of nature, treating them as something inviolable, just as God and Fate were treated in past ages. And in fact both are right and both wrong: though the view of the ancients is clearer in so far as they have a clear and acknowledged terminus, while the modern system tries to make it look as if everything were explained. 6.373 The world is independent of my will. 6.374 Even if all that we wish for were to happen, still this would only be a favour granted by fate, so to speak: for there is no logical connexion between the will and the world, which would guarantee it, and the supposed physical connexion itself is surely not something that we could will. 6.375 Just as the only necessity that exists is logical necessity, so too the only impossibility that exists is logical impossibility. 6.3751 For example, the simultaneous presence of two colours at the same place in the visual field is impossible, in fact logically impossible, since it is ruled out by the logical structure of colour. 
Let us think how this contradiction appears in physics: more or less as follows? a particle cannot have two velocities at the same time; that is to say, it cannot be in two places at the same time; that is to say, particles that are in different places at the same time cannot be identical. (It is clear that the logical product of two elementary propositions can neither be a tautology nor a contradiction. The statement that a point in the visual field has two different colours at the same time is a contradiction.) 6.4 All propositions are of equal value. 6.41 The sense of the world must lie outside the world. In the world everything is as it is, and everything happens as it does happen: in it no value exists? and if it did exist, it would have no value. If there is any value that does have value, it must lie outside the whole sphere of what happens and is the case. For all that happens and is the case is accidental. What makes it non-accidental cannot lie within the world, since if it did it would itself be accidental. It must lie outside the world. 6.42 So too it is impossible for there to be propositions of ethics. Propositions can express nothing that is higher. 6.421 It is clear that ethics cannot be put into words. Ethics is transcendental. (Ethics and aesthetics are one and the same.) 6.422 When an ethical law of the form, ‘Thou shalt…’ is laid down, one’s first thought is, ‘And what if I do, not do it? ‘ It is clear, however, that ethics has nothing to do with punishment and reward in the usual sense of the terms. So our question about the consequences of an action must be unimportant.? At least those consequences should not be events. For there must be something right about the question we posed. There must indeed be some kind of ethical reward and ethical punishment, but they must reside in the action itself. (And it is also clear that the reward must be something pleasant and the punishment something unpleasant.) 6.423 It is impossible to speak about the will in so far as it is the subject of ethical attributes. And the will as a phenomenon is of interest only to psychology. 6.43 If the good or bad exercise of the will does alter the world, it can alter only the limits of the world, not the facts? not what can be expressed by means of language. In short the effect must be that it becomes an altogether different world. It must, so to speak, wax and wane as a whole. The world of the happy man is a different one from that of the unhappy man. 6.431 So too at death the world does not alter, but comes to an end. 6.4311 Death is not an event in life: we do not live to experience death. If we take eternity to mean not infinite temporal duration but timelessness, then eternal life belongs to those who live in the present. Our life has no end in just the way in which our visual field has no limits. 6.4312 Not only is there no guarantee of the temporal immortality of the human soul, that is to say of its eternal survival after death; but, in any case, this assumption completely fails to accomplish the purpose for which it has always been intended. Or is some riddle solved by my surviving for ever?  Is not this eternal life itself as much of a riddle as our present life?  The solution of the riddle of life in space and time lies outside space and time. (It is certainly not the solution of any problems of natural science that is required.) 6.432 How things are in the world is a matter of complete indifference for what is higher. God does not reveal himself in the world. 
6.4321 The facts all contribute only to setting the problem, not to its solution. 6.44 It is not how things are in the world that is mystical, but that it exists. 6.45 To view the world sub specie aeterni is to view it as a whole? a limited whole. Feeling the world as a limited whole? it is this that is mystical. 6.5 When the answer cannot be put into words, neither can the question be put into words. The riddle does not exist. If a question can be framed at all, it is also possible to answer it. 6.51 Scepticism is not irrefutable, but obviously nonsensical, when it tries to raise doubts where no questions can be asked. For doubt can exist only where a question exists, a question only where an answer exists, and an answer only where something can be said. 6.52 We feel that even when all possible scientific questions have been answered, the problems of life remain completely untouched. Of course there are then no questions left, and this itself is the answer. 6.521 The solution of the problem of life is seen in the vanishing of the problem. (Is not this the reason why those who have found after a long period of doubt that the sense of life became clear to them have then been unable to say what constituted that sense? ) 6.522 There are, indeed, things that cannot be put into words. They make themselves manifest. They are what is mystical. 6.53 The correct method in philosophy would really be the following: to say nothing except what can be said, i.e. propositions of natural science? i.e. something that has nothing to do with philosophy? and then, whenever someone else wanted to say something metaphysical, to demonstrate to him that he had failed to give a meaning to certain signs in his propositions. Although it would not be satisfying to the other person? he would not have the feeling that we were teaching him philosophy? this method would be the only strictly correct one. 6.54 My propositions are elucidatory in this way: he who understands me finally recognizes them as senseless, when he has climbed out through them, on them, over them. (He must so to speak throw away the ladder, after he has climbed up on it.) He must transcend these propositions, and then he will see the world aright. 7. What we cannot speak about we must pass over in silence. Ludwig Wittgenstein, 1921. (Translated by David Pears and Brian McGuinness, 1961)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.81511390209198, "perplexity": 720.3548021941084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889173.38/warc/CC-MAIN-20201025125131-20201025155131-00471.warc.gz"}
https://link.springer.com/chapter/10.1007%2F978-3-030-43120-4_21
# Edge-Critical Equimatchable Bipartite Graphs

• Yasemin Büyükçolak • Didem Gözüpek • Sibel Özkan

Conference paper. Part of the Lecture Notes in Computer Science book series (LNCS, volume 11989).

## Abstract

A graph is called equimatchable if all of its maximal matchings have the same size. Lesk et al. [6] provided a characterization of equimatchable bipartite graphs. Since this characterization is not structural, Frendrup et al. [4] also provided a structural characterization for equimatchable graphs with girth at least five; in particular, a characterization for equimatchable bipartite graphs with girth at least six. In this work, we extend the partial characterization of Frendrup et al. [4] to equimatchable bipartite graphs without any restriction on girth. For an equimatchable graph, an edge is said to be critical if the graph obtained by removing this edge is not equimatchable. An equimatchable graph is called edge-critical if every edge is critical. Reducing the characterization of equimatchable bipartite graphs to the characterization of edge-critical equimatchable bipartite graphs, we give two characterizations of edge-critical equimatchable bipartite graphs.

## Keywords

Equimatchable • Bipartite graphs • Edge-critical

## References

1. Akbari, S., Ghodrati, A.H., Hosseinzadeh, M.A., Iranmanesh, A.: Equimatchable regular graphs. J. Graph Theory 87, 35–45 (2018)
2. Deniz, Z., Ekim, T.: Critical equimatchable graphs. Preprint
3. Deniz, Z., Ekim, T.: Edge-stable equimatchable graphs. Discrete Appl. Math. 261, 136–147 (2019)
4. Frendrup, A., Hartnell, B., Preben, D.: A note on equimatchable graphs. Australas. J. Comb. 46, 185–190 (2010)
5. Grünbaum, B.: Matchings in polytopal graphs. Networks 4, 175–190 (1974)
6. Lesk, M., Plummer, M.D., Pulleyblank, W.R.: Equi-matchable graphs. In: Graph Theory and Combinatorics (Cambridge, 1983), pp. 239–254. Academic Press, London (1984)
7. Lewin, M.: M-perfect and cover-perfect graphs. Israel J. Math. 18, 345–347 (1974)
8. Lovász, L., Plummer, M.D.: Matching Theory. Annals of Discrete Mathematics, vol. 29. North-Holland, Amsterdam (1986)
9. Meng, D.H.-C.: Matchings and coverings for graphs. Ph.D. thesis. Michigan State University, East Lansing, MI (1974)
10. Sumner, D.P.: Randomly matchable graphs. J. Graph Theory 3, 183–186 (1979)
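As a concrete illustration of the definition in the abstract (this is not an algorithm from the paper), equimatchability of a very small graph can be tested by brute force: enumerate every matching, keep the maximal ones, and check whether they all have the same size. A minimal Python sketch, assuming networkx is available; the enumeration is exponential in the number of edges, so it is usable only on tiny examples:

```python
# Naive brute-force equimatchability check for small graphs (illustration only).
import itertools
import networkx as nx

def is_matching(edge_subset):
    # A set of edges is a matching if no vertex is repeated.
    vertices = list(itertools.chain.from_iterable(edge_subset))
    return len(vertices) == len(set(vertices))

def is_maximal(G, edge_subset):
    # Maximal: no edge of G has both endpoints unmatched.
    matched = set(itertools.chain.from_iterable(edge_subset))
    return all(u in matched or v in matched for u, v in G.edges())

def maximal_matching_sizes(G):
    edges = list(G.edges())
    sizes = set()
    for r in range(len(edges) + 1):
        for subset in itertools.combinations(edges, r):
            if is_matching(subset) and is_maximal(G, subset):
                sizes.add(r)
    return sizes

def is_equimatchable(G):
    return len(maximal_matching_sizes(G)) == 1

# K_{2,3}: every maximal matching saturates the smaller side, so it is equimatchable.
print(is_equimatchable(nx.complete_bipartite_graph(2, 3)))  # True
# The path on 4 vertices has maximal matchings of sizes 1 and 2, so it is not.
print(is_equimatchable(nx.path_graph(4)))                   # False
```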
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8529283404350281, "perplexity": 6700.514259425474}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371609067.62/warc/CC-MAIN-20200405181743-20200405212243-00408.warc.gz"}
https://forum.math.toronto.edu/index.php?PHPSESSID=0f6jdnc8eanf2pk5uf3ta0o7l0&topic=1271.msg4576
### Author Topic: Q1: TUT 0201, TUT 5101 and TUT 5102  (Read 4331 times)

#### Victor Ivrii • Elder Member • Posts: 2599 • Karma: 0
##### Q1: TUT 0201, TUT 5101 and TUT 5102 « on: September 28, 2018, 03:28:20 PM »
Find the general solution of the given differential equation, and use it to determine how solutions behave as $t\to \infty$:
\begin{equation*} ty' + 2y = \sin (t) \end{equation*}
« Last Edit: September 28, 2018, 03:32:44 PM by Victor Ivrii »

#### Qin Wang • Newbie • Posts: 2 • Karma: 1
##### Re: Q1: TUT 0201, TUT 5101 and TUT 5102 « Reply #1 on: September 28, 2018, 04:33:53 PM »
Solution in the following file.

#### Victor Ivrii • Elder Member • Posts: 2599 • Karma: 0
##### Re: Q1: TUT 0201, TUT 5101 and TUT 5102 « Reply #2 on: September 29, 2018, 03:00:26 PM »
Waiting for a typed solution.

#### Tzu-Ching Yen • Full Member • Posts: 31 • Karma: 22
##### Re: Q1: TUT 0201, TUT 5101 and TUT 5102 « Reply #3 on: September 29, 2018, 09:23:56 PM »
Rewrite the equation as $y' + \frac{2}{t}y = \frac{\sin(t)}{t}$. Find the integrating factor $u(t) = e^{\int \frac{2}{t}\,dt} = t^2$, where the constant of integration is chosen to be zero. Now $y = \frac{1}{u(t)}\int u(t)\frac{\sin(t)}{t}\,dt$. Using integration by parts, $y = -\frac{\cos(t)}{t} + \frac{\sin(t)}{t^2} + \frac{c_1}{t^2}$. Since $t$ appears in the denominators and the numerators are bounded, $y \to 0$ as $t \to \infty$.

#### Wei Cui • Full Member • Posts: 16 • Karma: 11
##### Re: Q1: TUT 0201, TUT 5101 and TUT 5102 « Reply #4 on: September 29, 2018, 09:51:28 PM »
Question: $ty'+2y=\sin(t)$, $t>0$.
Standard form: $y'+\frac{2}{t}y=\frac{\sin(t)}{t}$, with $p(t) = \frac{2}{t}$ and $g(t) = \frac{\sin(t)}{t}$.
The integrating factor is $u = e^{\int p(t)\,dt} = e^{2\int \frac{1}{t}\,dt} = t^2$. Multiplying both sides by $u$, we get:
$t^2y' + 2ty = t\sin(t)$
$(t^2y)'=t\sin(t)$
$d(t^2y) = t\sin(t)\,dt$
$t^2y = \int t\sin(t)\, dt$
(Integrating by parts with $u=t \implies du =dt$ and $dv=\sin(t)\,dt$, $v=-\cos(t)$:
$\int t\sin(t)\,dt = uv - \int v\,du = -t\cos(t)+\int \cos(t)\,dt = -t\cos(t) +\sin(t) + C$.)
Therefore, $t^2y=-t\cos(t)+\sin(t) + C$, i.e. $y=\frac{-t\cos(t)+\sin(t)+C}{t^2}$. Since $t>0$, we have $y \rightarrow 0$ as $t \rightarrow \infty$.
« Last Edit: September 29, 2018, 09:56:03 PM by Wei Cui »

#### Boyu Zheng • Jr. Member • Posts: 12 • Karma: 8
##### Re: Q1: TUT 0201, TUT 5101 and TUT 5102 « Reply #5 on: September 30, 2018, 12:10:17 AM »
Solution for TUT5101. Question: find the solution of the given initial value problem $y'-2y = e^{2t}$, $y(0)=2$.
Let $p(t)=-2$ and set $u=e^{\int p(t)\,dt}=e^{-2t}$. Multiplying each side of the standard-form equation by $u$ gives $e^{-2t}y'-2e^{-2t}y = 1$, and the left-hand side is $(e^{-2t}y)' = 1$. Integrating each side, $e^{-2t}y = t +C$, so $y(t)=(t+C)e^{2t}$. Plugging in $y(0)=2$ gives $C=2$, so the solution can be written as $y=(t+2)e^{2t}$.
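As a quick cross-check of the computations in this thread (not part of the original posts), the same equation can be solved symbolically with SymPy, assuming it is installed; the printed name and placement of the arbitrary constant may differ, but the result should agree with $y=\frac{-t\cos(t)+\sin(t)+C}{t^2}$ up to relabelling:

```python
# Symbolic cross-check of the thread's answer (illustration, not course material).
import sympy as sp

t = sp.symbols('t', positive=True)
y = sp.Function('y')

ode = sp.Eq(t*y(t).diff(t) + 2*y(t), sp.sin(t))
sol = sp.dsolve(ode, y(t))
print(sol)
# Expected, up to the constant's name: y(t) = (C1 - t*cos(t) + sin(t))/t**2

# checkodesol returns (True, 0) when the candidate really satisfies the ODE.
print(sp.checkodesol(ode, sol))

# As t -> oo the numerator grows at most linearly while the denominator is t**2,
# so every solution tends to 0, matching the conclusion reached in the thread.
```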
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.988105058670044, "perplexity": 25935.37743236194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662627464.60/warc/CC-MAIN-20220526224902-20220527014902-00278.warc.gz"}
https://popl22.sigplan.org/details/POPL-2022-popl-research-papers/44/Fully-Abstract-Models-for-Effectful-Calculi-via-Category-Theoretic-Logical-Relation
Fri 21 Jan 2022 11:10 - 11:35 at Salon III - Semantics 1 Chair(s): Alan Jeffrey

We present a construction which, under suitable assumptions, takes a model of Moggi’s computational $\lambda$-calculus with sum types, effect operations and primitives, and yields a model that is adequate and fully abstract. The construction, which uses the theory of fibrations, categorical glueing, $\top\top$-lifting, and $\top\top$-closure, takes inspiration from O’Hearn & Riecke’s fully abstract model for PCF. Our construction can be applied in the category of sets and functions, as well as the category of diffeological spaces and smooth maps and the category of quasi-Borel spaces, which have been studied as semantics for differentiable and probabilistic programming.

Fri 21 Jan (displayed time zone: Eastern Time (US & Canada))

10:20 - 12:00, Semantics 1 (POPL) at Salon III, Chair(s): Alan Jeffrey (Roblox)

10:20 (25m, research paper, remote): From Enhanced Coinduction towards Enhanced Induction. Davide Sangiorgi (University of Bologna; Inria)

10:45 (25m, research paper, in person): A Fine-Grained Computational Interpretation of Girard’s Intuitionistic Proof-Nets. Delia Kesner (Université de Paris; CNRS; IRIF; Institut Universitaire de France)

11:10 (25m, research paper, remote): Fully Abstract Models for Effectful λ-Calculi via Category-Theoretic Logical Relations. Ohad Kammar (University of Edinburgh), Shin-ya Katsumata (National Institute of Informatics), Philip Saville (University of Oxford)

11:35 (25m, research paper, in person): Layered and Object-Based Game Semantics. Arthur Oliveira Vale (Yale University), Paul-André Melliès (CNRS; Université de Paris), Zhong Shao (Yale University), Jérémie Koenig (Yale University), Leo Stefanesco (IRIF, University Paris Diderot & CNRS)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42949873208999634, "perplexity": 13261.689530898504}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520936.24/warc/CC-MAIN-20220517225809-20220518015809-00349.warc.gz"}
https://economics.stackexchange.com/questions/20728/exports-how-can-a-small-amount-of-export-make-really-high-income/20751
# Exports: How can a small amount of export make really high income

I'm currently researching for a school project about Thailand. I found the wonderful Observatory of Economic Complexity website. There are two visualizations I don't quite understand:

https://atlas.media.mit.edu/en/visualize/tree_map/hs92/export/tha/all/show/2015/

https://atlas.media.mit.edu/en/visualize/stacked/hs92/export/tha/all/show/1995.2016/

How can it be that Animal products (light yellow) are only 1.3% of total exports but make the most money? Maybe I didn't understand the stacked visualization right, so maybe that's my mistake.

EDIT: OK, I just found out that I misread the chart. I was a little bit confused, obviously. These stacked line charts are completely illogical to me.

• How much does this category of exports make? – london Feb 23 '18 at 14:20
• @london 230 Billion USD (if I understood the visualizations right) – Féileacán Feb 23 '18 at 14:22
• is this the sum of all animal products? – london Feb 23 '18 at 14:25

What you're asking stems from a more general question: "How can such a small amount of some export $Q$ make the most money out of all the exports?" The answer is simply to recall the standard formula for total revenue, i.e.

$$TR=P \times Q$$

Even though the quantity of export $Q$ is small, its price (for whatever reason) may be high enough that it earns more revenue than the country's other exports.
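To make the $TR = P \times Q$ point concrete, here is a tiny numerical sketch. The figures are invented purely for illustration and are not Thailand's actual trade data: a category with a small export quantity can still top the revenue ranking if its unit price is high.

```python
# Hypothetical numbers, purely for illustration -- not real trade statistics.
exports = {
    # category: (quantity in million units, unit price in USD)
    "animal products": (2, 900),   # small quantity, high unit price
    "machines":        (50, 30),   # large quantity, low unit price
    "textiles":        (40, 20),
}

revenue = {k: q * p for k, (q, p) in exports.items()}  # TR = P * Q, in million USD
total = sum(revenue.values())

for k, tr in sorted(revenue.items(), key=lambda kv: -kv[1]):
    print(f"{k:15s} TR = {tr:5d} M USD  ({100 * tr / total:.1f}% of total revenue)")
# The category with the smallest quantity can still be the largest revenue item:
# share of quantity and share of revenue are different things.
```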
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2095174342393875, "perplexity": 1723.1159343455424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487643703.56/warc/CC-MAIN-20210619051239-20210619081239-00284.warc.gz"}
https://matheuscmss.wordpress.com/2013/07/31/furstenbergs-theorem-on-the-poisson-boundaries-of-lattices-of-slnr-part-ii/
Posted by: matheuscmss | July 31, 2013 Furstenberg’s theorem on the Poisson boundaries of lattices of SL(n,R) (part II) Last time we introduced Poisson boundaries hoping to use them to distinguish between lattices of ${SL(2,\mathbb{R})}$ and ${SL(n,\mathbb{R})}$, ${n\geq 3}$. More precisely, we followed Furstenberg to define and construct Poisson boundaries as a tool that would allow to prove the following statement: Theorem 1 (Furstenberg (1967)) A lattice of ${SL(2,\mathbb{R})}$ can’t be realized as a lattice in ${SL(n,\mathbb{R})}$ for ${n\geq 3}$ (or, in the language introduced in the previous post, ${SL(n,\mathbb{R})}$, ${n\geq 3}$, can’t envelope a discrete group enveloped by ${SL(2,\mathbb{R})}$). Here, we recall that, very roughly speaking, these Poisson boundaries ${(B,\nu)}$ were certain “maximal” topological objects attached to locally compact groups with probability measures ${(G,\mu)}$ in such a way that the points in the boundary ${B}$ were almost sure limits of ${\mu}$-random walks on ${G}$ and the probability measure ${\nu}$ was a ${\mu}$-stationary measure giving the distribution at which ${\mu}$-random walks hit the boundary. For this second (final) post, we will discuss (below the fold) some examples of Poisson boundaries and, after that, we will sketch the proof of Theorem 1. Remark 1 The basic references for this post are the same ones from the previous post, namely, Furstenberg’s survey, his original articles and A. Furman’s survey. 1. Some examples of Poisson boundaries 1.1. Abelian groups The Poisson boundary of a locally compact Abelian group ${G}$ with respect to any probability measure ${\mu}$ is trivial. Indeed, in terms of the characterization of Poisson boundaries via ${\mu}$-harmonic functions, this amounts to say that bounded ${\mu}$-harmonic functions on locally compact Abelian groups are constant, a fact proved by Choquet-Deny here. More generally, there is a natural notion of random walk entropy ${h(G,\mu)}$ (cf. Furman’s survey) allowing to characterize the triviality of Poisson boundary: more precisely, Kaimanovich-Vershik showed here that a discrete countable group ${G}$ equipped with a probability measure ${\mu}$ has trivial Poisson boundary if and only if ${h(G,\mu)=0}$. Remark 2 In addition to these results, it is worth to mention that: • Furstenberg showed that any probability measure ${\mu}$ on a locally compact non-amenable group ${G}$ whose support ${\textrm{supp}(\mu)}$ generates ${G}$ admits bounded non-constant ${\mu}$-harmonic functions; • Kaimanovich-Vershik and Rosenblatt showed that given a locally compact amenable group ${G}$, there exists a (symmetric) probability measure ${\mu}$ with full support ${\textrm{supp}(\mu)=G}$ such that all bounded ${\mu}$-harmonic functions on ${G}$ are constant. See Furman’s survey for more comments on these results (as well as related ones). 1.2. Free group ${F_2}$ on 2 generators Let ${F_2}$ be the free group on ${2}$ generators ${a}$ and ${b}$ equipped with the Bernoulli probability measure ${\mu(\{\ast\})=1/4}$ for ${\ast\in\{a, a^{-1}, b, b^{-1}\}}$. Last time, during the naive description of the Poisson boundary, we saw that the space $\displaystyle \Omega_2=\{w_1\dots w_n\dots: w_{i}w_{i+1}\neq 1 \textrm{ and } w_i\in\{a, a^{-1}, b, b^{-1}\}\}$ of infinite words ${w_1w_2\dots}$ on ${a, a^{-1}, b, b^{-1}}$ satisfy the non-cancellation condition ${w_iw_{i+1}\neq 1}$ for all ${i\in\mathbb{N}}$ was a natural candidate for a boundary of ${(F_2,\mu)}$ (as far as ${\mu}$-random walks were concerned). 
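Before the formal computation in the next subsection, one can get a feeling for why ${\Omega_2}$ is the natural candidate by simulating the ${\mu}$-random walk: the reduced word ${x_1\dots x_n}$ grows roughly like ${n/2}$ (this is the law of large numbers proved in subsection 1.2.2 below), while its initial letters typically stop changing once the walk has wandered far enough, so the trajectory converges to an infinite reduced word. The short Python sketch below is an illustration added here, not part of Furstenberg's argument; the letters 'a', 'A', 'b', 'B' are just a convenient encoding of ${a, a^{-1}, b, b^{-1}}$.

```python
# Simulation of the mu-random walk on the free group F_2 (illustration only).
import random

INVERSE = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def step(word, letter):
    # Right multiplication by a generator, with free reduction (cancel x x^{-1}).
    if word and word[-1] == INVERSE[letter]:
        return word[:-1]
    return word + letter

random.seed(0)
word = ""
snapshots = {}
for n in range(1, 100001):
    word = step(word, random.choice("aAbB"))
    if n in (10**3, 10**4, 10**5):
        snapshots[n] = (len(word), word[:10])

for n, (length, prefix) in snapshots.items():
    print(f"n={n:6d}  reduced length={length:6d}  (n/2={n//2})  first letters: {prefix}")
# The reduced length stays close to n/2, and the first letters typically agree
# across the snapshots: the stabilized infinite reduced word is the boundary
# point z_1 in Omega_2 that Lemma 3 below constructs.
```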
1.2.1. Computation of the Poisson boundary of ${(F_2,\mu)}$ As the reader can imagine, ${\Omega_2}$ equipped with an adequate measure ${\nu}$ is the Poisson boundary of ${(F_2,\mu)}$. The proof of this fact goes along the following lines. Firstly, let us show that ${\Omega_2}$ is a ${F_2}$-space, i.e., Lemma 2 ${F_2}$ acts continuously on ${\Omega_2}$ (for the topology of pointwise convergence). Proof: The action of ${F_2}$ on ${\Omega_2}$ is defined by continuity: given ${g\in F_2}$ and ${g_n\in F_2}$, ${g_n\rightarrow\omega\in\Omega_2}$, let ${g\omega:=\lim\limits_{n\rightarrow\infty} gg_n}$. In other words, if ${g=w_1\dots w_l}$, then ${g\omega}$ is the infinite word obtained by concatenation of ${g}$ and ${\omega}$, and then collapsing if necessary: e.g., if ${\omega=abab\dots}$, then • ${a\omega=aabab\dots}$, ${b\omega=babab\dots}$ and ${b^{-1}\omega=b^{-1}abab\dots}$, and • ${a^{-1}\omega=a^{-1}abab\dots=bab\dots}$ This proves the lemma. $\Box$ Next, let us consider ${x_n}$ a stationary sequence of independent ${F_2}$-valued random variables with distribution ${\mu}$ (so that ${x_n\in\{a,a^{-1},b,b^{-1}\}}$). We claim that ${x_1\dots x_n}$ converges to a point ${z_1}$ of ${\Omega_2}$ with probability ${1}$: Lemma 3 ${x_1\dots x_n\rightarrow z_1\in\Omega_2}$ with probability ${1}$. Proof: The basic idea to get convergence is simple: we write the product ${x_1x_2\dots x_nx_{n+1}\dots}$, we delete all pairs ${x_ix_{i+1}}$ canceling out (i.e., ${x_ix_{i+1}=1}$), and we continue to delete “canceling pairs” until we get a point ${z_1}$ in ${\Omega_2}$. Of course, this procedure fails to produce a limit point ${z_1}$ only if the product ${x_1\dots x_n\dots}$ keeps degenerating indefinitely. However, this possibility occurs only with zero probability thanks to the transience of the random walk ${x_1\dots x_n}$ on the free group ${F_2}$ (i.e, the fact that this random walk wanders to ${\infty}$ with full probability; cf., e.g., this classical paper of Kesten). Equivalently, letting ${\ell(g)}$ denote the length of an element ${g\in F_2}$ (i.e., the minimal length of products expressing ${g}$ in terms of the generators ${\{a, a^{-1}, b, b^{-1}\}}$), the transience of ${x_1\dots x_n}$ means that ${\ell(x_1\dots x_n)\rightarrow\infty}$ as ${n\rightarrow\infty}$ with probability ${1}$. In particular, with probability ${1}$, for each ${k}$, we can select ${n_k\in\mathbb{N}}$ the largest value of ${n}$ such that ${\ell(x_1\dots x_n)=k}$, so that the right multiplication of ${x_1\dots x_{n_k}}$ by ${x_{n_k+1}\dots}$ will not change the first ${k}$ letters of the product ${x_1\dots x_n\dots}$, and, a fortiori, we get (with probability ${1}$) a well-defined limit word that we can denote without any ambiguity $\displaystyle z_1=x_1\dots x_n\dots$ This proves the lemma. $\Box$ An immediate corollary of the proof of this lemma (and the definition of pointwise convergence) is: Corollary 4 For any ${\omega\in\Omega_2}$, ${x_1x_2\dots x_n\omega}$ converges to ${z_1}$. In particular, given any probability measure ${\nu}$ on ${\Omega_2}$, one has $\displaystyle (x_1x_2\dots x_n)_*(\nu)\rightarrow\delta_{z_1}$ Another way of phrasing this corollary is: ${\Omega_2}$ carries only one ${\mu}$-stationary measure ${\nu}$. Indeed, from the limit point ${z_1}$ provided by the previous lemma, we can define a sequence ${\{z_k\}}$ of ${\Omega_2}$-valued random variables by setting ${z_k=x_k x_{k+1}\dots}$ and it is not hard to check that ${\{z_k\}}$ is a ${\mu}$-process (as defined in the previous post). 
Therefore, if ${\nu}$ is a ${\mu}$-stationary probability measure on ${\Omega_2}$, the convergence asserted in the corollary and Proposition 7 of the previous post says that ${\nu}$ is the distribution of a ${\mu}$-process. Furthermore, Proposition 5 of the previous post asserts that this ${\mu}$-process is exactly ${\{z_k\}}$, that is, the unique ${\mu}$-stationary measure ${\nu}$ on ${\Omega_2}$ is the distribution of the ${\mu}$-process ${\{z_k\}}$. In summary, denoting by ${\nu}$ the unique ${\mu}$-stationary on ${\Omega_2}$ (namely, the distribution of ${\{z_k\}}$), our discussion so far shows that ${(\Omega_2,\nu)}$ is a boundary of ${(F_2,\mu)}$. In fact, one can make this boundary entirely explicit because it is not hard to guess ${\nu}$: Lemma 5 Let ${\nu}$ be the probability measure on ${\Omega_2}$ giving the maximal independence between the coordinates, i.e., ${\omega_1}$ takes one of the four values ${\{a,a^{-1},b,b^{-1}\}}$ with equal probability, ${\omega_2}$ takes one of the three possible values ${\{a,a^{-1},b,b^{-1}\}-\{\omega_1^{-1}\}}$ with equal probability, etc., so that the ${\nu}$-measure of a cylinder obtained by prescribing the first ${n}$-entries is ${1/(4\cdot 3^{n-1})}$, i.e., $\displaystyle \nu(\{\omega=\omega_1\omega_2\dots\in\Omega_2: \omega_1=\omega_1^*, \dots, \omega_n=\omega_n^*\})=\frac{1}{4\cdot 3^{n-1}}$ Proof: From our previous discussion, it is sufficient to check that ${\nu}$ is ${\mu}$-stationary, i.e., ${\mu\ast\nu=\nu}$. This fact is not hard to check: we have just to verify that the ${\mu\ast\nu}$-integral and the ${\nu}$-integral of characteristic functions of cylinders coincide, i.e., we have to compute (and show the equality of) the ${\mu\ast\nu}$-measure and the ${\nu}$-measure of any given cylinder $\displaystyle \Sigma=\{\omega=\omega_1\omega_2\dots\in\Omega_2: \omega_1=\omega_1^*, \dots, \omega_n=\omega_n^*\}.$ In this direction, note that, by definition, $\displaystyle \mu\ast\nu(\Sigma)=\frac{1}{4}(\nu(a\Sigma)+\nu(a^{-1}\Sigma)+\nu(b\Sigma)+\nu(b^{-1}\Sigma)).$ On the other hand, for ${g\in\{a,a^{-1},b,b^{-1}\}}$, we have that ${g\Sigma}$ is a cylinder of size ${n+1}$ (i.e., corresponding to the prescription of ${n+1}$ entries) unless ${g=(\omega_1^*)^{-1}}$ where ${g\Sigma}$ is a cylinder of size ${n-1}$. In particular, $\displaystyle \nu(a\Sigma)+\nu(a^{-1}\Sigma)+\nu(b\Sigma)+\nu(b^{-1}\Sigma) = 3\left(\frac{1}{4\cdot 3^{n}}\right)+\frac{1}{4\cdot 3^{n-2}} = \frac{1}{3^{n-1}}.$ By plugging this into the previous equation, we deduce that $\displaystyle \mu\ast\nu(\Sigma)=\frac{1}{4\cdot 3^{n-1}}=\nu(\Sigma)$ as desired. This proves the lemma. $\Box$ At this point, it remains only to check that ${(\Omega_2,\nu)}$ is the Poisson boundary, i.e., all boundaries of ${(F_2,\mu)}$ are equivariant images of ${(\Omega_2,\nu)}$ and the Poisson formula for ${\mu}$-harmonic functions. We will skip the proof of this last fact because it would lead us far from the scope of this post. Instead, we refer to Dynkin-Maljutov paper where it is shown that ${(\Omega_2,\nu)}$ is the Martin boundary (and, a fortiori, the Poisson boundary). 1.2.2. A law of large numbers for ${(F_2,\mu)}$ Using our knowledge of the Poisson boundary ${(\Omega_2,\nu)}$ of ${(F_2,\mu)}$, let us sketch a proof of the fact that $\displaystyle \ell(x_1\dots x_n)\sim n/2$ as ${n\rightarrow\infty}$ with probability ${1}$ (where ${\ell(g)}$ is the length of ${g\in F_2}$). Firstly, let us observe that ${g\nu}$ is absolutely continuous with respect to ${\nu}$ for each ${g\in F_2}$. 
Indeed, since ${\nu}$ is ${\mu}$-stationary, i.e., $\displaystyle \nu=\frac{1}{4}(a\nu+a^{-1}\nu+b\nu+b^{-1}\nu)$ our claim is true for the generators ${a, a^{-1}, b, b^{-1}}$ of ${F_2}$, and, a fortiori, it is also true for all ${g\in F_2}$. Actually, the Radon-Nikodym density ${dg\nu/d\nu}$ is not hard to compute: if ${\omega\in\Omega_2}$ is the limit of the sequence ${g_n\in F_2}$, then one can check by induction on ${\ell(g)}$ that $\displaystyle \frac{dg\nu}{d\nu}(\omega)=3^{\ell(g_n)-\ell(g^{-1}g_n)}$ for all ${n}$ sufficiently large. In other words, $\displaystyle -\log\left(\frac{dg\nu}{d\nu}(\omega)\right)=\lim\limits_{n\rightarrow\infty}(\log 3)(\ell(g^{-1}g_n)-\ell(g_n)). \ \ \ \ \ (1)$

Consider now the quantity $\displaystyle \frac{1}{n}\ell(x_1\dots x_n) = \frac{1}{n}\sum\limits_{i=1}^n (\ell(x_i x_{i+1}\dots x_n)-\ell(x_{i+1}\dots x_n))$ For each ${i}$, we know that ${(\log 3)(\ell(x_i x_{i+1}\dots x_n)-\ell(x_{i+1}\dots x_n))\rightarrow-\log \frac{dx_i^{-1}\nu}{d\nu}(z_{i+1})}$. From this, it is possible to prove (after some work [related to the fact that we want to let ${i}$ and ${n}$ vary…]) that $\displaystyle (\log 3)\lim\limits_{n\rightarrow\infty}\frac{1}{n}\ell(x_1\dots x_n) = \lim\limits_{n\rightarrow\infty}-\frac{1}{n}\sum\limits_{i=1}^n \log \frac{dx_i^{-1}\nu}{d\nu}(z_{i+1})$ By Birkhoff’s ergodic theorem, the time-average on the right-hand side of this equality converges to the spatial-average with probability ${1}$, i.e., $\displaystyle \lim\limits_{n\rightarrow\infty}-\frac{1}{n}\sum\limits_{i=1}^n \log \frac{dx_i^{-1}\nu}{d\nu}(z_{i+1})=-\int_{F_2}\int_{\Omega_2}\log\frac{dg^{-1}\nu}{d\nu}(\omega) d\nu(\omega)d\mu(g).$ Using (1), we can compute this integral as follows. By definition of ${\mu}$, $\displaystyle \int_{F_2}\int_{\Omega_2}-\log\frac{dg^{-1}\nu}{d\nu}(\omega) d\nu(\omega)d\mu(g)=\frac{1}{4}\sum\limits_{g\in\{a, a^{-1}, b, b^{-1}\}}\int_{\Omega_2}-\log\frac{dg^{-1}\nu}{d\nu}(\omega)d\nu(\omega)$ Now, for each ${\omega=\omega_1\omega_2\dots=\lim\limits_{n\rightarrow\infty}g_n\in\Omega_2}$ and ${g\in\{a,a^{-1},b,b^{-1}\}}$, we have that ${\ell(gg_n)-\ell(g_n)=1}$ unless ${g=\omega_1^{-1}}$ in which case ${\ell(gg_n)-\ell(g_n)=-1}$. In particular, from (1), we get that $\displaystyle \sum\limits_{g\in\{a,a^{-1},b,b^{-1}\}}-\log\frac{dg^{-1}\nu}{d\nu}(\omega) = 2\log3$ for each ${\omega\in\Omega_2}$. By plugging this into the previous equation, we obtain that $\displaystyle \int_{F_2}\int_{\Omega_2}-\log\frac{dg^{-1}\nu}{d\nu}(\omega) d\nu(\omega)d\mu(g)=\frac{1}{2}\log 3$ and, hence, $\displaystyle \lim\limits_{n\rightarrow\infty}\frac{1}{n}\ell(x_1\dots x_n) = 1/2,$ i.e., ${\ell(x_1\dots x_n)\sim n/2}$ as ${n\rightarrow\infty}$ with probability ${1}$.

1.2.3. A random walk on the free group ${F_{\infty}}$ on ${\infty}$ generators

The arguments of the previous subsubsections also apply to the free groups ${F_r}$ on ${r\in\mathbb{N}}$ generators equipped with the Bernoulli probability assigning equal probabilities to the ${2r}$ singletons consisting of the generators and their inverses. However, the simple-minded extension of this discussion to the free group ${F_{\infty}}$ on countably many generators fails because there is no Bernoulli measure in this case (one cannot assign equal positive probabilities to infinitely many generators). Nevertheless, it is possible to equip ${F_{\infty}}$ with some probability measure ${\mu_{\infty}}$ such that the Poisson boundary of ${(F_{\infty},\mu_{\infty})}$ coincides with the Poisson boundary ${(\Omega_2,\nu)}$ of ${(F_2,\mu)}$. 
In fact, this is so because ${F_{\infty}}$ “behaves like a lattice” of ${F_2}$. More concretely, the important fact about ${F_{\infty}}$ is that it is isomorphic to the commutator subgroup ${[F_2,F_2]}$ of ${F_2}$. Using this fact, we can formalize the idea that ${F_{\infty}}$ is a lattice of ${F_2}$ through the following lemma: Lemma 6 ${F_{\infty}}$ (or, more precisely, the commutator subgroup ${[F_2,F_2]}$) is a recurrence set for the random walk ${u_n=x_1\dots x_n}$ on ${F_2}$ in the sense that ${u_n}$ meets ${F_{\infty}\simeq[F_2,F_2]}$ infinitely often with probability ${1}$. Proof: The quotient group ${F_2/[F_2, F_2]}$ is the free Abelian group on ${2}$ generators, i.e., ${F_2/[F_2, F_2]\simeq\mathbb{Z}^2}$. Now, by projecting to ${F_2/[F_2, F_2]\simeq\mathbb{Z}^2}$ the random walk ${u_n=x_1\dots x_n}$ on ${F_2}$, we obtain the simple random walk ${v_n}$ on ${\mathbb{Z}^2}$. In this notation, the assertion that ${[F_2, F_2]}$ is a recurrence set of ${u_n}$ corresponds to the well-known fact that the simple random walk ${v_n}$ on ${\mathbb{Z}^2}$ returns infinitely often to the origin. $\Box$ From this lemma, we can construct (at least heuristically) a probability measure ${\mu_{\infty}}$ on ${F_{\infty}}$ whose Poisson boundary is the same of ${(F_2,\mu)}$ (namely, ${(\Omega_2,\nu)}$). In fact, letting ${u_n=x_1\dots x_n}$ be the random walk on ${F_2}$, we know that (with probability ${1}$) there is a subsequence ${u_{n_k}}$ hitting ${F_{\infty}}$. On the other hand, ${u_n}$ converges to a limit point ${z_1\in\Omega_2}$. In particular, it follows that the boundary points ${z_1\in\Omega_2}$ are also limits of the ${F_{\infty}}$-valued random variables ${\{u_{n_k}\}}$. Thus, if one can show that ${\{u_{n_k}\}}$ are independent random variables with a fixed distribution ${\mu_{\infty}}$, then ${(F_{\infty},\mu_{\infty})}$ has Poisson boundary ${(\Omega_2,\nu)}$. Here, the keyword is the strong Markov property of ${x_n}$: more precisely, we notice that from ${u_{n_k}}$ to ${u_{n_{k+1}}}$ we multiply (to the right) ${x_{n_k+1}x_{n_k+2}\dots x_{n_{k+1}}}$; since ${u_{n_k}}$ and ${u_{n_{k+1}}}$ belong to ${F_{\infty}}$, we have that ${x_{n_k+1}x_{n_k+2}\dots x_{n_{k+1}}\in F_{\infty}}$ and, in particular, this suggests that ${\mu_{\infty}}$ is the distribution of the position of ${u_n}$ at the first time that ${u_n=x_1\dots x_n}$ enters ${F_{\infty}}$; of course, to formalize this, we need to know that the the entrance times ${n_k}$ of ${u_n}$ on ${F_{\infty}}$ are themselves random variables (or rather that ${x_{n_k+1}\dots x_{n_{k+1}}}$ ${x_{n_{k+1}+1}\dots x_{n_{k+2}}}$ are still independent random variables) and this is precisely a consequence of the strong Markov property of ${\{x_n\}}$. Alternatively, one can show that ${(F_{\infty},\mu_{\infty})}$ has Poisson boundary ${(\Omega_2,\nu)}$ by working with harmonic functions. More precisely, our task consists into showing that the restrictions to ${F_{\infty}}$ of ${\mu}$-harmonic functions on ${F_2}$ are ${\mu_{\infty}}$-harmonic and all ${\mu_{\infty}}$-harmonic functions on ${F_{\infty}}$ arise in this way (by restriction of ${\mu}$-harmonic functions on ${F_2}$). 
In this direction, one recalls that, for each ${g\in F_{\infty}}$, the quantity ${\mu_{\infty}(g)}$ is the probability that ${x_1\dots x_n}$ takes value ${g}$ at the first time it enters ${F_{\infty}}$, and then one shows the desired facts about ${\mu}$-harmonic functions versus ${\mu_{\infty}}$-harmonic functions with the aid of the following abstract lemma: Lemma 7 Let ${G}$ be a discrete, ${\mu}$ a probability measure on ${G}$ and ${\{x_n\}}$ a stationary sequence of independent random variables with distribution ${\mu}$. Suppose that ${R\subset G}$ is a recurrence set of ${u_n=x_1\dots x_n}$ and let ${h=h(g)}$ be a ${\mu}$-harmonic function. Then, $\displaystyle h(g)=\sum\limits_{g^*\in R} \theta_g(g^*) h(g^*)$ where ${\theta_g}$ is the distribution of the first point of ${R}$ hit by ${gx_1\dots x_n}$. We refer to pages 31 and 32 of Furstenberg’s survey for a (short) proof of this lemma and its application to the computation of the Poisson boundary of ${(F_{\infty},\mu_{\infty})}$. 1.3. ${SL(n,\mathbb{R})}$, ${n\geq 2}$ Let ${G=G_n=SL(n,\mathbb{R})}$, ${n\geq 2}$, and denote by ${K=K_n}$ the (maximal compact) subgroup of orthogonal matrices in ${G}$ and ${P=P_n}$ the (Borel or minimal parabolic) subgroup upper triangular matrices in ${G}$. From the Gram-Schmidt orthogonalization process, we know that ${G=KP}$. Definition 8 A probability measure ${\mu}$ on ${G}$ is called spherical if • ${\mu}$ is absolutely continuous with respect to Haar measure on ${G}$ and • ${\mu}$ is ${K}$-bi-invariant, i.e., ${k\mu=\mu k=\mu}$ for all ${k\in K}$, or, equivalently, ${\int_G f(k_1 g k_2)d\mu(g)=\int_G f(g)d\mu(g)}$ for all ${k_1, k_2\in K}$. Let us now consider the homogenous space ${B=B_n=G/P}$ (of complete flags in ${\mathbb{R}^n}$), i.e., a space where ${G}$ acts continuously and transitively (by left multiplication ${g(hP)=(gh)P}$ on cosets ${hP\in G/P}$, of course). Since ${K}$ acts transitively on ${B}$, there exists an unique ${K}$-invariant measure on ${B}$ that we will denote ${m_B}$ (philosophically, this is comparable to the fact that the unique translation-invariant measure on the real line is Lebesgue). Now, given any probability measure ${\nu}$ on ${B}$ and any spherical measure ${\mu}$ on ${G}$, note that ${\mu\ast\nu}$ is a ${K}$-invariant probability measure on ${B}$, and, a fortiori, ${\mu\ast\nu=m_B}$. In particular, ${m_B}$ is the unique ${\mu}$-stationary measure on ${B}$. In this context, Furstenberg proved the following theorem: Theorem 9 (Furstenberg) ${(B,m_B)}$ is the Poisson boundary of ${(G,\mu)}$ whenever ${\mu}$ is a spherical measure. We will not comment on the proof of this theorem here: instead, we refer the reader to the original article of Furstenberg for more details (and, in fact, a more general result valid for all semi-simple Lie groups ${G}$). An interesting consequence of this theorem is the following fact: Corollary 10 The class of ${\mu}$-harmonic functions on ${G}$ is the same for all spherical measures ${\mu}$. Proof: By the definition of the Poisson boundary, the fact (ensured by Furstenberg’s theorem) that ${(B,m_B)}$ is the Poisson boundary of ${(G,\mu)}$ means that we have a Poisson formula $\displaystyle h(g)=\int_B\hat{h}(g\xi)dm_B(\xi)$ giving all ${\mu}$-harmonic functions ${h}$ on ${G}$ as integrals of bounded measurable functions ${\hat{h}}$ on ${B}$. Since the right-hand side of this equation does not depend on ${\mu}$ (only on ${m_B}$), the corollary follows. 
$\Box$ From this corollary, it is natural to define a harmonic function on ${G_n=SL(n,\mathbb{R})}$ as a ${\mu}$-harmonic function with respect to any spherical measure ${\mu}$. For later use, let us describe more geometrically the Poisson boundaries ${B_2}$ and ${B_3}$ (resp.) of ${SL(2,\mathbb{R})}$ and ${SL(3,\mathbb{R})}$ (resp.). In fact, we already hinted that, in general, ${B=B_n}$ is the complete flag variety of ${\mathbb{R}^n}$, but we will discuss the particular cases of ${B_2}$ and ${B_3}$ because our plan is to show Theorem 1 in the context of ${SL(2,\mathbb{R})}$ and ${SL(3,\mathbb{R})}$ only.

1.3.1. Poisson boundary of ${SL(2,\mathbb{R})}$

Consider the usual action of ${G=SL(2,\mathbb{R})}$ on ${\mathbb{R}^2}$ and the corresponding induced action on the projective space ${\mathbb{P}^1}$ of directions/lines ${\overline{v}=\mathbb{R}\cdot v}$ in ${\mathbb{R}^2}$ associated to non-zero vectors ${v\in\mathbb{R}^2}$. By definition, the subgroup ${P}$ of upper triangular matrices in ${SL(2,\mathbb{R})}$ is the stabilizer of the direction ${\overline{e_1}=\mathbb{R}\cdot e_1}$ generated by the vector ${e_1=(1,0)}$. In particular, since ${G}$ acts transitively on ${\mathbb{P}^1}$, we deduce that ${B_2=G/P\simeq\mathbb{P}^1}$.

Let us now try to understand how ${B_2}$ is attached to ${G=G_2=SL(2,\mathbb{R})}$ in terms of the measure topology described in the previous post. By definition, an element ${g\in G_2}$ is close to ${\xi\in B_2}$ whenever the measure ${gm_B}$ is close to ${\delta_{\xi}}$. On the other hand, a “large” element $\displaystyle g=\left(\begin{array}{cc}a & b \\ c & d \end{array}\right)\in SL(2,\mathbb{R}),$ i.e., a matrix ${g}$ with large operator norm ${\|g\|}$ has at least one large column vector, either ${ge_1=\left(\begin{array}{c}a \\ c\end{array}\right)}$ or ${ge_2=\left(\begin{array}{c}b \\ d\end{array}\right)}$, and, if both column vectors are large, then their directions ${\overline{ge_1}}$ and ${\overline{ge_2}}$ are close by unimodularity of ${g}$ (indeed, if they are both large and they form a rectangle of area ${1}$, then their angle must be small). In particular, we conclude that a large element ${g\in SL(2,\mathbb{R})}$ maps most directions ${\overline{u}}$ close to the larger of ${\overline{ge_1}}$, ${\overline{ge_2}}$, except when ${\overline{u}}$ is approximately orthogonal to the rows ${{}^t ge_1=(a,b)}$, ${{}^t ge_2=(c,d)}$ of ${g}$ (where ${{}^t g}$ is the transpose of ${g}$). In summary, we just proved the following lemma:

Lemma 11 Given ${\varepsilon>0}$, there exists a compact subset ${C_{\varepsilon}\subset SL(2,\mathbb{R})}$ such that, if ${g\notin C_{\varepsilon}}$, then there are two intervals ${I, J\subset\mathbb{P}^1}$ of sizes ${<\varepsilon}$ with ${g(\mathbb{P}^1-I)\subset J}$.

An interesting consequence of this lemma concerns the action of large elements of ${SL(2,\mathbb{R})}$ on non-atomic measures on ${\mathbb{P}^1}$. Indeed, given ${\nu}$ an arbitrary non-atomic probability measure on ${\mathbb{P}^1}$, we have that for each ${\delta>0}$ there exists ${\varepsilon>0}$ such that any interval ${L\subset\mathbb{P}^1}$ of size ${<\varepsilon}$ has ${\nu}$-measure ${\nu(L)<\delta}$ (because any point has zero ${\nu}$-measure by assumption). In particular, by combining this information with the previous lemma, we deduce that any large element ${g\notin C_{\varepsilon}}$ of ${SL(2,\mathbb{R})}$ moves most of the mass of ${\nu}$ to a small interval. 
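As a purely numerical aside (my own, with an arbitrarily chosen matrix; the formal argument resumes in the next paragraph), this contraction is easy to observe directly: a fixed element of ${SL(2,\mathbb{R})}$ of large norm sends all but a small arc of ${\mathbb{P}^1}$ into a small arc.

```python
# Numerical aside (mine): a matrix g in SL(2,R) with large operator norm maps
# all directions of P^1 outside a small arc I into a small arc J (cf. Lemma 11).
import numpy as np

g = np.array([[30.0, 7.0], [17.0, 4.0]])                  # det = 30*4 - 7*17 = 1
thetas = np.linspace(0.0, np.pi, 5000, endpoint=False)    # P^1 identified with [0, pi)
u = np.stack([np.cos(thetas), np.sin(thetas)])            # unit representatives
gu = g @ u
images = np.mod(np.arctan2(gu[1], gu[0]), np.pi)          # directions of the images
lo, hi = np.percentile(images, [1, 99])
print("arc containing 98% of the image directions has length", hi - lo,
      "out of", np.pi)
```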
More precisely, denoting by ${I}$ and ${J}$ the intervals of sizes ${<\varepsilon}$ provided by the lemma, we have that ${\nu(I)<\delta}$, i.e., ${\nu(\mathbb{P}^1-I)>1-\delta}$. Therefore, ${g\nu(J)=\nu(g^{-1}(J))\geq \nu(\mathbb{P}^1-I)>1-\delta}$ (since ${g(\mathbb{P}^1-I)\subset J}$), i.e., most of the mass of ${g\nu}$ is concentrated on ${J}$. In other words, we proved the following crucial lemma: Lemma 12 Let ${\nu}$ be a non-atomic probability measure on ${\mathbb{P}^1}$. Then, as ${g\rightarrow\infty}$, ${g\nu}$ converges to a Dirac mass. An alternative way of phrasing this lemma is: as ${g\rightarrow\infty}$, it approaches a definite point of ${B_2}$. In particular, ${G_2\cup B_2}$ is compact, that is, we get a compactification of ${G_2}$ by attaching ${B_2}$! (This result is comparable to Corollary 4 in the context of free groups.) As we will see below, this compacteness property is no longer true for ${G_3\cup B_3}$ (i.e., in the context of ${SL(3,\mathbb{R})}$), and this is one of the main differences between the Poisson boundaries of ${SL(2,\mathbb{R})}$ and ${SL(3,\mathbb{R})}$. As the reader might imagine, this observation will be important for the proof of Theorem 1 (that is, at the moment of distinguishing between the lattices of ${SL(2,\mathbb{R})}$ and ${SL(3,\mathbb{R})}$). Note that, from this lemma, we can show directly that ${(\mathbb{P}^1, m_B)}$ is a boundary of ${(G_2,\mu)}$ whenever ${\mu}$ is spherical (without appealing to Furstenberg’s theorem above). Indeed, denoting by ${x_n}$ the random walk with respect to ${\mu}$, we have that, by definition, ${(\mathbb{P}^1,m_B)}$ is a boundary of ${(G_2,\mu)}$ if and only if ${x_1\dots x_n m_B}$ converges to a Dirac mass with probability ${1}$. Since ${\mathbb{P}^1}$ is a compact space, we know that ${x_1\dots x_n m_B}$ converges to some measure with probability ${1}$ (cf. Corollary 11 of the previous post) and the previous lemma says that this measure is a Dirac mass unless the products ${x_1\dots x_n}$ stay bounded (with positive probability). Now, a random product of elements in a topological group remains bounded (with positive probability) only if the distribution is supported on a compact subgroup (by Birkhoff’s ergodic theorem and the fact that a compact semigroup of a group is a subgroup), but this is not the case as ${\mu}$ is spherical (and thus absolutely continuous with respect to Haar). Finally, once we know that ${(\mathbb{P}^1,m_B)}$ is a boundary of ${(G_2,\mu)}$, i.e., ${x_1\dots x_n m_B}$ converges to a Dirac mass ${\delta_{z_1}}$ where ${z_k}$ is a ${\mu}$-process on ${\mathbb{P}^1}$, we can obtain the following interesting consequences. Firstly, ${x_1\dots x_n\rightarrow\infty}$, i.e., the random walk is transient. Moreover, the direction of the larger of the column vectors of ${x_1\dots x_n}$ converges with probability ${1}$ to a direction ${z_1}$, and, in fact, it can be shown that both column vectors become large. In particular, by unimodularity, it follows that, with probability ${1}$, both column vectors of ${x_1\dots x_n}$ converge to a random point (direction) of ${\mathbb{P}^1}$. 1.3.2. Poisson boundary of ${SL(3,\mathbb{R})}$ By Furstenberg’s theorem, ${B_3=G_3/P_3}$ is the space underlying the Poisson boundary of ${G_3=SL(3,\mathbb{R})}$ equipped with any spherical measure ${\mu}$. Here, we recall that ${P_3}$ is the subgroup of upper triangular ${3\times 3}$ matrices in ${G_3}$. 
We proceed as before, i.e., let us consider the usual action of ${G_3}$ on ${\mathbb{R}^3}$ and the induced action on the space ${F_3}$ of pairs ${(\overline{u},\overline{v})}$ of points in the projective space ${\mathbb{P}^2}$ corresponding to orthogonal directions ${\overline{u}}$ and ${\overline{v}}$ in ${\mathbb{R}^3}$. Note that ${G_3}$ acts naturally on ${F_3}$ via ${g(\overline{u},\overline{v})=(g\overline{u},{}^t g^{-1}\overline{v})}$ (our choice of ${{}^t g^{-1}}$ is natural because the orthogonality condition is preserved and ${g\mapsto {}^t g^{-1}}$ is an automorphism of ${G_3}$ [in particular, our matrices are multiplied in the “correct order”]). Moreover, this action is transitive, so that ${F_3}$ is the quotient of ${G_3}$ by the stabilizer of a point of ${F_3}$. For sake of concreteness, let us consider the point ${(\overline{e_1}, \overline{e_3})\in F_3}$ (where ${e_1, e_2, e_3}$ are the vectors of the canonical basis of ${\mathbb{R}^3}$) and let us determine its stabilizer in ${G_3}$. By definition, if ${g\in G_3}$, say $\displaystyle g=\left(\begin{array}{ccc}g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{array} \right)$ stabilizes ${(\overline{e_1}, \overline{e_3})}$, then ${g\overline{e_1}=\overline{e_1}}$ and ${{}^t g\overline{e_3}=\overline{e_3}}$, i.e., $\displaystyle g_{21}=g_{31}=0$ and $\displaystyle ({}^t g)_{13} = g_{31}=0=g_{32}=({}^tg)_{23},$ that is, ${g}$ is upper triangular. In other words, ${P_3}$ is the stabilizer of ${(\overline{e_1}, \overline{e_3})}$ and, thus, ${F_3\simeq G_3/P_3=B_3}$.

Next, let us again try to understand how ${B_3}$ is attached to ${G_3}$ in terms of the measure topology. Once more, by performing the random walk ${x_1\dots x_n}$ with distribution ${\mu}$, one can check that the column vectors of ${x_1\dots x_n}$ converge to a limit direction ${u_1}$ in ${\mathbb{P}^2}$. In terms of the boundary ${B_3}$, we can recover ${u_1}$ by letting ${z_1=(u_1, v_1)}$ be the point (${\mu}$-process) of ${B_3\simeq F_3}$ obtained as the almost sure limit of ${x_1\dots x_n}$. Of course, in this description, the role of ${v_1}$ remains somewhat mysterious, and so let us try to uncover it now. By definition, ${v_1}$ is the limit point of the column vectors of ${{}^t x_1^{-1}\dots {}^t x_n^{-1}}$, i.e., the random walk obtained from ${x_1\dots x_n}$ by applying the automorphism ${g\mapsto {}^t g^{-1}}$. On the other hand, if ${r_n=U_n(e_1), s_n=U_n(e_2), t_n=U_n(e_3)}$ are the column vectors of ${U_n=x_1\dots x_n}$, then the vectors ${s_n\times t_n}$, ${t_n\times r_n}$ and ${r_n\times s_n}$ are the column vectors of ${{}^t x_1^{-1}\dots {}^t x_n^{-1}}$ (where ${u\times v}$ is the vector product of ${u, v\in\mathbb{R}^3}$). In particular, the fact that these vectors converge to ${v_1}$ means that the directions perpendicular to the 3 planes spanned by the pairs of vectors ${r_n, s_n, t_n}$ converge to ${v_1}$, i.e., all of these 3 planes converge to the plane perpendicular to ${v_1}$.

Finally, let us make more precise the observation from the previous subsubsection that the boundary behaviors of elements in ${G_2=SL(2,\mathbb{R})}$ and ${G_3=SL(3,\mathbb{R})}$ are different, as this is one of the main points in the proof of Theorem 1. Once more, let us point out that Lemma 12 above says that any element ${g\in G_2}$ approaches the boundary (i.e., becomes large) by getting close to a specific point ${\xi\in B_2}$. 
On the other hand, this is no longer true for elements ${g\in G_3}$: for instance, the sequence $\displaystyle g_n=\left(\begin{array}{ccc}n&0&0 \\ 0&n&0 \\ 0&0&1/n^2\end{array}\right)\in SL(3,\mathbb{R})$ goes to infinity without getting close to any specific point of ${B_3}$ because its larger column vectors ${g_ne_1}$ and ${g_ne_2}$ do not converge together to a single direction in ${\mathbb{P}^2}$. Instead, it is not hard to convince oneself (by playing with the definitions) that the sequence of measures ${g_n m_B}$ converges to a measure supported on the great circle ${\{(\overline{u},e_3): \overline{u}\perp e_3\}\subset B_3}$. In general, a great circle is a fiber of one of the two natural fibrations ${B_3\simeq F_3\rightarrow\mathbb{P}^2}$ and it is possible to show that the measures ${gm_B}$ can approach measures supported on any given great circle. Anyhow, the basic point is that ${G_2\cup B_2}$ is compact but ${G_3\cup B_3}$ is not compact. For sake of completeness, let us point out what are the measure that we must add to ${B_3}$ to get a compactification of ${G_3}$. It can be proven that any convergent sequence ${g_nm_B}$ tends to a Dirac mass, a circle measure (i.e., a measure supported on a great circle) or an absolutely continuous measure with respect to ${m_B}$ (this last case occurring only if ${g_n}$ is bounded in ${G_3}$). Moreover, in the case of getting a circle measure as a limit, we get a very specific object: by identifying the great circle with ${\mathbb{P}^1}$ and denoting by ${m_{\mathbb{P}^1}}$ the Lebesgue measure, one has that all circle measure have the form ${gm_{\mathbb{P}^1}}$ where ${g\in G_2=SL(2,\mathbb{R})}$ (and ${SL(2,\mathbb{R})}$ acts on its boundary ${\mathbb{P}^1}$). 1.3.3. Harmonic functions on ${SL(2,\mathbb{R})}$ Closing this quick discussion on the Poisson boundaries of ${SL(n,\mathbb{R})}$, let us quickly comment on the relationship between ${\mu}$-harmonic functions on ${SL(2,\mathbb{R})}$ and classical harmonic functions on Poincaré’s disk. Let ${\mu}$ be a spherical measure on ${G_2=SL(2,\mathbb{R})}$. For the sake of simplicity of notation, we will say that a random walk ${U_n=gx_1\dots x_n}$ with respect to a spherical measure is a Brownian motion. By definition of sphericity of ${\mu}$, the transition from ${g}$ to ${gg'}$ has the same probability as the transition from ${gk}$ to ${gg'}$, and from ${g}$ to ${gg'k}$ for each ${k\in K_2=SO(2,\mathbb{R})}$. In particular, the Brownian motion ${U_n}$ can be transferred to a Brownian motion ${W_n=gx_1\dots x_n K_2}$ on the symmetric space ${G_2/K_2}$ of ${G_2}$. Now, we recall that ${G_2/K_2}$ is Poincaré’s disk ${\mathbb{D}}$/hyperbolic upper-half plane ${\mathbb{H}}$: more concretely, by letting ${G_2}$ act on the upper-half plane ${\mathbb{H}}$ by Möebius transformations, i.e., isometries of ${\mathbb{H}}$ equipped with the hyperbolic metric, $\displaystyle g(z)=(az+b)/(cz+d),$ for ${g=\left(\begin{array}{cc}a&b\\ c&d\end{array}\right)\in SL(2,\mathbb{R})}$, we see that ${K_2}$ is the stabilizer of ${i\in\mathbb{H}}$. In particular, ${G_2/K_2}$ is naturally identified with ${\mathbb{H}}$ via ${gK_2\mapsto g(i)}$, and, hence, we can also think of ${G_2/K_2}$ as the Poincaré’s disk after considering the fractional linear transformation sending ${\mathbb{H}}$ to ${\mathbb{D}}$ in such a way that ${i}$ is sent to the origin ${0}$ and ${\infty}$ is sent to ${1}$. In summary, we can think of the Brownian motion ${U_n}$ as a Brownian motion ${W_n=gx_1\dots x_n(0)}$ on Poincaré’s disk. 
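A toy simulation (mine) may help to visualize this Brownian motion on the disk: below, each step is ${k_{\theta}\, a_t\, k_{\varphi}}$ with independent uniformly random rotations ${k_{\theta}, k_{\varphi}\in SO(2,\mathbb{R})}$ and a fixed small boost ${a_t}$; this step distribution is ${K_2}$-bi-invariant but not absolutely continuous, so it is only a crude stand-in for a genuine spherical measure. One sees ${|W_n|\rightarrow 1}$ while the boundary angle stabilizes, in accordance with the convergence ${x_1\dots x_n m_B\rightarrow\delta_{z_1}}$.

```python
# Toy simulation (mine) of W_n = x_1...x_n(i) on the hyperbolic plane, with a
# K-bi-invariant (but not spherical) step distribution: random rotation, fixed
# small boost, random rotation.  The orbit escapes to the boundary circle and
# its limiting boundary point is random.
import cmath, math, random

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))                       # element of SO(2) inside SL(2,R)

def boost(t):
    return ((math.exp(t / 2.0), 0.0), (0.0, math.exp(-t / 2.0)))

def mul(g, h):
    return tuple(tuple(sum(g[i][k] * h[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def moebius(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

rng = random.Random(4)
g = ((1.0, 0.0), (0.0, 1.0))
for n in range(1, 9001):
    step = mul(mul(rotation(rng.uniform(0, 2 * math.pi)), boost(0.2)),
               rotation(rng.uniform(0, 2 * math.pi)))
    g = mul(g, step)                               # U_n = x_1 ... x_n
    if n in (300, 1000, 3000, 9000):
        w = (moebius(g, 1j) - 1j) / (moebius(g, 1j) + 1j)   # map H to the disk, i -> 0
        print(n, abs(w), cmath.phase(w))           # |w| -> 1, the phase stabilizes
```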
Here, it is worth to point out that the transitions of ${W_n}$ are not given by group multiplication as ${G_2}$ acts on the left and the ${x_j}$‘s are multiplied from the right. Finally, if we transfer the measure topology on ${G_2\cup B_2}$ to ${\mathbb{D}\cup\partial\mathbb{D}=\overline{\mathbb{D}}}$, we get the usual Euclidean topology on the closed unit disk ${\overline{\mathbb{D}}}$. Indeed, suppose that ${g(0)=z}$ with ${|z|\sim 1}$. Then, ${g}$ transfers most of the mass of ${m_B}$ to a point of ${\partial\mathbb{D}}$ close to ${z}$. In particular, if ${g_n(0)=z_n\rightarrow\xi\in\partial\mathbb{D}}$, then ${g_nm_B\rightarrow\delta_{\xi}}$, that is, ${g_n\in G_2}$ converges to ${\xi\in\mathbb{P}^1}$, and, a fortiori, the Euclidean topology on ${\overline{\mathbb{D}}}$ is the topology we get after transferring the measure topology. Finally, note that the value of a harmonic function ${h(g)}$ (with respect to any spherical measure) depends only on the coset ${gK_2}$ (by sphericity and the mean value property of harmonic functions). Thus, ${h}$ induces a function ${\widetilde{h}}$ on ${G_2/K_2}$. By Poisson’s formula (in the definition of Poisson boundary), we have that $\displaystyle \widetilde{h}(g(0)) = h(g) = \int\hat{h}(g\xi) dm_B(\xi) = \int\hat{h}(\xi)\frac{dgm_B}{dm_B}(\xi)dm_B(\xi).$ On the other hand, by computing the density ${dgm_B/dm_B}$ (using that ${g}$ acts via Möebius transformations and ${m_B}$ is the Lebesgue measure), we can show that $\displaystyle \frac{dgm_B}{dm_B}(\xi)=P(g(0),\xi)$ where ${P(z,\xi)}$ is the classical Poisson kernel on the unit disc. In other words, by letting ${z=g(0)}$, we obtain that $\displaystyle \widetilde{h}(z)=\int\hat{h}(\xi) P(z,\xi) dm_B(\xi)$ and, hence, the function ${\widetilde{h}}$ is harmonic in the classical sense. In summary, the two notions of harmonic’ are the same. Probabilistically speaking, the formula above says that the value ${\widetilde{h}(z)}$ is obtained by integrating the boundary values ${\hat{h}(\xi)}$ with respect to the hitting measure ${gm_B}$ on the boundary starting at ${z}$‘ (as ${gm_B}$ is the distribution of the limit of ${W_n=gx_1\dots x_n}$ [because ${m_B}$ is the distribution of the limit of ${x_1\dots x_n}$ and by invariance of the Brownian motion under the group]). 1.4. Mapping class group and Teichmüller space As it is “customary”, the mapping class groups and Teichmüller spaces are very close to lattices in Lie groups and homogenous spaces. In particular, this partly motivates these two articles of Kaimanovich and Masur about the Poisson boundary of the mapping class group and Teichmüller spaces, where it is shown that it is the Thurston compactification (via projective measure foliations) equipped with a natural harmonic measure. Of course, it is out of the scope of this post to comment on this subject and we refer the curious reader to (very well-written) papers of Kaimanovich-Masur. 2. Poisson boundary of lattices of ${SL(n,\mathbb{R})}$ After this series of examples of Poisson boundaries, let us come back to the proof of Theorem 1. At this stage, we know that ${G_n=SL(n,\mathbb{R})}$ equipped with any spherical measure has Poisson boundary ${(B_n, m_{B_n})}$ and now we want to distinguish between lattices of ${G_n}$. As we mentioned in the previous post, the basic idea is that a `nice’ random walk in a lattice ${\Gamma}$ of ${G_n}$ should see the whole boundary of ${G_n}$. 
In fact, this statement should be compared with the results in Subsubsection 1.2.3 above where we saw that an adequate random walk in the free group ${F_{\infty}}$ in ${\infty}$ generators sees the whole boundary of the free group ${F_2}$ in two generators because ${F_{\infty}}$ behaved like a lattice in ${F_2}$, or, more accurately, it was a recurrence set for the symmetric random walk in ${F_2}$. However, this heuristic for free groups can not be applied ipsis-literis to lattices ${\Gamma}$ of ${G_n}$ because a countable set ${\Gamma}$ can not be a recurrence set for a Brownian motion in ${G_n}$. Nevertheless, one still has the following result: Theorem 13 (Furstenberg) If ${\Gamma}$ is a lattice of ${G_n= SL(n,\mathbb{R})}$, then there exists a probability measure ${\mu}$ on ${\Gamma}$ such that the Poisson boundary of ${(\Gamma,\mu)}$ coincides with the Poisson boundary ${(B_n,m_{B_n})}$ of ${G_n}$ (with respect to any spherical measure). In order to simplify the exposition, we will restrict our attention to the low dimensional cases. More concretely, we will sketch the construction of ${\mu}$ in the case of cocompact lattices in ${G_2=SL(2,\mathbb{R})}$ and we will show that ${(\Gamma,\mu)}$ has ${(B_3,m_{B_3})}$ as a boundary. However, we will not enter into the details of showing that ${(B_n,m_{B_n})}$ is the Poisson boundary of ${(\Gamma,\mu)}$: instead, we refer the reader to Furstenberg’s survey for a proof in the case of ${n=2}$ (i.e., ${SL(2,\mathbb{R})}$) and his original article for the general case. 2.1. Construction of ${\mu}$ in the case ${n=2}$ Consider again the symmetric space ${G_n/K_n}$ associated to ${G_n}$. This space has a natural ${G_n}$-invariant metric ${d(gK_n,g'K_n)}$ and, using this metric, we have a function $\displaystyle \Lambda(g)=d(gK,K)$ measuring the distance to the origin. Note that, in the particular case ${n=2}$, the function ${\Lambda}$ has a very simple expression: $\displaystyle \Lambda(g)=\log\frac{1+|g(0)|}{1-|g(0)|}$ for ${g\in SL(2,\mathbb{R})=G_2}$. Proposition 14 If ${\Gamma}$ is a cocompact subgroup of ${G_n}$, then there exists a probability measure ${\mu}$ on ${\Gamma}$ such that • (a) ${\textrm{supp}(\mu)=\Gamma}$, i.e., ${\mu(\{\gamma\})>0}$ for all ${\gamma\in\Gamma}$, • (b) ${m_B=m_{B_n}}$ is ${\mu}$-stationary, i.e., ${\mu\ast m_B = m_B}$, • (c) the restriction of the function ${\Lambda}$ to ${\Gamma}$ is ${\mu}$-integrable, i.e., $\displaystyle \sum\limits_{\gamma\in\Gamma} \mu(\gamma)\Lambda(\gamma)<\infty.$ Remark 3 The condition (b) above implies that the restriction to ${\Gamma}$ of an arbitrary harmonic function on ${G_n}$ is ${\mu}$-harmonic. In particular, this means that a harmonic function on ${G_n}$ satisfies plenty (at least one per cocompact lattice of ${G_n}$) of discrete mean-value equalities. For the proof of this proposition, we will focus on the construction of a measure satisfying item (b) and then we will adjust it to satisfy items (a) and (c). Also, we will discuss only the case of cocompact lattices ${\Gamma}$ in ${G_2=SL(2,\mathbb{R})}$. In this direction, let us re-interpret item (b) in terms of the Brownian motion on Poincaré’s disk ${\mathbb{D}}$ (that is, the symmetric space ${G_2/K_2}$ of ${G_2}$). As we saw above, ${gm_B}$ is the hitting distribution on ${\partial\mathbb{D}}$ of the Brownian motion starting at ${g(0)}$. 
In particular, the stationarity condition in item (b) says that the hitting probability ${m_B}$ on ${\partial\mathbb{D}}$ starting at the origin ${0}$ is a convex linear combination of the hitting probabilities ${\gamma m_B}$ on ${\partial\mathbb{D}}$ starting at the points ${\gamma(0)}$ (for ${\gamma\in\Gamma}$). This hints at how we will show item (b): the measure ${\mu}$ will correspond to the weights ${\mu(\gamma)}$ used to write ${m_B}$ as a convex linear combination of ${\gamma m_B}$, ${\gamma\in\Gamma}$. Keeping this goal in mind, it is clear that the following lemma will help us with our task:

Lemma 15 Let ${\Gamma}$ be a cocompact lattice of ${G_2=SL(2,\mathbb{R})}$ and denote by ${m_z=gm_B}$ the hitting probability on ${\partial\mathbb{D}}$ of a Brownian motion starting at ${z=g(0)}$. Then, there are two constants ${0<L<\infty}$ and ${\delta>0}$ such that, for any ${z_0\in\mathbb{D}}$, one has $\displaystyle m_{z_0} = \sum\limits_{j}p_j m_{\gamma_j(0)}+\int m_{\zeta}d\lambda(\zeta)$ where ${p_j>0}$, the points ${\gamma_j(0)}$ and ${\zeta}$ are within a hyperbolic distance ${L}$ of ${z_0}$, and ${\lambda}$ is a non-negative measure of total mass ${<1-\delta}$.

Proof: Since ${\Gamma}$ is cocompact, it has a compact fundamental domain. In particular, we can find a large constant ${L<\infty}$ such that the hyperbolic ball ${B(z_0, L)}$ of radius ${L}$ around any ${z_0\in\mathbb{D}}$ contains in its interior at least one point of the form ${\gamma(0)}$ with ${\gamma\in\Gamma}$. For sake of simplicity of the exposition, during this proof, we will replace the discrete-time Brownian motion ${gU_n=gx_1\dots x_n}$ by its continuous-time analog for a technical reason that we discuss now. For each ${z_0\in\mathbb{D}}$ and ${z}$ in the interior of ${B(z_0, L)}$, the hitting measure ${m_z}$ on ${\partial\mathbb{D}}$ starting at ${z}$ can be computed by noticing that a continuous-time Brownian motion emanating from ${z}$ must hit the circle ${C(z_0, L):=\partial B(z_0, L)}$ before heading towards ${\partial\mathbb{D}}$. Of course, the same is not true for the discrete-time Brownian motion (as we can jump across ${C(z_0, L)}$), but we could have overcome this small difficulty by considering an annulus around ${C(z_0, L)}$. However, we will stick to the continuous-time Brownian motion to simplify matters. Anyhow, by combining this observation with the strong Markov property of the Brownian motion, one has that $\displaystyle m_z=\int_{C(z_0, L)} m_{\zeta} m(z, C(z_0,L); d\zeta)$ where ${m(z, C(z_0, L); d\zeta)}$ is the hitting distribution of ${\zeta}$ for a Brownian motion starting at ${z}$ (i.e., for each interval ${J\subset C(z_0, L)}$, the measure ${m(z, C(z_0,L); J)}$ is the probability that a Brownian motion starting at ${z}$ first hits ${C(z_0, L)}$ at a point in ${J}$). 
Next, let us consider the points of the form ${\gamma_j(0)}$, ${\gamma_j\in\Gamma}$, inside (the interior of) ${B(z_0,L)}$, choose positive numbers ${p_j>0}$ and consider the measure $\displaystyle m_{z_0}-\sum p_j m_{\gamma_j(0)}$ By the previous formula for the hitting measures ${m_z}$, we can rewrite the measure above as $\displaystyle \int_{C(z_0,L)} m_{\zeta}\left(m(z_0, C(z_0,L); d\zeta)-\sum p_j m(\gamma_j(0), C(z_0,L); d\zeta)\right) \ \ \ \ \ (2)$ Pictorially, this integral represents weighted contributions from Brownian motions emanating from ${z_0}$ and from the points ${\gamma_j(0)}$, all stopped at the circle ${C(z_0,L)}$.

At this point, we observe that the measures ${m(z, C(z_0, L);\cdot)}$ are absolutely continuous with respect to the Lebesgue measure on ${C(z_0,L)}$ whenever ${z}$ belongs to the interior of ${B(z_0, L)}$ (as our Brownian motion is guided by a spherical measure [by definition]). Therefore, by taking ${p_j}$ small (depending on how close ${\gamma_j(0)}$ is to the circle ${C(z_0, L)}$), we can ensure that the measure $\displaystyle \lambda=m(z_0, C(z_0,L); d\zeta)-\sum p_j m(\gamma_j(0), C(z_0,L); d\zeta) \ \ \ \ \ (3)$ appearing in the right-hand side of (2) is positive. Furthermore, note that this scenario is ${\Gamma}$-invariant: if we replace ${z_0}$ by ${\gamma z_0}$ for ${\gamma\in\Gamma}$, the circle ${C(z_0, L)}$ is replaced by ${\gamma(C(z_0,L))=C(\gamma z_0, L)}$ and the elements ${\gamma_j\in\Gamma}$ are replaced by ${\gamma\gamma_j}$, but we can keep the same values of ${p_j}$. In other words, the values of ${p_j}$ (making the measure in (3) positive), and, a fortiori, the quantity ${\sum p_j}$, depend only on the coset ${\Gamma g_0}$ where ${g_0\in G_2}$ satisfies ${g_0(0)=z_0}$. Therefore, by the compactness of ${G/\Gamma}$, we can find some ${\delta>0}$ such that, for all ${z_0\in\mathbb{D}}$, the values of ${p_j}$ (making (3) positive) can be chosen so that $\displaystyle \sum p_j>\delta$ In particular, it follows that ${\lambda}$ is a positive measure of total mass ${<1-\delta}$. In summary, we managed to write $\displaystyle m_{z_0}=\sum p_j m_{\gamma_j(0)} + \int m_{\zeta} d\lambda(\zeta)$ where ${\lambda}$ is a positive measure of total mass ${<1-\delta}$, as desired. $\Box$

By taking ${z_0=0}$ in this lemma, we know that one can write ${m_B=m_{0}}$ as a convex linear combination of a “main contribution” coming from ${m_{\gamma_j(0)}}$‘s (where the distances from the ${\gamma_j(0)}$‘s to ${0}$ are ${<L}$) and a “boundary contribution” coming from an integral of ${m_{\zeta}}$ with respect to a measure ${\lambda}$ of total mass ${<1-\delta}$. From this point, the idea to construct a probability measure ${\mu}$ on ${\Gamma}$ satisfying item (b) of Proposition 14 is very simple: we repeatedly apply the lemma to the ${m_{\zeta}}$‘s appearing in the “boundary contribution” in order to push it away to infinity; here, the convergence of this procedure is ensured by the fact that the boundary measure ${\lambda}$ loses a definite factor (of ${1-\delta}$) of its mass at each step.

More concretely, this idea can be formalized as follows. By induction, assume that, at the ${n}$th step, we wrote $\displaystyle m_B = m_0 = \sum p_{\gamma}^{(n)} m_{\gamma(0)} + \int m_{\eta} d\lambda^{(n)}(\eta) \ \ \ \ \ (4)$ where ${p_{\gamma}^{(n)}>0}$ only for ${\gamma\in\Gamma}$ such that the distance ${d(\gamma(0), 0)}$ between ${\gamma(0)}$ and ${0}$ is ${<nL}$ and ${\lambda^{(n)}}$ is a positive measure on the hyperbolic ball ${B(0, nL)}$ (of radius ${nL}$ and center ${0}$) of total mass ${<(1-\delta)^n}$. 
By applying the lemma to ${m_{\eta}}$ (i.e., with ${z_0=\eta}$), we also have $\displaystyle m_{\eta}=\sum p_{\gamma}(\eta)m_{\gamma(0)} + \int m_{\zeta} d\lambda_{\eta}(\zeta)$ where ${p_{\gamma}(\eta)>0}$ only for ${d(\gamma(0),\eta)\leq L}$, and ${\lambda_{\eta}}$ is supported in ${B(\eta, L)}$ and it has total mass ${<(1-\delta)}$. By combining these equations, we deduce that $\displaystyle \begin{array}{rcl} m_0 & = & \sum\left(p_{\gamma}^{(n)}+\int p_{\gamma}(\eta)d\lambda^{(n)}(\eta)\right)m_{\gamma(0)}+\int\int m_{\zeta}d\lambda_{\eta}(\zeta)d\lambda^{(n)}(\eta) \\ & := & \sum p_{\gamma}^{(n+1)} m_{\gamma(0)} + \int m_{\eta} d\lambda^{(n+1)}(\eta) \end{array}$ where ${p_{\gamma}^{(n+1)}>0}$ only for ${d(\gamma(0), 0)\leq d(\gamma(0),\eta)+d(\eta,0)<(n+1)L}$, and ${\lambda^{(n+1)}}$ is a positive measure supported on ${B(0, (n+1)L)}$ whose total mass is $\displaystyle \int \lambda_{\eta}(\mathbb{D}) d\lambda^{(n)}(\eta)\leq (1-\delta)^{n+1}.$ In particular, by setting ${\mu(\gamma)=\lim\limits_{n\rightarrow\infty} p_{\gamma}^{(n)}}$, we find that $\displaystyle m_B=m_0=\sum\limits_{\gamma\in\Gamma}\mu(\gamma)m_{\gamma(0)}=\mu\ast m_0=\mu\ast m_B,$ that is, the stationarity condition of item (b) of Proposition 14 is proved.

Now, we claim that item (c) of Proposition 14 is satisfied by ${\mu}$. Indeed, by construction, for the elements ${\gamma\in\Gamma}$ with ${\Lambda(\gamma):=d(\gamma(0),0)>nL}$, the quantity ${\mu(\gamma)}$ comes from the ${\lambda^{(n)}}$-combination of the contributions of the measures ${m_{\eta}}$ in the right-hand side of (4). Since the measure ${\lambda^{(n)}}$ has total mass ${<(1-\delta)^n}$, we deduce that $\displaystyle \sum\limits_{\substack{\gamma\in\Gamma, \\ \Lambda(\gamma)>nL}}\mu(\gamma)<(1-\delta)^n$ and, therefore, $\displaystyle \sum\limits_{n\in\mathbb{N}}\sum\limits_{\substack{\gamma\in\Gamma, \\ \Lambda(\gamma)>nL}}\mu(\gamma)<\infty.$ In particular, given that any element ${\gamma\in\Gamma}$ with ${kL<\Lambda(\gamma)\leq (k+1)L}$ appears ${k}$ times in the sum above, we conclude that $\displaystyle \sum\limits_{\gamma\in\Gamma}\mu(\gamma)\Lambda(\gamma)<\infty,$ that is, the ${\mu}$-integrability condition on ${\Lambda}$ in item (c) of Proposition 14 is verified.

Finally, the full support condition in item (a) of Proposition 14 might not be true for the probability measure ${\mu}$ constructed above. However, it is not hard to fulfill this condition by slightly changing the construction above. Indeed, it suffices to add all points ${\gamma(0)}$ at distance ${<nL}$ from ${0}$ in the ${n}$th step of the construction of ${\mu}$ and then assign to them some tiny but positive probabilities so that the measure ${\lambda^{(n)}}$ in the right-hand side of (4) is still positive. In this way, we are sure that at the end of the construction of ${\mu}$, all ${\gamma}$‘s in ${\Gamma}$ were assigned some non-trivial mass. This completes the sketch of the proof of Proposition 14 in the case ${n=2}$ (i.e., cocompact lattices in ${SL(2,\mathbb{R})=G_2}$). After constructing ${\mu}$, let us show that the Poisson boundary ${(B_n, m_{B_n})}$ of ${G_n=SL(n,\mathbb{R})}$ equipped with a spherical measure is a boundary of ${(\Gamma,\mu)}$.

2.2. 
${(B_n,m_{B_n})}$ is the Poisson boundary of ${(\Gamma,\mu)}$ A reasonably detailed proof that ${(B_n,m_{B_n})}$ is the Poisson boundary of ${(\Gamma,\mu)}$ is somewhat lengthy because the verification of the maximality property (i.e., any boundary of ${(\Gamma,\mu)}$ is an equivariant image of ${(B_n, m_{B_n})}$) needs a certain amount of computation (in fact, we might come to this point later in a future post, but, for now, let us skip this point). In particular, we will content ourselves with checking only that ${(B_n, m_{B_n})}$ is a boundary of ${(\Gamma, \mu)}$ in the cases ${n=2}$ and ${3}$. As it turns out, the fact that ${(B_2, m_{B_2})}$ is a boundary of ${(\Gamma,\mu)}$ (in the case ${n=2}$) is not hard. By definition, we have to show that a stationary sequence ${y_n}$ of independent random variables with distribution ${\mu}$ has the property that ${y_1\dots y_n m_{B_2}}$ converges to a Dirac mass with probability ${1}$. On the other hand, by Corollary 11 of the previous post and the compactness of ${B_2\simeq\mathbb{P}^1}$, we know that ${y_1\dots y_n m_{B_2}}$ converges to some measure with probability ${1}$, and, by Lemma 12, this limit measure is a Dirac mass if the elements ${y_1\dots y_n}$ are unbounded. Now, it is clear that these elements are unbounded because ${y_n}$ has distribution ${\mu}$ and, by construction (cf. item (a) of Proposition 14), ${\mu}$ is fully supported on a lattice of ${SL(2,\mathbb{R})}$ (and, thus, its support is not confined in a compact subgroup). Next, let us show that ${(B_3, m_{B_3})}$ is a boundary of ${(\Gamma, \mu)}$ (in the case ${n=3}$). In this direction, we will need the following lemma playing the role of an analog of Lemma 12 in the context of ${SL(3,\mathbb{R})}$: Lemma 16 Let ${\mu}$ be a probability measure on ${G_3=SL(3,\mathbb{R})}$ such that: • (i) ${\mu}$ has a rich support: the support of ${\mu}$ is not confined to a compact or reducible subgroup of ${G_3}$; • (ii) the norm function is ${\mu}$${\log}$-integrable: ${\int \log(\|g\|\cdot\|g^{-1}\|)d\mu(g)<\infty}$; • (iii) ${m_{B_3}}$ is ${\mu}$-stationary: ${\mu\ast m_{B_3}=m_{B_3}}$ Then, for any stationary sequence ${\{y_n\}}$ of independent random variables with distribution ${\mu}$, the sequence of measures ${(y_1\dots y_n)_* m_{B_3}}$ converges to a Dirac mass on ${B_3}$ with probability ${1}$. Before proving this lemma, let us see how it allows to prove that ${(B_3, m_{B_3})}$ is a boundary of ${(\Gamma,\mu)}$ for the measure ${\mu}$ constructed in Proposition 14, thus completing our sketch of proof of Furstenberg’s theorem 13. By definition of boundary, it suffices to check that the measure ${\mu}$ provided by Proposition 14 fits the assumptions of Lemma 16. Now, by item (a) of Proposition 14, ${\mu}$ is fully supported on the lattice ${\Gamma}$ of ${G_3}$. Since a lattice of ${SL(n,\mathbb{R})}$ is Zariski dense (by Borel’s density theorem), ${\textrm{supp}(\mu)=\Gamma}$ is not a compact subgroup nor reducible subgroup of ${G_3}$, that is, ${\mu}$ satisfies item (i) of the lemma above. Next, we notice that a computation shows that the distance function to origin ${\Lambda(g)=d(gK_3, K_3)}$ in the symmetric space ${G_3/K_3}$ satisfies ${\log\|g\|=O(\Lambda(g))}$ and ${\log\|g^{-1}\|=O(\Lambda(g))}$. In particular, the integrability condition in item (c) of Proposition 14 implies that ${\mu}$ satisfies item (ii) of the lemma above. Finally, the item (b) of Proposition 14 is precisely the stationarity condition in item (iii) of the lemma above. 
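Before the proof of Lemma 16 is sketched below, here is a rough numerical illustration (mine) of its conclusion and of the mechanism behind it, with an arbitrary finitely supported measure on ${SL(3,\mathbb{R})}$ playing the role of ${\mu}$ (so the hypotheses of the lemma are not literally checked here): the top Lyapunov exponent ${\alpha=\lambda_1}$ of the random products is strictly larger than the second one, so ${\beta=\lambda_1+\lambda_2<2\alpha}$ and the columns of the products align, forcing the push-forwards of ${m_{B_3}}$ to collapse to a point.

```python
# Rough numerical illustration (mine) of Lemma 16: Lyapunov exponents of random
# products of matrices drawn from an arbitrary finitely supported measure on
# SL(3,R) (a stand-in for the measure mu of Proposition 14), computed with the
# standard QR re-orthogonalization scheme, plus a direct check that the first
# two columns of a moderate product are already nearly parallel.
import numpy as np

rng = np.random.default_rng(7)

def random_sl3(rng):
    m = rng.integers(-2, 3, size=(3, 3)).astype(float)
    while abs(np.linalg.det(m)) < 0.5:
        m = rng.integers(-2, 3, size=(3, 3)).astype(float)
    return m / np.cbrt(np.linalg.det(m))          # rescale to determinant 1

mats = [random_sl3(rng) for _ in range(4)]        # mu = uniform on these four

Q, sums, k = np.eye(3), np.zeros(3), 20000
for _ in range(k):
    Q, R = np.linalg.qr(mats[rng.integers(len(mats))] @ Q)
    sums += np.log(np.abs(np.diag(R)))
lam = sums / k                                    # lambda_1 >= lambda_2 >= lambda_3
print("Lyapunov exponents:", lam, " 2*alpha - beta = lambda_1 - lambda_2 =",
      lam[0] - lam[1])

P = np.eye(3)
for _ in range(30):
    P = mats[rng.integers(len(mats))] @ P
u, v = P[:, 0], P[:, 1]
print("|u x v| / (|u| |v|) after 30 factors:",
      np.linalg.norm(np.cross(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v)))
```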
So, let us complete the discussion in this section by sketching the proof of Lemma 16.

Proof: By the compactness of ${B_3}$ (and Corollary 11 of the previous post), we know that ${(y_1\dots y_n)_* m_{B_3}}$ converges to some measure with probability ${1}$. This allows us to consider the sequence $\displaystyle z_k=\lim\limits_{n\rightarrow\infty} (y_k\dots y_{k+n})_* m_{B_3}$ of measure-valued random variables. Our task consists in showing that the ${z_k}$ are Dirac masses. Keeping this goal in mind, note that ${y_k z_{k+1} = z_k}$ and ${y_k}$ is independent of ${z_{k+j}}$ for ${j\geq 1}$. Also, let us observe that the sequence ${\{y_k, z_k: k\in\mathbb{N}\}}$ can be extended to non-positive indices ${k\leq 0}$ by relabeling ${y_1, z_1}$ as ${y_{-n}, z_{-n}}$ and shifting the remaining variables. Here, by stationarity of ${\{y_k\}_{k\in\mathbb{N}}}$ (by definition) and ${\{z_k\}_{k\in\mathbb{N}}}$ (by item (iii)), the variables ${y_k, z_k}$ with positive indices ${k\in\mathbb{N}}$ are probabilistically isomorphic to the original sequence (before shifting). In other words, we can assume that our sequence ${y_k, z_k}$ is defined for all integer indices ${k\in\mathbb{Z}}$. (In terms of Dynamical Systems, this is analogous to taking the natural extension ${\hat{f}:(X,\mu)^{\mathbb{Z}}\rightarrow (X,\mu)^{\mathbb{Z}}}$, ${\hat{f}((x_i)_{i\in\mathbb{Z}})=(x_{i+1})_{i\in\mathbb{Z}}}$, of the unilateral shift ${f:(X,\mu)^{\mathbb{N}}\rightarrow (X,\mu)^{\mathbb{N}}}$, ${f((x_i)_{i\in\mathbb{N}})=(x_{i+1})_{i\in\mathbb{N}}}$.) In any case, we can write ${z_{-k}=y_{-k}\dots y_{-1} z_0}$ where ${y_{-i}}$ is a stationary sequence of independent random variables and ${z_0}$ is independent of ${y_{-i}}$‘s.

At this point, let us recall the discussion of Subsubsection 1.3.2 on the Poisson boundary of ${G_3=SL(3,\mathbb{R})}$. In this subsubsection we saw that there are only three possibilities for any limit of the measures ${gm_{B_3}}$, ${g\in G_3}$ such as ${z_k}$: it is either a Dirac mass, a circle measure or an absolutely continuous (w.r.t. ${m_{B_3}}$) measure, the latter case occurring only when ${g}$ stays bounded in ${G_3}$. By ergodicity (of the shift dynamics underlying the sequence ${y_k}$), we have that only one of these possibilities for ${z_k}$ can occur with positive probability. Now, ${z_k}$ can not be absolutely continuous w.r.t. ${m_{B_3}}$ because this would mean that ${y_1\dots y_n}$ is bounded (with positive probability) and, a fortiori, ${y_i}$ is confined to a compact subgroup of ${G_3}$, a contradiction with our assumption in item (i) about the distribution ${\mu}$ of ${y_i}$‘s. Therefore, our task is reduced to showing that the ${z_{-k}}$‘s are not circle measures with probability ${1}$.

For sake of concreteness, let us fix ${z_0}$ by assuming that ${z_0}$ is a circle measure supported on our “preferred” circle ${\{(\overline{u}, \overline{e_3}): \overline{u}\perp\overline{e_3}\}\subset B_3}$. In order to show that ${z_{-k}=y_{-k}\dots y_{-1}z_0}$ are Dirac masses, it suffices to check that the angles between the column vectors of the matrices ${y_{-k}\dots y_{-1}}$ converge to ${0}$ as ${k\rightarrow\infty}$. In other terms, denoting by ${u_k}$, ${v_k}$, ${w_k}$ the column vectors of ${y_{-k}\dots y_{-1}}$, and by noticing that ${u_k}$, ${v_k}$, ${w_k}$ play symmetric roles, we want to check that $\displaystyle \frac{|u_k\times v_k|}{|u_k|\cdot |v_k|}\rightarrow 0$ as ${k\rightarrow\infty}$. 
The idea to show this is based on the simplicity of the Lyapunov spectra of random products of matrices in ${G_3}$ with a distribution law ${\mu}$ that is not confined to compact or reducible subgroups. More concretely, we will show that the column vectors ${u_k}$ and ${v_k}$ have a definite exponential growth $\displaystyle \log|u_k|\sim k\alpha \quad \textrm{and} \quad \log|v_k|\sim k\alpha$ with ${\alpha>0}$ (the top Lyapunov exponent) and, similarly, the column vector ${u_k\times v_k}$ of the matrix ${{}^t(y_{-k}\dots y_{-1})^{-1}}$ has a definite exponential growth $\displaystyle \log|u_k\times v_k|\sim k\beta$ where ${\beta}$ (the sum of the two largest Lyapunov exponents) satisfies ${\beta<2\alpha}$ (i.e., the top Lyapunov exponent is simple, i.e., it does not coincide with the second largest Lyapunov exponent). Of course, if we show these properties, then ${|u_k\times v_k|/(|u_k|\cdot|v_k|)}$ converges exponentially to ${0}$ as ${k\rightarrow\infty}$ (since ${\beta<2\alpha}$).

Let us briefly sketch the proof of these exponential growth properties. We write ${\log|u_k|}$ as a “Birkhoff sum” $\displaystyle \log|u_k|=\sum\limits_{i=0}^{k-1}\log\frac{|y_{-(i+1)}y_{-i}\dots y_{-1} e_1|}{|y_{-i}\dots y_{-1} e_1|}=\sum\limits_{i=0}^{k-1}\rho(y_{-(i+1)}, \overline{t_i})$ where ${\rho(g,\overline{t})=\log(|gt|/|t|)}$ for ${g\in G_3}$ and ${\overline{t}\in\mathbb{P}^2}$, and ${\overline{t_i}}$ denotes the direction of ${y_{-i}\dots y_{-1}e_1}$ (with ${\overline{t_0}=\overline{e_1}}$). As it turns out, the sequence ${(y_{-(i+1)}, \overline{t_i})}$ is not stationary, but it is almost stationary, so that Birkhoff’s ergodic theorem says that the time averages converge to the spatial average (with probability ${1}$): $\displaystyle \frac{1}{k}\log|u_k|=\frac{1}{k}\sum\limits_{i=0}^{k-1}\rho(y_{-(i+1)},\overline{t_i})\rightarrow \alpha = \int \rho(g,\overline{t}) d\mu(g) dm'(\overline{t})$ where ${m'}$ is the rotation-invariant distribution of ${\overline{t_i}}$‘s. Logically, this application of the ergodic theorem is valid only if we check that the observable ${\rho(g,\overline{t})}$ is integrable. However, this is not hard to verify: by definition, ${|\rho(g,\overline{t})|\leq \max\{\log\|g\|, \log\|g^{-1}\|\}}$, so that the desired integrability is a consequence of the integrability requirement in item (ii). A similar argument shows that $\displaystyle \frac{1}{k}\log|v_k|\rightarrow \alpha \quad \textrm{and} \quad \frac{1}{k}\log|u_k\times v_k|\rightarrow \beta=\int \widetilde{\rho}(g,\overline{t}) d\mu(g)dm'(\overline{t})$ where ${\widetilde{\rho}(g,\overline{t})=\log(|{}^tg^{-1}(t)|/|t|)}$.

So, it remains only to check the simplicity condition ${\beta<2\alpha}$. For this sake, we combine the two integrals defining ${2\alpha}$ and ${\beta}$, and we transfer them from ${\overline{t}\in \mathbb{P}^2}$ to ${B_3}$. In this way, we obtain: $\displaystyle 2\alpha-\beta=\int \log\frac{|gu|\cdot|gv|}{|gu\times gv|} dm_{B_3}(\overline{u}, \overline{v})d\mu(g)$ On the other hand, by definition, ${(\overline{u}, \overline{v})}$ runs over orthogonal pairs of directions in ${\mathbb{P}^2}$. Thus, since ${\mu}$ is not confined to the orthogonal subgroup ${K_3}$ of ${G_3}$ (cf. item (i)), we have that the integral in the right-hand side of the equation above is strictly positive, i.e., ${\beta<2\alpha}$. In summary, we showed that, for each fixed ${z_0}$, the sequence ${z_{-k}}$ converges to a Dirac mass with probability ${1}$. Since ${y_{-i}}$ are independent of ${z_0}$, this actually proves that ${z_{-k}}$ converges to Dirac masses independently of ${z_0}$. 
Finally, since the sequence ${\{z_k\}}$ is stationary, we conclude that the ${z_{-k}}$ were Dirac masses to begin with. This completes the proof of Lemma 16. $\Box$

Closing this post, we will use the fact that ${(B_n, m_{B_n})}$ is the Poisson boundary of ${(\Gamma, \mu)}$ (where ${\mu}$ is the probability measure constructed in Proposition 14) to show Furstenberg’s theorem 1 that the lattices of ${SL(2,\mathbb{R})}$ are “distinct” from the lattices of ${SL(3,\mathbb{R})}$.

3. End of the proof of Furstenberg’s theorem 1

In this section, we will show the following statement (providing a slightly stronger version of Theorem 1) in the case ${n=3}$:

Theorem 17 A cocompact lattice of ${G_n=SL(n,\mathbb{R})}$, ${n\geq 3}$, can not be isomorphic to a subgroup of ${SL(2,\mathbb{R})}$.

The proof of this theorem proceeds by contradiction. Suppose that ${\Gamma}$ is isomorphic to a cocompact lattice of ${G_3=SL(3,\mathbb{R})}$ and also to a subgroup of ${G_2=SL(2,\mathbb{R})}$. By Proposition 14, we can equip ${\Gamma}$ with a fully supported probability measure ${\mu}$ such that ${(\Gamma, \mu)}$ has Poisson boundary ${P(\Gamma,\mu)=(B_3, m_{B_3})}$. Let us think now of ${\Gamma}$ as a subgroup of ${SL(2,\mathbb{R})}$. We claim that ${\Gamma}$ can not be confined to a compact subgroup of ${SL(2,\mathbb{R})}$: indeed, if this were the case, ${\Gamma}$ would be Abelian (because the compact subgroups of ${SL(2,\mathbb{R})}$ are conjugate to subgroups of the rotation group ${SO(2,\mathbb{R})}$); however, we saw that Abelian groups have trivial Poisson boundary, while ${P(\Gamma,\mu)=(B_3,m_{B_3})}$ is non-trivial.

Next, let us observe that ${B_2}$ admits some ${\mu}$-stationary probability measure ${\theta}$, i.e., ${\mu\ast\theta=\theta}$. Indeed, this is a consequence of the Krylov-Bogolyubov argument: we fix ${\theta_1}$ an arbitrary probability measure on the compact space ${B_2}$, and we extract a ${\mu}$-stationary probability ${\theta}$ as a limit of some convergent subsequence of the sequence $\displaystyle \lambda_n:=\frac{1}{n}\sum\limits_{i=0}^{n-1}\mu^i \ast\theta_1 = \frac{1}{n}\sum\limits_{i=0}^{n-1}\underbrace{\mu\ast\dots\ast\mu}_{i}\ast\theta_1$ We affirm that ${(B_2,\theta)}$ is a boundary of ${(\Gamma,\mu)}$. In fact, given a stationary sequence ${\{y_n\}}$ of independent random variables with distribution ${\mu}$, we know that the elements ${y_1\dots y_n\in G_2}$ are unbounded as ${\Gamma}$ is not confined to a compact subgroup of ${SL(2,\mathbb{R})}$ (as we just saw) and ${\mu}$ is fully supported on ${\Gamma}$. By Lemma 12, it follows that ${y_1\dots y_n \theta}$ converges to a Dirac mass, and so, by Proposition 7 of the previous post, we deduce that ${(B_2,\theta)}$ is a boundary of ${(\Gamma,\mu)}$.

By definition of Poisson boundary, the facts that ${(\Gamma,\mu)}$ has Poisson boundary ${P(\Gamma, \mu)=(B_3, m_{B_3})}$ and ${(B_2,\theta)}$ is a boundary of ${(\Gamma, \mu)}$ imply that ${(B_2,\theta)}$ is an equivariant image of ${(B_3, m_{B_3})}$ under some equivariant map ${\rho}$. We will prove that this is not possible (thus completing today’s post). The basic idea is that the sole way of going to infinity in ${G_2}$ is to approach ${B_2}$, i.e., ${\gamma_n\theta}$ converges to a Dirac mass as ${\gamma_n\rightarrow\infty}$ in ${G_2}$, but, on the other hand, we can go to infinity in ${G_3}$ without approaching ${B_3}$, i.e., we can let ${g\rightarrow\infty}$ in ${G_3}$ in such a way that ${gm_{B_3}}$ converges to a circle measure. Then, at the end of the day, these distinct (and incompatible) boundary behaviors (Dirac mass versus circle measure) will lead to the desired contradiction. 
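As a brief aside before carrying out this plan, here is a toy sketch (mine, with two arbitrary unimodular matrices in place of generators of ${\Gamma}$) of the Krylov-Bogolyubov averaging invoked above: sampling from the Cesàro average ${\lambda_n}$ and comparing it with ${\mu\ast\lambda_n}$ on a test function exhibits the near-stationarity that, after passing to a weak-* limit, produces the ${\mu}$-stationary measure ${\theta}$.

```python
# Toy sketch (mine) of the Krylov-Bogolyubov argument: the Cesaro averages
# lambda_n = (1/n) sum_{i<n} mu^i * theta_1 of an arbitrary theta_1 on P^1 are
# nearly mu-stationary, so their weak-* limit points are mu-stationary.
import random
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])             # two arbitrary matrices in SL(2,R)
B = np.array([[1.0, 0.0], [1.5, 1.0]])
mats = [A, np.linalg.inv(A), B, np.linalg.inv(B)]  # mu = uniform measure on these

def act(g, theta):
    """Action of g on P^1, identified with the angle interval [0, pi)."""
    v = g @ np.array([np.cos(theta), np.sin(theta)])
    return np.arctan2(v[1], v[0]) % np.pi

rng = random.Random(3)
theta1 = 0.3                                       # theta_1 = Dirac mass at a direction

def sample_cesaro(n, size):
    """Draw samples from lambda_n: pick i uniformly in {0,...,n-1}, apply i random maps."""
    out = []
    for _ in range(size):
        x = theta1
        for _ in range(rng.randrange(n)):
            x = act(rng.choice(mats), x)
        out.append(x)
    return np.array(out)

f = lambda t: np.cos(2 * t)                        # a test function on P^1
samples = sample_cesaro(400, 20000)
pushed = np.array([act(rng.choice(mats), t) for t in samples])   # samples of mu * lambda_n
print(f(samples).mean(), f(pushed).mean())         # close: lambda_n is nearly stationary
```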
In order to get Dirac mass behavior in the context of ${G_2}$, our plan is to apply Lemma 12. But, before doing so, we need to know that ${\theta}$ is not atomic, a property that we now verify. Indeed, suppose to the contrary that ${\theta(\{\zeta\})>0}$ for some point ${\zeta\in B_2}$, and denote ${\Delta=\rho^{-1}(\zeta)}$, a set of ${m_{B_3}}$-measure ${m_{B_3}(\Delta)=\theta(\{\zeta\})>0}$. Consider the translates ${\gamma\Delta}$ of ${\Delta}$ under ${\gamma\in\Gamma\subset G_3}$. Note that if ${\gamma\Delta}$ intersects ${\Delta}$ in a subset of positive measure, then their images ${\rho(\gamma\Delta)}$ and ${\rho(\Delta)=\{\zeta\}}$ under ${\rho}$ in ${B_2}$ intersect, and, thus, by equivariance of ${\rho}$ and the fact that ${\rho(\Delta)}$ is a singleton, ${\rho(\gamma\Delta)=\rho(\Delta)}$, and, a fortiori, ${\gamma\Delta=\Delta}$. In particular, the property that ${\gamma\Delta=\Delta}$ whenever ${\gamma\Delta}$ intersects ${\Delta}$ with positive measure implies that ${\Gamma}$ does not act ergodically on ${B_3\times B_3}$ (because the ${\gamma\Delta}$‘s, ${\gamma\in\Gamma}$, do not get mixed together). However, this is a contradiction with Moore’s ergodicity theorem (implying that a lattice of ${G_3}$ acts ergodically on ${B_3\times B_3}$). This shows that ${\theta}$ is non-atomic.

In particular, given ${Q_1}$ and ${Q_2}$ two disjoint closed subsets of ${B_2}$, if ${\gamma_n\in\Gamma\subset G_2}$ is any sequence going to ${\infty}$, then $\displaystyle \lim\limits_{n\rightarrow\infty} \min\{\gamma_n\theta(Q_1), \gamma_n\theta(Q_2)\}=0$ Indeed, this is so because Lemma 11 (and the proof of Lemma 12) says that, for each ${\varepsilon>0}$, the measure ${\gamma_n\theta}$ concentrates at least ${1-\varepsilon}$ of its mass in an interval of length ${<\varepsilon}$ for all ${n}$ sufficiently large. In particular, for any ${\varepsilon>0}$ smaller than the distance separating the disjoint closed sets ${Q_1}$ and ${Q_2}$ (i.e., ${0<\varepsilon<\textrm{dist}(Q_1,Q_2)}$), we obtain that either ${\gamma_n\theta(Q_1)\leq\varepsilon}$ or ${\gamma_n\theta(Q_2)\leq\varepsilon}$ for ${n}$ sufficiently large, as desired.

We can rephrase the “concentration property” of the last paragraph in terms of ${\mu}$-harmonic functions as follows. Let ${0\leq\psi_i\leq1}$, ${i=1,2}$, be measurable functions supported on ${Q_1}$, ${Q_2}$, and consider the associated ${\mu}$-harmonic functions $\displaystyle h_i(\gamma)=\int_{B_2}\psi_i(\zeta)d\gamma\theta(\zeta). \ \ \ \ \ (5)$ Then, the “concentration property” is $\displaystyle \lim\limits_{\gamma\rightarrow\infty}\min\{h_1(\gamma), h_2(\gamma)\}=0 \ \ \ \ \ (6)$ Now, let us “transfer” this picture to the ${G_3=SL(3,\mathbb{R})}$ context, i.e., let us think of the ${\mu}$-harmonic functions ${h_i}$, ${i=1,2}$, as defined on ${\Gamma\subset G_3}$. By the Poisson formula (and the fact that ${P(\Gamma,\mu)=(B_3,m_{B_3})}$), we can represent the ${\mu}$-harmonic functions ${h_i}$, ${i=1,2}$, as $\displaystyle h_i(\gamma)=\int_{B_3} \Psi_i(\xi)d\gamma m_{B_3}(\xi) \ \ \ \ \ (7)$ where ${\Psi_i}$, ${i=1, 2}$, are bounded measurable functions on ${B_3}$. By replacing the variable ${\gamma\in\Gamma}$ by ${g\in G_3}$ in this formula, we see that ${h_i}$ can be extended to harmonic functions on ${G_3}$ (w.r.t. any spherical measure on ${G_3}$). In what follows, we will try to understand the boundary behavior of the ${h_i}$‘s, ultimately to contradict the concentration property (6). In this direction, let us first “transfer” the concentration property (6) from ${G_2}$ to ${G_3}$ as follows. 
Observe that ${\Gamma}$ is a cocompact lattice of ${G_3}$, so that we can select a bounded fundamental domain ${A}$, i.e., a bounded set such that any element of ${G_3}$ has the form ${\gamma g}$ with ${\gamma\in\Gamma}$ and ${g\in A}$. Now, given ${\gamma\in\Gamma}$, let us compare the values of ${h_i}$ at ${\gamma}$ and at one of its “neighbors” ${\gamma g}$, ${g\in A}$. Here, we observe that

$\displaystyle \left(\frac{d(\gamma g)_* m_{B_3}}{dm_{B_3}}(\xi)\right)\Big/\left(\frac{d(\gamma)_* m_{B_3}}{dm_{B_3}}(\xi)\right) = \frac{d(\gamma g)_* m_{B_3}}{d(\gamma)_*m_{B_3}}(\xi) = \frac{d(g)_* m_{B_3}}{dm_{B_3}}(\gamma^{-1}\xi)$

In particular, since the right-hand side is bounded for ${g}$ in the bounded set ${A}$, we conclude that the ratio between

$\displaystyle \frac{d(\gamma g)_* m_{B_3}}{dm_{B_3}}(\xi)\quad \textrm{and} \quad \frac{d(\gamma)_* m_{B_3}}{dm_{B_3}}(\xi)$

is uniformly bounded away from ${0}$ and ${\infty}$ for ${\gamma\in\Gamma}$ and ${g\in A}$. Therefore, since the values of a positive ${\mu}$-harmonic function ${h}$ at ${\gamma}$ and ${\gamma g}$ can be written as

$\displaystyle h(\gamma)=\int\hat{h}(\xi)\frac{d(\gamma)_* m_{B_3}}{dm_{B_3}}(\xi)dm_{B_3}(\xi)$

and

$\displaystyle h(\gamma g) = \int\hat{h}(\xi)\frac{d(\gamma g)_* m_{B_3}}{dm_{B_3}}(\xi)dm_{B_3}(\xi),$

we deduce that the values ${h(\gamma)}$ and ${h(\gamma g)}$ are uniformly comparable for ${\gamma\in\Gamma}$ and ${g\in A}$; that is, there exists a constant ${c>0}$ such that

$\displaystyle h(\gamma g)<c\, h(\gamma)$

for all ${\gamma\in\Gamma}$ and ${g\in A}$. Hence, given ${\widetilde{g}\in G_3}$, there exists ${\gamma\in\Gamma}$ such that

$\displaystyle h(\widetilde{g})<c\, h(\gamma)$

because ${\widetilde{g}=\gamma g}$ for some ${\gamma\in\Gamma}$, ${g\in A}$. Furthermore, when letting ${\widetilde{g}\rightarrow\infty}$, we have that ${\gamma\rightarrow\infty}$ (as ${g\in A}$ and ${A}$ is bounded). Thus, by combining this with the “concentration property” (6), we get the following concentration property in the context of ${SL(3,\mathbb{R})}$:

$\displaystyle \lim\limits_{\substack{g\rightarrow\infty, \\ g\in G_3}} \min\{h_1(g), h_2(g)\}=0 \ \ \ \ \ (8)$

Let us now try to contradict the “transferred concentration property” (8) by analyzing the values ${h_i(g)}$ when ${g\rightarrow\infty}$ without approaching ${B_3}$ (something that is not possible in ${G_2}$!). More concretely, let us consider the sequences

$\displaystyle g_n^{(A)}=\left(\begin{array}{ccc}n & 0 & 0 \\ 0 & n & 0 \\ 0 & 0 & 1/n^2\end{array}\right) \quad \textrm{and} \quad g_n^{(B)} = \left(\begin{array}{ccc}n^2 & 0 & 0 \\ 0 & 1/n & 0 \\ 0 & 0 & 1/n\end{array}\right)$

We want to investigate the “boundary” values of the harmonic functions ${h_1}$ and ${h_2}$ along the sequences ${g_n^{(A)}}$ and ${g_n^{(B)}}$ (and some adequate translates). For this, we need an analog of Fatou’s theorem saying that harmonic functions have boundary values along almost all radial directions. In our context, the analog of Fatou’s theorem goes as follows. The limits of the sequences ${g_n^{(A)}m_{B_3}}$ and ${g_n^{(B)}m_{B_3}}$ are circle measures ${\omega^{(A)}}$ and ${\omega^{(B)}}$ supported on the great circles

$\displaystyle S_A=\{(\overline{u}, \overline{e_3}): \overline{u}\perp\overline{e_3}\}$

and

$\displaystyle S_B=\{(\overline{e_1}, \overline{v}): \overline{v}\perp\overline{e_1}\}$

Also, it is possible to check that all circle measures have the form ${k\omega^{(A)}}$ or ${k\omega^{(B)}}$ for ${k}$ in the orthogonal subgroup ${K_3}$ of ${G_3}$.
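As a quick sanity check, both sequences do lie in ${SL(3,\mathbb{R})}$ and leave every compact subset of ${G_3}$:

```latex
% Determinants of the two diagonal sequences (so both lie in SL(3,R)):
\[
\det g_n^{(A)} = n \cdot n \cdot \frac{1}{n^{2}} = 1,
\qquad
\det g_n^{(B)} = n^{2} \cdot \frac{1}{n} \cdot \frac{1}{n} = 1,
\]
% while the largest diagonal entry grows without bound as n -> infinity,
% so g_n^{(A)} and g_n^{(B)} go to infinity in G_3.
```

Roughly speaking, it is the repeated diagonal entry in each of these matrices that prevents ${g_n^{(A)}m_{B_3}}$ and ${g_n^{(B)}m_{B_3}}$ from collapsing to a single point mass, leaving instead a one-parameter (circle) family of limit flags.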
Now, we affirm that, given ${\phi}$ a bounded measurable function on ${B_3}$, the integrals

$\displaystyle \Phi^{(A)}(k)=\int\phi(\xi)dk\omega^{(A)}(\xi) \quad \textrm{and} \quad \Phi^{(B)}(k)=\int\phi(\xi) dk\omega^{(B)}(\xi)$

are defined for almost every ${k\in K_3}$. Indeed, we recall that ${K_3}$ acts transitively on ${B_3=G_3/P_3}$, so that ${B_3=K_3/W}$ where ${W}$ is a finite group, i.e., ${K_3}$ is a finite cover of ${B_3}$. The great circles ${S_A}$ and ${S_B}$ correspond to two ${1}$-parameter subgroups ${T_A}$ and ${T_B}$ of ${K_3}$ (as does any great circle of ${B_3}$ passing through the identity coset). Also, the great circles of ${B_3}$ are just the cosets ${kT_A}$ and ${kT_B}$, ${k\in K_3}$, modulo ${W}$. In particular, given a bounded measurable function ${\phi}$ on ${B_3}$, we can lift it to a bounded measurable function ${\widetilde{\phi}}$ on ${K_3}$ and then define

$\displaystyle \Phi^{(A)}(k)=\int_{T_A}\widetilde{\phi}(kt)dt \quad \textrm{and} \quad \Phi^{(B)}(k)=\int_{T_B}\widetilde{\phi}(kt) dt$

For later use, we observe that

$\displaystyle \int\Phi^{(A)}(k)dk=\int\Phi^{(B)}(k)dk=\int\phi(\xi)dm_{B_3}(\xi) \ \ \ \ \ (9)$

(this is just Fubini’s theorem combined with the invariance of the Haar measure of ${K_3}$, since ${m_{B_3}}$ is the projection of the Haar measure of ${K_3}$ under ${K_3\rightarrow K_3/W=B_3}$). Anyhow, in this setting, Fatou’s theorem implies that

$\displaystyle \int\phi(k\xi)dg_n^{(A)}m_{B_3}(\xi)\rightarrow\Phi^{(A)}(k) \textrm{ and } \int\phi(k\xi)dg_n^{(B)}m_{B_3}(\xi)\rightarrow\Phi^{(B)}(k)$

(at least) in measure as ${n\rightarrow\infty}$. Therefore, if ${h(g)=\int\phi(\xi)dgm_{B_3}(\xi)}$ is the harmonic function associated to ${\phi}$, then

$\displaystyle h(kg_n^{(A)})\rightarrow\Phi^{(A)}(k) \quad \textrm{and} \quad h(kg_n^{(B)})\rightarrow\Phi^{(B)}(k) \ \ \ \ \ (10)$

in measure as ${n\rightarrow\infty}$.

Now let us come back to the harmonic functions ${h_1}$ and ${h_2}$ constructed above satisfying the concentration property (8). Denoting by ${\Phi_i^{(A)}}$ and ${\Phi_i^{(B)}}$ the “boundary values” of ${h_i}$ (along ${kg_n^{(A)}}$ and ${kg_n^{(B)}}$), we see that (10) and the concentration property (8) imply that

$\displaystyle \min\{\Phi_1^{(A)}(k), \Phi_2^{(A)}(k)\}=0 \quad \textrm{and} \quad \min\{\Phi_1^{(B)}(k), \Phi_2^{(B)}(k)\}=0 \ \ \ \ \ (11)$

for almost every ${k\in K_3}$.

We will show that this is not possible by choosing the disjoint closed sets ${Q_1}$ and ${Q_2}$ of ${B_2}$ leading to ${h_1}$ and ${h_2}$ in a careful way and by using the fact that the ${\mu}$-stationary measure ${\theta}$ on ${B_2}$ is non-atomic. More concretely, since ${\theta}$ is not atomic, we can choose a compact subset ${Q_1}$ of ${B_2}$ with ${0<\theta(Q_1)<1}$ that we fix once and for all. Set ${\psi_1=\chi_{Q_1}}$, construct from ${\psi_1}$ a harmonic function ${h_1}$ as above (cf. (5)), and let ${\Psi_1}$ be the associated function in (7). Denote by ${\Phi_1^{(A)}}$ and ${\Phi_1^{(B)}}$ the boundary values of ${h_1}$ along the sequences ${kg_n^{(A)}}$ and ${kg_n^{(B)}}$.

Now, we consider an increasing sequence ${Q_2(n)}$ of compact subsets exhausting ${B_2-Q_1}$. Again, set ${\psi_2(n)=\chi_{Q_2(n)}}$, construct the corresponding harmonic functions ${h_2(n)}$ (cf. (5)), and let ${\Phi_2^{(A)}(n)}$ and ${\Phi_2^{(B)}(n)}$ be the boundary values of ${h_2(n)}$ along the sequences ${kg_n^{(A)}}$ and ${kg_n^{(B)}}$. By construction, since ${Q_2(n)}$ is an increasing sequence, the functions ${\Phi_2^{(A)}(n)}$ and ${\Phi_2^{(B)}(n)}$ form two sequences of non-negative functions uniformly bounded by ${1}$ that do not decrease as ${n}$ increases.
It follows that we can extract limits ${\Phi_2^{(A)}(\infty)=\lim\limits_{n\rightarrow\infty}\Phi_2^{(A)}(n)}$ and ${\Phi_2^{(B)}(\infty)=\lim\limits_{n\rightarrow\infty}\Phi_2^{(B)}(n)}$. Moreover, from the definition of ${\Phi_2^{(A)}(n)}$ and ${\Phi_2^{(B)}(n)}$, we get

$\displaystyle 0\leq\Phi_2^{(A)}(\infty), \Phi_2^{(B)}(\infty)\leq 1 \ \ \ \ \ (12)$

Also, since ${Q_2(n)}$ exhausts ${B_2-Q_1}$, i.e., ${\lim\limits_{n\rightarrow\infty}\int_{B_2}(\psi_1(\zeta)+\psi_2(n)(\zeta))d\theta(\zeta)=1}$, we obtain from (9) that

$\displaystyle \int(\Phi_1^{(A)}(k)+\Phi_2^{(A)}(\infty)(k))dk=1 \textrm{ and } \int(\Phi_1^{(B)}(k)+\Phi_2^{(B)}(\infty)(k))dk=1 \ \ \ \ \ (13)$

Furthermore, the concentration property (11) implies that

$\displaystyle \min\{\Phi_1^{(A)}(k),\Phi_2^{(A)}(\infty)(k)\}=0 \quad \textrm{and} \quad \min\{\Phi_1^{(B)}(k),\Phi_2^{(B)}(\infty)(k)\}=0 \ \ \ \ \ (14)$

for almost every ${k\in K_3}$. Finally, since the harmonic functions ${h_2(n)}$ have nice integral representations as in (7), i.e.,

$\displaystyle h_2(n)(g)=\int_{B_3}\Psi_2(n)(\xi)dgm_{B_3}(\xi),$

we can extract a limit ${\Psi_2(\infty)}$ such that

$\displaystyle \Phi_2^{(A)}(\infty)(k)=\int\Psi_2(\infty)(\xi)dk\omega^{(A)}(\xi), \quad \Phi_2^{(B)}(\infty)(k)=\int\Psi_2(\infty)(\xi)dk\omega^{(B)}(\xi), \ \ \ \ \ (15)$

where ${\omega^{(A)}}$ and ${\omega^{(B)}}$ are the circle measures on our preferred circles ${S_A}$ and ${S_B}$.

Now, we can get a contradiction as follows. By (12), (13), and the concentration property (14), we have that

$\displaystyle \Phi_1^{(A)}(k), \Phi_1^{(B)}(k), \Phi_2^{(A)}(\infty)(k), \Phi_2^{(B)}(\infty)(k)\in\{0, 1\}$

for almost every ${k\in K_3}$. Indeed, since all four functions take values in ${[0,1]}$ (cf. (12) and the analogous bounds for ${\Phi_1^{(A)}}$, ${\Phi_1^{(B)}}$), the concentration property (14) gives ${\Phi_1^{(A)}+\Phi_2^{(A)}(\infty)=\max\{\Phi_1^{(A)},\Phi_2^{(A)}(\infty)\}\leq 1}$, and then (13) forces ${\Phi_1^{(A)}(k)+\Phi_2^{(A)}(\infty)(k)=1}$ for almost every ${k}$, so that at almost every ${k}$ one of the two values is ${1}$ and the other is ${0}$ (and similarly for the ${(B)}$-family). By (15), this means that the functions ${\Psi_1}$ and ${\Psi_2(\infty)}$ are constant (${0}$ or ${1}$) on each great circle (up to a negligible exceptional set of zero measure). In particular, after lifting the function ${\Psi_1}$ on ${B_3}$ to a function ${\widetilde{\Psi}_1}$ on ${K_3}$, we have that ${\widetilde{\Psi}_1(k_0t)=\widetilde{\Psi}_1(k_0)}$ for ${t\in T_A\cup T_B}$ and almost every ${k_0\in K_3}$ (where ${T_A}$ and ${T_B}$ are the lifts of the great circles ${S_A}$ and ${S_B}$ supporting ${\omega^{(A)}}$ and ${\omega^{(B)}}$), that is,

$\displaystyle T_A\cup T_B\subset T:=\{t\in K_3: \widetilde{\Psi}_1(k_0t)=\widetilde{\Psi}_1(k_0) \textrm{ for almost every } k_0\in K_3\}.$

Note that ${T}$ is a subgroup of ${K_3}$. It follows that ${T=K_3}$: indeed, it is a general fact that the group generated by two distinct ${1}$-parameter subgroups of ${K_3}$ (such as ${T_A}$ and ${T_B}$) is the whole ${K_3}$. This shows that ${\widetilde{\Psi}_1}$, and, a fortiori, ${\Psi_1}$, is constant, equal to ${0}$ or ${1}$. However, ${\Psi_1}$ can be neither ${0}$ nor ${1}$: in fact, by (7) and (5), we know that

$\displaystyle \int\Psi_1(\xi)dm_{B_3}(\xi)=h_1(e)=\int\psi_1(\zeta)d\theta(\zeta)=\theta(Q_1),$

and, by our choice of ${Q_1}$, we have ${0<\theta(Q_1)<1}$, a contradiction with ${\Psi_1\equiv 0}$ or ${1}$. This completes our sketch of the proof of Theorem 17 (and Theorem 1).
https://www.physicsforums.com/threads/navier-stokes-spherical-form.11925/
# Navier Stokes: Spherical form

1. Jan 2, 2004

### sam2

Hi,

I'm trying to understand how to convert the cartesian form of the N-S equation to cylindrical/spherical form. Rather than re-derive the equation for spherical/cylindrical systems, I am trying to directly convert the cartesian PDE. I'm ok with converting the d/dx and d2/dx2 terms. What I am struggling with a little is the v_x, v_y and v_z terms, which represent velocity in the x, y and z directions respectively.

Start simple with cylindrical... Any idea on how to represent v_x and v_y in terms of v_r and v_theta? I make v_r to be v_x / cos(theta). But can't see how to find v_theta.

Any help is much appreciated.

Regards,
Sam

2. Jan 5, 2004

### sam2

Any ideas at all?

Thanks,
Sam
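For reference, a sketch of the standard relation the thread is asking about, taking $\theta$ as the usual polar angle measured from the $x$-axis (so $x = r\cos\theta$, $y = r\sin\theta$):

```latex
% Project the velocity onto the cylindrical unit vectors
%   rhat     = ( cos(theta),  sin(theta), 0 )
%   thetahat = (-sin(theta),  cos(theta), 0 )
\[
v_r = v_x\cos\theta + v_y\sin\theta, \qquad
v_\theta = -\,v_x\sin\theta + v_y\cos\theta, \qquad
v_z = v_z,
\]
% and, inverting (this is what the cartesian-to-cylindrical substitution needs):
\[
v_x = v_r\cos\theta - v_\theta\sin\theta, \qquad
v_y = v_r\sin\theta + v_\theta\cos\theta.
\]
```

In particular, the guess v_r = v_x / cos(theta) only holds for a purely radial flow (v_theta = 0); in general both Cartesian components enter each cylindrical component.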
http://ptsymmetry.net/?m=20111116
Invisibility in PT-symmetric complex crystals

Stefano Longhi

Bragg scattering in sinusoidal PT-symmetric complex crystals of finite thickness is theoretically investigated by the derivation of exact analytical expressions for reflection and transmission coefficients in terms of modified Bessel functions of first kind. The analytical results indicate that unidirectional invisibility, recently predicted for such crystals by coupled-mode theory [Z. Lin et al., Phys. Rev. Lett. 106, 213901 (2011)], breaks down for crystals containing a large number of unit cells. In particular, for a given modulation depth in a shallow sinusoidal potential, three regimes are encountered as the crystal thickness is increased. At short lengths the crystal is reflectionless and invisible when probed from one side (unidirectional invisibility), whereas at intermediate lengths the crystal remains reflectionless but not invisible; for longer crystals both the unidirectional reflectionless and invisibility properties are broken.

http://arxiv.org/abs/1111.3448
Quantum Physics (quant-ph)

N-site-lattice analogues of $$V(x)=i x^3$$

Miloslav Znojil

Two discrete N-level alternatives to the popular imaginary cubic oscillator are proposed and studied. In a certain domain $${\cal D}$$ of parameters $$a$$ and $$z$$ of the model, the spectrum of energies is shown real (i.e., potentially, observable) and the unitarity of the evolution is shown mediated by the construction of a (non-unique) physical, ad hoc Hilbert space endowed with a nontrivial, Hamiltonian-dependent inner-product metric $$\Theta$$. Beyond $${\cal D}$$ the complex-energy curves are shown to form a “Fibonacci-numbered” geometric pattern and/or a “topologically complete” set of spectral loci. The dynamics-determining construction of the set of the eligible metrics is shown tractable by a combination of the computer-assisted algebra with the perturbation and extrapolation techniques. This confirms the expectation that, for local potentials, the effect of the metric cannot be short-ranged.

http://arxiv.org/abs/1111.0484
Quantum Physics (quant-ph); Mathematical Physics (math-ph)
http://philpapers.org/browse/gauge-theories
# Gauge Theories

1. Stephen L. Adler & Jeeva Anandan (1996). Nonadiabatic Geometric Phase in Quaternionic Hilbert Space. Foundations of Physics 26 (12):1579-1589.
We develop the theory of the nonadiabatic geometric phase, in both the Abelian and non-Abelian cases, in quaternionic Hilbert space.

2. Alexander Afriat (2013). Weyl's Gauge Argument. Foundations of Physics 43 (5):699-705.
The standard $\mathbb{U}(1)$ “gauge principle” or “gauge argument” produces an exact potential A=dλ and a vanishing field F=d^2λ=0. Weyl (in Z. Phys. 56:330–352, 1929; Rice Inst. Pam. 16:280–295, 1929) has his own gauge argument, which is sketchy, archaic and hard to follow; but at least it produces an inexact potential A and a nonvanishing field F=dA≠0. I attempt a reconstruction.

3. B. E. Allman, A. Cimmino, S. L. Griffin & A. G. Klein (1999). Quantum Phase Shift Caused by Spatial Confinement. Foundations of Physics 29 (3):325-332.
This paper presents the results of optical interferometry experiments in which the phase of photons in one arm of a Mach-Zehnder interferometer is modified by applying a transverse constriction. An equivalent quantum interferometry experiment using neutron de Broglie waves is discussed in which the observed phase shift is in the spirit of the force-free phase shift of the Aharonov-Bohm effects. In the optical experiments the experimental results are in excellent agreement with predictions.

4. J. Anandan (1980). On the Hypotheses Underlying Physical Geometry. Foundations of Physics 10 (7-8):601-629.
The relationship between physics and geometry is examined in classical and quantum physics based on the view that the symmetry group of physics and the automorphism group of the geometry are the same. Examination of quantum phenomena reveals that the space-time manifold is not appropriate for quantum theory. A different conception of geometry for quantum theory on the group manifold, which may be an arbitrary Lie group, is proposed. This provides a unified description of gravity and gauge fields as well (...)

5. Using covariant derivatives and the operator definitions of quantum mechanics, gauge invariant Proca and Lehnert equations are derived and the Lorenz condition is eliminated in U(1) invariant electrodynamics. It is shown that the structure of the gauge invariant Lehnert equation is the same in an O(3) invariant theory of electrodynamics.

6. F. Antonuccio, S. Pinsky & S. Tsujimaru (2000). A Comment on the Light-Cone Vacuum in 1+1 Dimensional Super-Yang–Mills Theory. Foundations of Physics 30 (3):475-486.
The discrete light-cone quantization (DLCQ) of a supersymmetric gauge theory in 1+1 dimensions is discussed, with particular attention given to the inclusion of the gauge zero mode. Interestingly, the notorious “zero-mode” problem is now tractable because of special supersymmetric cancellations. In particular, we show that anomalous zero-mode contributions to the currents are absent, in contrast to what is observed in the nonsupersymmetric case. An analysis of the vacuum structure is provided by deriving the effective quantum mechanical Hamiltonian of the gauge (...)

7. A recent claim that in quantum chromodynamics in the Landau gauge the gluon propagator vanishes in the infrared limit, while the ghost propagator is more singular than a simple pole, is investigated analytically and numerically. This picture is shown to be supported even at the level in which the vertices in the Dyson-Schwinger equations are taken to be bare. The gauge invariant running coupling is shown to be uniquely determined by the equations and to have a large finite infrared (...)

8. Jürgen Audretsch & Vladimir D. Skarzhinsky (1998). Quantum Processes Beyond the Aharonov-Bohm Effect. Foundations of Physics 28 (5):777-788.
We consider QED processes in the presence of an infinitely thin and infinitely long straight string with a magnetic flux inside it. The bremsstrahlung from an electron passing by the magnetic string and the electron-positron pair production by a single photon are reviewed. Based on the exact electron and positron solutions of the Dirac equation in the external Aharonov-Bohm potential we present matrix elements for these processes. The dependence of the resulting cross sections on energies, directions, and polarizations of the (...)

10. Rabin Banerjee, Biswajit Chakraborty, Subir Ghosh, Pradip Mukherjee & Saurav Samanta (2009). Topics in Noncommutative Geometry Inspired Physics. Foundations of Physics 39 (12):1297-1345.
In this review article we discuss some of the applications of noncommutative geometry in physics that are of recent interest, such as noncommutative many-body systems, noncommutative extension of Special Theory of Relativity kinematics, twisted gauge theories and noncommutative gravity.

11. Julian Barbour (2010). The Definition of Mach's Principle. Foundations of Physics 40 (9-10):1263-1284.
Two definitions of Mach’s principle are proposed. Both are related to gauge theory, are universal in scope and amount to formulations of causality that take into account the relational nature of position, time, and size. One of them leads directly to general relativity and may have relevance to the problem of creating a quantum theory of gravity.

12. Robert Batterman (2003). Falling Cats, Parallel Parking, and Polarized Light. Studies in History and Philosophy of Science Part B 34 (4):527-557.
This paper addresses issues surrounding the concept of geometric phase or "anholonomy". Certain physical phenomena apparently require, for their explanation and understanding, reference to topological/geometric features of some abstract space of parameters. These issues are related to the question of how gauge structures are to be interpreted and whether or not the debate over their "reality" is really going to be fruitful.

13. R. G. Beil (1995). Moving Frame Transport and Gauge Transformations. Foundations of Physics 25 (5):717-742.
An outline is given as to how gauge transformations in a frame fiber can be interpreted as defining various types of transport of a moving frame along a path. The cases of general linear, parallel, Lorentz, and other transport groups are examined in Minkowski space-time. A specific set of frame coordinates is introduced. A number of results are obtained including a generalization of Frenet-Serret transport, an extension of Fermi-Walker transport, a relation between frame spaces and certain types of Finsler space, (...)

14. An elementary notion of gauge equivalence is introduced that does not require any Lagrangian or Hamiltonian apparatus. It is shown that in the special case of theories, such as general relativity, whose symmetries can be identified with spacetime diffeomorphisms this elementary notion has many of the same features as the usual notion. In particular, it performs well in the presence of asymptotic boundary conditions.

15. Gordon Belot (2001). The Principle of Sufficient Reason. Journal of Philosophy 98 (2):55-74.
The paper is about the physical theories which result when one identifies points in phase space related by symmetries; with applications to problems concerning gauge freedom and the structure of spacetime in classical mechanics.

16. Gordon Belot (1998). Understanding Electromagnetism. British Journal for the Philosophy of Science 49 (4):531-555.
It is often said that the Aharonov-Bohm effect shows that the vector potential enjoys more ontological significance than we previously realized. But how can a quantum-mechanical effect teach us something about the interpretation of Maxwell's theory—let alone about the ontological structure of the world—when both theories are false? I present a rational reconstruction of the interpretative repercussions of the Aharonov-Bohm effect, and suggest some morals for our conception of the interpretative enterprise.

17. This document records the discussion between participants at the workshop "Philosophy of Gauge Theory," Center for Philosophy of Science, University of Pittsburgh, 18-19 April 2009.

18. R. Blanco (1999). On a Hypothetical Explanation of the Aharonov-Bohm Effect. Foundations of Physics 29 (5):693-720.
I study in detail a proposal made by T. H. Boyer in an attempt to explain classically the Aharonov-Bohm (AB) effect. Boyer claims that in an AB experiment, the perturbation the external incident particle produces on the charge and current distributions within the solenoid will affect back the motion of the external particle. With a qualitative analysis based on energetic considerations, Boyer seemed to arrive at the conclusion that this perturbation could give account of the AB effect. In this paper (...)

19. Timothy H. Boyer (2008). Comment on Experiments Related to the Aharonov–Bohm Phase Shift. Foundations of Physics 38 (6):498-505.
Recent experiments undertaken by Caprez, Barwick, and Batelaan should clarify the connections between classical and quantum theories in connection with the Aharonov–Bohm phase shift. It is pointed out that resistive aspects for the solenoid current carriers play a role in the classical but not the quantum analysis for the phase shift. The observed absence of a classical lag effect for a macroscopic solenoid does not yet rule out the possibility of a lag explanation of the observed phase shift for a (...)

20. A fundamentally new understanding of the classical electromagnetic interaction of a point charge and a magnetic dipole moment through order v 2 /c 2 is suggested. This relativistic analysis connects together hidden momentum in magnets, Solem's strange polarization of the classical hydrogen atom, and the Aharonov–Bohm phase shift. First we review the predictions following from the traditional particle-on-a-frictionless-rigid-ring model for a magnetic moment. This model, which is not relativistic to order v 2 /c 2 , does reveal a connection between (...)

21. Timothy H. Boyer (2002). Semiclassical Explanation of the Matteucci–Pozzi and Aharonov–Bohm Phase Shifts. Foundations of Physics 32 (1):41-49.
Classical electromagnetic forces can account for the experimentally observed phase shifts seen in an electron interference pattern when a line of electric dipoles or a line of magnetic dipoles (a solenoid) is placed between the electron beams forming the interference pattern.

22. Timothy H. Boyer (2000). Classical Electromagnetism and the Aharonov–Bohm Phase Shift. Foundations of Physics 30 (6):907-932.
Although there is good experimental evidence for the Aharonov–Bohm phase shift occurring when a solenoid is placed between the beams forming a double-slit electron interference pattern, there has been very little analysis of the relevant classical electromagnetic forces. These forces between a point charge and a solenoid involve subtle relativistic effects of order v 2 /c 2 analogous to those discussed by Coleman and Van Vleck in their treatment of the Shockley–James paradox. In this article we show that a treatment (...)

23. Timothy H. Boyer (2000). Does the Aharonov–Bohm Effect Exist? Foundations of Physics 30 (6):893-905.
We draw a distinction between the Aharonov–Bohm phase shift and the Aharonov–Bohm effect. Although the Aharonov–Bohm phase shift occurring when an electron beam passes around a magnetic solenoid is well-verified experimentally, it is not clear whether this phase shift occurs because of classical forces or because of a topological effect occurring in the absence of classical forces as claimed by Aharonov and Bohm. The mathematics of the Schroedinger equation itself does not reveal the physical basis for the effect. However, the (...)

24. Katherine A. Brading & Elena Castellani (eds.) (2003). Symmetries in Physics: Philosophical Reflections. Cambridge University Press.
Highlighting main issues and controversies, this book brings together current philosophical discussions of symmetry in physics to provide an introduction to the subject for physicists and philosophers. The contributors cover all the fundamental symmetries of modern physics, such as CPT and permutation symmetry, as well as discussing symmetry-breaking and general interpretational issues. Classic texts are followed by new review articles and shorter commentaries for each topic. Suitable for courses on the foundations of physics, philosophy of physics and philosophy of science, (...)

25. Katherine Brading & Harvey R. Brown (2004). Are Gauge Symmetry Transformations Observable? British Journal for the Philosophy of Science 55 (4):645-665.
In a recent paper in the British Journal for the Philosophy of Science, Kosso discussed the observational status of continuous symmetries of physics. While we are in broad agreement with his approach, we disagree with his analysis. In the discussion of the status of gauge symmetry, a set of examples offered by ’t Hooft has influenced several philosophers, including Kosso; in all cases the interpretation of the examples is mistaken. In this paper we present our preferred approach to the empirical (...)

26. Paul Busch (1990). On the Energy-Time Uncertainty Relation. Part II: Pragmatic Time Versus Energy Indeterminacy. [REVIEW] Foundations of Physics 20 (1):33-43.
The discussion of a particular kind of interpretation of the energy-time uncertainty relation, the “pragmatic time” version of the ETUR outlined in Part I of this work [measurement duration (pragmatic time) versus uncertainty of energy disturbance or measurement inaccuracy] is reviewed. Then the Aharonov-Bohm counter-example is reformulated within the modern quantum theory of unsharp measurements and thereby confirmed in a rigorous way.

27. Adam Caprez & Herman Batelaan (2009). Feynman's Relativistic Electrodynamics Paradox and the Aharonov-Bohm Effect. Foundations of Physics 39 (3):295-306.
An analysis is done of a relativistic paradox posed in the Feynman Lectures of Physics involving two interacting charges. The physical system presented is compared with similar systems that also lead to relativistic paradoxes. The momentum conservation problem for these systems is presented. The relation between the presented analysis and the ongoing debates on momentum conservation in the Aharonov-Bohm problem is discussed.

28. M. Carmeli & S. Malin (1987). Field Theory on R×S 3 Topology. V: SU 2 Gauge Theory. [REVIEW] Foundations of Physics 17 (2):193-200.
A gauge theory on R×S 3 topology is developed. It is a generalization to the previously obtained field theory on R×S 3 topology and in which equations of motion were obtained for a scalar particle, a spin one-half particle, the electromagnetic field of magnetic moments, and a Schrödinger-type equation, as compared to ordinary field equations defined on a Minkowskian manifold. The new gauge field equations are presented and compared to the ordinary Yang-Mills field equations, and the mathematical and physical differences (...)

29. This paper is devoted to examining the relevance of Dirac's view on the use of transformation theory and invariants in modern physics --- as it emerges from his 1930 book on quantum mechanics as well as from his later work on singular theories and constraints --- to current reflections on the meaning of physical symmetries, especially gauge symmetries.

30. Gabriel López Castro & Alejandro Mariano (2003). Unstable Particles, Gauge Invariance and the Δ++ Resonance Parameters. Foundations of Physics 33 (5):719-734.
The elastic and radiative π + p scattering are studied in the framework of an effective Lagrangian model for the Δ ++ resonance and its interactions. The finite width effects of this spin-3/2 resonance are introduced in the scattering amplitudes through a complex mass scheme to respect electromagnetic gauge invariance. The resonant pole (Δ ++) and background contributions (ρ 0, σ, Δ, and neutron states) are separated according to the principles of the analytic S-matrix theory. The mass and width parameters (...)

31. J. S. R. Chisholm & R. S. Farwell (1995). Unified Spin Gauge Model and the Top Quark Mass. Foundations of Physics 25 (10):1511-1522.
Spin gauge models use a real Clifford algebraic structure Rp,q associated with a real manifold of dimension p + q to describe the fundamental interactions of elementary particles. This review provides a comparison between those models and the standard model, indicating their similarities and differences. By contrast with the standard model, the spin gauge model based on R3,8 generates intermediate boson mass terms without the need to use the Higgs-Kibble mechanism and produces a precise prediction for the mass of the (...)

32. The physical meaning of the particularly simple non-degenerate supermetric, introduced in the previous part by the authors, is elucidated and the possible connection with processes of topological origin in high energy physics is analyzed and discussed. A new possible mechanism of the localization of the fields in a particular sector of the supermanifold is proposed and the similarity and differences with a 5-dimensional warped model are shown. The relation with gauge theories of supergravity based in the OSP(1/4) group is explicitly given (...)

33. R. Eugene Collins (1996). Differentiable Probabilities: A New Viewpoint on Spin, Gauge Invariance, Gauge Fields, and Relativistic Quantum Mechanics. [REVIEW] Foundations of Physics 26 (11):1469-1527.
A new approach to developing formulisms of physics based solely on laws of mathematics is presented. From simple, classical statistical definitions for the observed space-time position and proper velocity of a particle having a discrete spectrum of internal states we derive a generalized Schrödinger equation on the space-time manifold. This governs the evolution of an N component wave function with each component square integrable over this manifold and is structured like that for a charged particle in an electromagnetic field but (...)

34. N. C. A. Da Costa, F. A. Doria, A. F. Furtado-do-Amaral & J. A. De Barros (1994). Two Questions on the Geometry of Gauge Fields. Foundations of Physics 24 (5):783-800.
We first show that a theorem by Cartan that generalizes the Frobenius integrability theorem allows us (given certain conditions) to obtain noncurvature solutions for the differential Bianchi conditions and for higher-degree similar relations. We then prove that there is no algorithmic procedure to determine, for a reasonable restricted algebra of functions on spacetime, whether a given connection form satisfies the preceding conditions. A parallel result gives a version of Gödel's first incompleteness theorem within an (axiomatized) theory of gauge fields.

35. C. Dariescu & Marina Dariescu (1991). U(1) Gauge Theory of the Quantum Hall Effect. Foundations of Physics 21 (11):1329-1333.
The solution of the Klein-Gordon equation for a complex scalar field in the presence of an electrostatic field orthogonal to a magnetostatic field is analyzed. Considerations concerning the quantum Hall-type evolution are presented also. Using the Hamiltonian with a self-interaction term, we obtain a critical value for the magnetic field in the case of the spontaneous symmetry breaking.

36. C. Dariescu & Marina Dariescu (1991). U(1) Gauge Theory for Charged Bosonic Fields on R×S 3 Topology. Foundations of Physics 21 (11):1323-1327.
A model for U(1) gauge theories over a compact Lie group is described using R×S 3 as background space. A comparison with other results is given. Electrodynamics equations are obtained. Finally, some considerations and observations about gravity on R×S 3 space are presented.

37. Ciprian Dariescu & Marina-Aura Dariescu (1994). SU(2)×U(1) Gauge Theory of Bosonic and Fermionic Fields in S 3×R Space-Time. Foundations of Physics 24 (11):1577-1582.

38. Marina-Aura Dariescu, C. Dariescu & I. Gottlieb (1995). Gauge Theory of Fermions on R×S 3 Spacetime. Foundations of Physics 25 (6):959-963.
A Lorentz-invariant gauge theory for massive fermions on R × S 3 spacetime is built up. Using the symmetry of S 3, we obtain a Dirac-type equation and derive the expression of the fermionic propagator. Finally, starting from the SU(N) gauge-invariant Lagrangian, we obtain the set of Dirac-Yang-Mills equations on R × S 3 spacetime, pointing out major differences from the Minkowskian case.

39. Marina-Aura Dariescu, C. Dariescu & I. Gottlieb (1995). Gauge Theory of Fermions on R×S 3 Spacetime. Foundations of Physics 25 (6):959-963.

40. O. Costa de Beauregard (2004). To Believe Or Not Believe In The A Potential, That's a Question. Flux Quantization in Autistic Magnets. Prediction of a New Effect. Foundations of Physics 34 (11):1695-1702.
Electromagnetic gauge as an integration condition was my wording in previous publications. I argue here, on the examples of the Möllenstaedt-Bayh and Tonomura tests of the Aharonov–Bohm (AB) effect, that not only the trapped flux Φ but also, under the integration condition A ≡ 0 if Φ = 0, the local value of the vector potential is measured.

41. O. Costa de Beauregard (1992). Electromagnetic Gauge as an Integration Condition: De Broglie's Argument Revisited and Expanded. [REVIEW] Foundations of Physics 22 (12):1485-1494.
Einstein's mass-energy equivalence law, argues de Broglie, by fixing the zero of the potential energy of a system, ipso facto selects a gauge in electromagnetism. We examine how this works in electrostatics and in magnetostatics and bring in, as a “trump card,” the familiar, but highly peculiar, system consisting of a toroidal magnet m and a current coil c, where none of the mutual energy W resides in the vacuum. We propose the principle of a crucial test for measuring the fractions (...)

42. J. A. De Wet (1987). Nuclear Structure on a Grassmann Manifold. Foundations of Physics 17 (10):993-1018.
Products of particlelike representations of the homogeneous Lorentz group are used to construct the degrees of spin angular momentum of a composite system of protons and neutrons. If a canonical labeling system is adopted for each state, a shell structure emerges. Furthermore the use of the Dirac ring ensures that the spin is characterized by half-angles in accord with the neutron-rotation experiment. It is possible to construct a Clebsch-Gordan decomposition to reduce a state of complex angular momentum into simpler states (...)

43. Francisco Antonio Doria (2009). Theoretical Physics: A Primer for Philosophers of Science. Principia 13 (2):195-232.
We give an overview of the main areas in theoretical physics, with emphasis on their relation to Lagrangian formalism in classical mechanics. This review covers classical mechanics; the road from classical mechanics to Schrodinger's quantum mechanics; electromagnetism, special and general relativity, and (very briefly) gauge field theory and the Higgs mechanism. We shun mathematical rigor in favor of a straightforward presentation.

44. W. Drechsler (1992). Quantized Fiber Dynamics for Extended Elementary Objects Involving Gravitation. Foundations of Physics 22 (8):1041-1077.
The geometro-stochastic quantization of a gauge theory for extended objects based on the (4, 1)-de Sitter group is used for the description of quantized matter in interaction with gravitation. In this context a Hilbert bundle ℋ over curved space-time B is introduced, possessing the standard fiber ℋ $_{\bar \eta }^{(\rho )}$, being a resolution kernel Hilbert space (with resolution generator $\tilde \eta$ and generalized coherent state basis) carrying a spin-zero phase space representation of G=SO(4, 1) belonging to (...)

45. W. Drechsler (1989). Modified Weyl Theory and Extended Elementary Objects. Foundations of Physics 19 (12):1479-1497.
To represent extension of objects in particle physics, a modified Weyl theory is used by gauging the curvature radius of the local fibers in a soldered bundle over space-time possessing a homogeneous space G/H of the (4, 1)-de Sitter group G as fiber. Objects with extension determined by a fundamental length parameter R0 appear as islands D(i) in space-time characterized by a geometry of the Cartan-Weyl type (i.e., involving torsion and modified Weyl degrees of freedom). Farther away from the domains (...)

46. Wolfgang Drechsler & Eduard Prugovečki (1991). Geometro-Stochastic Quantization of a Theory for Extended Elementary Objects. Foundations of Physics 21 (5):513-546.
The geometro-stochastic quantization of a gauge theory based on the (4,1)-de Sitter group is presented. The theory contains an intrinsic elementary length parameter R of geometric origin taken to be of a size typical for hadron physics. Use is made of a soldered Hilbert bundle ℋ over curved spacetime carrying a phase space representation of SO(4, 1) with the Lorentz subgroup related to a vierbein formulation of gravitation. The typical fiber of ℋ is a resolution kernel Hilbert space (...)

47. I. H. Duru (1993). Casimir Force Between Two Aharonov-Bohm Solenoids. Foundations of Physics 23 (5):809-818.
The vacuum structure for the massive charged scalar field in the region of two parallel, infinitely long and thin solenoids confining the fluxes n 1 and n 2 is studied. By using the Green function method, it is found that the vacuum expectation value of the system's energy has a finite mutual interaction term depending on the distance a between the solenoids, which implies an attractive force per unit length given by F_{n_1 n_2} = −(ℏc/π^2)(n_1 n_2)^2/a^3.

48. John Earman (2002). Gauge Matters. Proceedings of the Philosophy of Science Association 2002 (3):S209--20.
The constrained Hamiltonian formalism is recommended as a means for getting a grip on the concepts of gauge and gauge transformation. This formalism makes it clear how the gauge concept is relevant to understanding Newtonian and classical relativistic theories as well as the theories of elementary particle physics; it provides an explication of the vague notions of "local" and "global" gauge transformations; it explains how and why a fibre bundle structure emerges for theories which do not wear their bundle structure (...)

49. M. W. Evans (1995). The Charge Quantization Condition in O(3) Vacuum Electrodynamics. Foundations of Physics 25 (1):175-181.
The existence of the longitudinal field B (3) in the vacuum implies that the gauge group of electrodynamics is O(3), and not U(1) [or O(2)]. This results directly in the charge quantization condition e=h(ϰ/A (0)). This condition is derived independently in this paper from the relativistic motion of one electron in the field and is shown to be that in which the electron travels infinitesimally close to the speed of light.
https://www.fastcalculus.com/calculus-other-calculus-problems-23728/
# Calculus: Other Calculus Problems – #23728

Question: A rectangular tank with a base 6 meters by 8 meters and a height of 7 meters is currently filled with water to a height of 5 meters. The water is to be pumped out to a height 1 meter above the top of the tank. Write an integral that gives the amount of work that is required to pump out enough water to leave 2 meters of water in the tank. (Note: the density of the water is 1000 kg/m^3 and g = 9.8 m/sec^2.) DO NOT ATTEMPT TO EVALUATE THE INTEGRAL; JUST SET IT UP.
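One standard way to set this up, taking $y$ to be the height in meters measured from the bottom of the tank (so the water to be removed occupies $2 \le y \le 5$, each slab is lifted to the level $y = 8$, i.e. one meter above the 7 m tall tank, and the horizontal cross-section has area $6 \times 8 = 48\ \mathrm{m}^2$):

```latex
% Work = sum over horizontal slabs of (mass of slab) x g x (lift distance)
%      = integral of rho * g * (area dy) * (8 - y) over 2 <= y <= 5.
\[
W \;=\; \int_{2}^{5} \underbrace{(1000)(9.8)}_{\rho g}\,
        \underbrace{48\,dy}_{\text{slab volume}}\,
        \underbrace{(8-y)}_{\text{lift distance}}
  \;=\; \int_{2}^{5} 470{,}400\,(8-y)\,dy
  \quad\text{(in joules)}.
\]
```

Per the problem's instructions, the integral is left unevaluated.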
https://de.scribd.com/document/281124343/qutip-doc-3-1-0
# QuTiP: Quantum Toolbox in Python

Release 3.1.0

December 31, 2014

Contents

1 Frontmatter
   1.2 Citing This Project
   1.3 Funding
   1.4 About QuTiP
   1.5 Contributing to QuTiP
2 Installation
   2.1 General Requirements
   2.2 Platform-independent installation
   2.3 Get the source code
   2.4 Installing from source
   2.5 Installation on Ubuntu Linux
   2.6 Installation on Mac OS X (10.8+)
   2.7 Installation on Windows
   2.8 Optional Installation Options
   2.9 Verifying the Installation
   2.10 Checking Version Information using the About Function
3 Users Guide
   3.1 Guide Overview
   3.2 Basic Operations on Quantum Objects
   3.3 Manipulating States and Operators
   3.4 Using Tensor Products and Partial Traces
   3.5 Time Evolution and Quantum System Dynamics
   3.6 Solving for Steady-State Solutions
   3.7 An Overview of the Eseries Class
   3.8 Two-time correlation functions
   3.9 Plotting on the Bloch Sphere
   3.10 Visualization of quantum states and processes
   3.11 Parallel computation
   3.12 Saving QuTiP Objects and Data Sets
   3.13 Generating Random Quantum States & Operators
   3.14 Modifying Internal QuTiP Settings
4 API documentation
   4.1 Classes
   4.2 Functions
5 Change Log
   5.1 Version 3.1.0 (January 1, 2015)
   5.2 Version 3.0.1 (Aug 5, 2014)
   5.3 Version 3.0.0 (July 17, 2014)
   5.4 Version 2.2.0 (March 01, 2013)
   5.5 Version 2.1.0 (October 05, 2012)
   5.6 Version 2.0.0 (June 01, 2012)
   5.7 Version 1.1.4 (May 28, 2012)
   5.8 Version 1.1.3 (November 21, 2011)
   5.9 Version 1.1.2 (October 27, 2011)
   5.10 Version 1.1.1 (October 25, 2011)
   5.11 Version 1.1.0 (October 04, 2011)
   5.12 Version 1.0.0 (July 29, 2011)
6 Developers
   6.1 Lead Developers
   6.2 Contributors
Bibliography
Python Module Index
Index

CHAPTER ONE

FRONTMATTER

This document contains a user guide and automatically generated API documentation for QuTiP. A PDF version of this text is available at the documentation page.

Author: P.D. Nation
Address: Department of Physics, Korea University, Seongbuk-gu Seoul, 136-713 South Korea

Author: J.R. Johansson
Address: iTHES Research Group, RIKEN, Wako-shi Saitama, 351-0198 Japan

Version: 3.1.0
Status: Released (January 1, 2015)

## 1.2 Citing This Project

If you find this project useful, then please cite:

J. R. Johansson, P.D. Nation, and F. Nori, "QuTiP 2: A Python framework for the dynamics of open quantum systems", Comp. Phys. Comm. 184, 1234 (2013).

or

J. R. Johansson, P.D. Nation, and F. Nori, "QuTiP: An open-source Python framework for the dynamics of open quantum systems", Comp. Phys. Comm. 183, 1760 (2012).

## 1.3 Funding

The development of QuTiP has been partially supported by the Japanese Society for the Promotion of Science Foreign Postdoctoral Fellowship Program under grants P11202 (PDN) and P11501 (JRJ). Additional funding comes from RIKEN, Kakenhi grant Nos. 2301202 (PDN), 2302501 (JRJ), and Korea University.

## 1.4 About QuTiP

Every quantum system encountered in the real world is an open quantum system. For although much care is taken experimentally to eliminate the unwanted influence of external interactions, there remains, if ever so slight, a coupling between the system of interest and the external world. In addition, any measurement performed on the system necessarily involves coupling to the measuring device, therefore introducing an additional source of external influence. Consequently, developing the necessary tools, both theoretical and numerical, to account for the interactions between a system and its environment is an essential step in understanding the dynamics of quantum systems.
In general, for all but the most basic of Hamiltonians, an analytical description of the system dynamics is not possible, and one must resort to numerical simulations of the equations of motion. In absence of a quantum computer, these simulations must be carried out using classical computing techniques, where the exponentially increasing dimensionality of the underlying Hilbert space severely limits the size of system that can be efficiently simulated. However, in many fields such as quantum optics, trapped ions, superconducting circuit devices, and most recently nanomechanical systems, it is possible to design systems using a small number of effective oscillator and spin components, excited by a limited number of quanta, that are amenable to classical simulation in a truncated Hilbert space. The Quantum Toolbox in Python, or QuTiP, is a fully open-source implementation of a framework written in the Python programming language designed for simulating the open quantum dynamics for systems such as those listed above. This framework distinguishes itself from the other available software solutions in providing QuTiP relies entirely on open-source software. You are free to modify and use it as you wish with no licensing fees or limitations. QuTiP is based on the Python scripting language, providing easy to read, fast code generation without the need to compile after modification. The numerics underlying QuTiP are time-tested algorithms that run at C-code speeds, thanks to the Numpy and Scipy libraries, and are based on many of the same algorithms used in propriety software. QuTiP allows for solving the dynamics of Hamiltonians with arbitrary time-dependence, including collapse operators. Time-dependent problems can be automatically compiled into C-code at run-time for increased performance. Takes advantage of the multiple processing cores found in essentially all modern computers. QuTiP was designed from the start to require a minimal learning curve for those users who have experience using the popular quantum optics toolbox by Sze M. Tan. Includes the ability to create high-quality plots, and animations, using the excellent Matplotlib package. For detailed information about new features of each release of QuTiP, see the Change Log. ## 1.5 Contributing to QuTiP We welcome anyone who is interested in helping us make QuTiP the best package for simulating quantum systems. Anyone who contributes will be duly recognized. Even small contributions are noted. See Contributors for a list of people who have helped in one way or another. If you are interested, please drop us a line at the QuTiP discussion group webpage. 6 CHAPTER TWO INSTALLATION ## 2.1 General Requirements QuTiP depends on several open-source libraries for scientific computing in the Python programming language. The following packages are currently required: Package Version Details Python 2.7+ Version 3.3+ is highly recommended. Numpy 1.7+ Not tested on lower versions. Scipy 0.14+ Lower versions have missing features. Matplotlib 1.2.0+ Some plotting does not work on lower versions. Cython 0.15+ Needed for compiling some time-dependent Hamiltonians. GCC 4.2+ Needed for compiling Cython files. Compiler Fortran Fortran 90 Needed for compiling the optional Fortran-based Monte Carlo solver. Compiler BLAS library 1.2+ Optional, Linux & Mac only. Needed for installing Fortran Monte Carlo solver. Mayavi 4.1+ Optional. Needed for using the Bloch3d class. Linux only. Needed for compiling Cython files. LaTeX TexLive Optional. 
Needed if using LaTeX in figures. 2009+ nose 1.1.2+ Optional. For running tests. scikits.umfpack 5.2.0+ Optional. Faster (~2-5x) steady state calculations. As of version 2.2, QuTiP includes an optional Fortran-based Monte Carlo solver that has some performance benefit over the Python-based solver when simulating small systems. In order to install this package you must have a Fortran compiler (for example gfortran) and BLAS development libraries. At present, these packages are tested only on the Linux and OS X platforms. ## 2.2 Platform-independent installation Often the easiest way is to install QuTiP is to use the Python package manager pip. pip install qutip ## Or, optionally, to also include the Fortran-based Monte Carlo solver: pip install qutip --install-option=--with-f90mc ## 2.3 Get the source code Official releases of QuTiP are available from the download section on the projects web pages and the latest source code is available in our Github repository http://github.com/qutip In general we recommend users to use the latest stable release of QuTiP, but if you are interested in helping us out with development or wish to submit bug fixes, then use the latest development version from the Github repository. ## 2.4 Installing from source Installing QuTiP from source requires that all the dependencies are satisfied. The installation of these dependencies is different on each platform, and detailed instructions for Linux (Ubuntu), Mac OS X and Windows are given below. Regardless of platform, to install QuTiP from the source code run: sudo python setup.py install ## To also include the optional Fortran Monte Carlo solver, run: sudo python setup.py install --with-f90mc ## 2.5 Installation on Ubuntu Linux Using QuTiPs PPA The easiest way to install QuTiP in Ubuntu (14.04 and later) is to use the QuTiP PPA sudo apt-get update sudo apt-get install python-qutip ## A Python 3 version is also available, and can be installed using: sudo apt-get install python3-qutip With this method the most important dependencies are installed automatically, and when a new version of QuTiP is released it can be upgraded through the standard package management system. In addition to the required dependencies, it is also strongly recommended that you install the texlive-latex-extra package: sudo apt-get install texlive-latex-extra ## Manual installation of dependencies First install the required dependencies using: sudo apt-get install python-dev cython python-setuptools python-nose sudo apt-get install python-numpy python-scipy python-matplotlib Then install QuTiP from source following the instructions given above. Alternatively (or additionally), to install a Python 3 environment, use: sudo apt-get install python3-dev cython3 python3-setuptools python3-nose sudo apt-get install python3-numpy python3-scipy python3-matplotlib and then do the installation from source using python3 instead of python. 
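Before building from source, it can be convenient to confirm that the required Python packages are importable and recent enough. The short sketch below is not part of the official installation steps; the minimum versions are simply those quoted in the requirements table earlier in this chapter.

# Quick sanity check of the core build dependencies (illustrative only).
import numpy
import scipy
import matplotlib
import Cython

requirements = [("Numpy", numpy, "1.7"),
                ("Scipy", scipy, "0.14"),
                ("Matplotlib", matplotlib, "1.2.0"),
                ("Cython", Cython, "0.15")]

for name, module, minimum in requirements:
    # each of these packages exposes a __version__ string
    print("%-12s %-8s (need >= %s)" % (name, module.__version__, minimum))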
Optional, but recommended, dependencies can be installed using:

sudo apt-get install texlive-latex-extra  # recommended for plotting
sudo apt-get install mayavi2              # optional, for Bloch3d only
sudo apt-get install libblas-dev          # optional, for Fortran Monte Carlo solver
sudo apt-get install liblapack-dev        # optional, for Fortran Monte Carlo solver
sudo apt-get install gfortran             # optional, for Fortran Monte Carlo solver

## 2.6 Installation on Mac OS X (10.8+)

Setup Using Homebrew

The latest version of QuTiP can be quickly installed on OS X using Homebrew and the automated installation shell scripts: the Python 2.7 installation script and the Python 3.4 installation script. Having downloaded the script corresponding to the version of Python you want to use, the installation script can be run from the terminal using (replacing X with 2 or 3):

sh install_qutip_pyX.sh

The script will then install Homebrew and the required QuTiP dependencies before installing QuTiP itself and running the built-in test suite. Any errors in the Homebrew configuration will be displayed at the end. Using Python 2.7 or 3.4, the Python command-line and IPython interpreters can be run by calling python and ipython, or python3 and ipython3, respectively. If you have installed other packages in the /usr/local/ directory, or have changed the permissions of any of its sub-directories, then this script may fail to install all the necessary tools automatically.

## Setup Using Macports

If you have not done so already, install the Apple Xcode developer tools from the Apple App Store. After installation, open Xcode and go to: Preferences -> Downloads, and install the Command Line Tools. On the Mac OS, you can install the required libraries via MacPorts. After installation, the necessary ports for QuTiP may be installed via (replace 34 with 27 if you want Python 2.7):

sudo port install py34-scipy
sudo port install py34-matplotlib +latex
sudo port install py34-cython
sudo port install py34-ipython +notebook+parallel
sudo port install py34-pip

Now, we want to tell OS X which Python and iPython we are going to use:

sudo port select python python34
sudo port select ipython ipython34
sudo port select pip pip34

We now want to set the MacPorts compiler to the vanilla GCC version. From the command line type

port select gcc

which will bring up a list of installed compilers, such as

Available versions for gcc:
    mp-gcc48
    none (active)

We want to set the compiler to the gcc4x compiler, where x is the highest number available, in this case mp-gcc48 (the mp- does not matter). To do this type

sudo port select gcc mp-gcc48

Running port select again should give

Available versions for gcc:
    mp-gcc48 (active)
    none

QuTiP, including the optional Fortran Monte Carlo solver, can then be installed with pip:

sudo pip install qutip --install-option=--with-f90mc

Warning: Having both macports and homebrew installations on the same machine is not recommended, and can lead to QuTiP installation problems.

## Setup via SciPy Superpack

A third option is to install the required Python packages using the SciPy Superpack. Further information on the Superpack is available from its webpage.

Anaconda CE Distribution

Finally, one can also use the Anaconda CE package to install all of QuTiP.

## 2.7 Installation on Windows

QuTiP is primarily developed for Unix-based platforms such as Linux and Mac OS X, but it can also be used on Windows. We have limited experience and ability to help troubleshoot problems on Windows, but the following installation steps have been reported to work:

1. Install the Python(X,Y) distribution (tested with version 2.7.3.1).
Other Python distributions, such as Enthought Python Distribution or Anaconda CE have also been reported to work. 2. When installing Python(x,y), explicitly select to include the Cython package in the installation. This package is not selected by default. 3. Add the following content to the file C:/Python27/Lib/distutils/distutils.cfg (or create the file if it does not [build] compiler = mingw32 [build_ext] compiler = mingw32 The directory where the distutils.cfg file should be placed might be different if you have installed the Python environment in a different location than in the example above. 4. Obtain the QuTiP source code and installed it following the instructions given above. Note: In some cases, to get the dynamic compilation of Cython code to work, it might be necessary to edit the PATH variable and make sure that C:\MinGW32-xy\bin appears either first in the PATH list, or possibly right after C:\Python27\Lib\site-packages\PyQt4. This is to make sure that the right version of the MinGW compiler is used if more than one is installed (not uncommon under Windows, since many packages are distributed and installed with their own version of all dependencies). ## 2.8 Optional Installation Options UMFPACK Linear Solver As of SciPy 0.14+, the umfpack linear solver routines for solving large-scale sparse linear systems have been replaced due to licensing restrictions. The default method for all sparse linear problems is now the SuperLU library. However, scipy still includes the ability to call the umfpack library via the scikits.umfpack module. In our experience, the umfpack solver is 2-5x faster than the SuperLU routines, which is a very noticeable performance increase when used for solving steady state solutions. We have an updated scikits.umfpack module available at http://github.com/nonhermitian/umfpack that can be installed to have SciPy find and use the umfpack library. 10 ## Optimized BLAS Libraries QuTiP is designed to take advantage of some of the optimized BLAS libraries that are available for NumPy. At present, this includes the OPENBLAS and MKL libraries. If NumPy is built against these libraries, then QuTiP will take advantage of the performance gained by using these optimized tools. As these libraries are multithreaded, you can change the number of threads used in these packages by adding: >>> import os at the top of your Python script files, or iPython notebooks, and then loading the QuTiP framework. If these commands are not present, then QuTiP automatically sets the number of threads to one. ## 2.9 Verifying the Installation QuTiP includes a collection of built-in test scripts to verify that an installation was successful. To run the suite of tests scripts you must have the nose testing library. After installing QuTiP, leave the installation directory, run Python (or iPython), and call: >>> import qutip.testing as qt >>> qt.run() If successful, these tests indicate that all of the QuTiP functions are working properly. If any errors occur, please check that you have installed all of the required modules. See the next section on how to check the installed versions of the QuTiP dependencies. If these tests still fail, then head on over to the QuTiP Discussion Board and post a message detailing your particular issue. ## 2.10 Checking Version Information using the About Function QuTiP includes an about function for viewing information about QuTiP and the important dependencies installed on your system. 
To view this information: In [1]: from qutip import * QuTiP: Quantum Toolbox in Python Paul D. Nation & Robert J. Johansson QuTiP Version: Numpy Version: Scipy Version: Cython Version: Matplotlib Version: Fortran mcsolver: scikits.umfpack: Python Version: Platform Info: Installation path: 3.1.0 1.9.1 0.14.0 0.21.1 1.4.2 True False 2.7.9 Darwin (x86_64) /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site 11 CHAPTER THREE USERS GUIDE ## 3.1 Guide Overview The goal of this guide is to introduce you to the basic structures and functions that make up QuTiP. This guide is divided up into several sections, each highlighting a specific set of functionalities. In combination with the examples that can be found on the project web page http://qutip.org/tutorials.html, this guide should provide a more or less complete overview. In addition, the API documentation for each function is located at the end of this guide. Organization QuTiP is designed to be a general framework for solving quantum mechanics problems such as systems composed of few-level quantum systems and harmonic oscillators. To this end, QuTiP is built from a large (and ever growing) library of functions and classes; from qutip.states.basis to qutip.wigner. The general organization of QuTiP, highlighting the important API available to the user, is shown in the QuTiP tree-diagram of user accessible functions and classes. ## 3.2 Basic Operations on Quantum Objects First things first Warning: Do not run QuTiP from the installation directory. To load the qutip modules, we must first call the import statement: In [1]: from qutip import * that will load all of the user available functions. Often, we also need to import the Numpy and Matplotlib libraries with: In [2]: import numpy as np Note that, in the rest of the documentation, functions are written using qutip.module.function() notation which links to the corresponding function in the QuTiP API: Functions. However, in calling import *, we have already loaded all of the QuTiP modules. Therefore, we will only need the function name and not the complete path when calling the function from the interpreter prompt or a Python script. ## The quantum object class Introduction The key difference between classical and quantum mechanics lies in the use of operators instead of numbers as variables. Moreover, we need to specify state vectors and their properties. Therefore, in computing the dynamics of quantum systems we need a data structure that is capable of encapsulating the properties of a quantum operator and ket/bra vectors. The quantum object class, qutip.Qobj, accomplishes this using matrix representation. To begin, let us create a blank Qobj: 13 ## Figure 3.1: QuTiP tree-diagram of user accessible functions and classes. In [3]: Qobj() Out[3]: Quantum object: dims = [[1], [1]], shape = [1, 1], type = oper, isherm = True Qobj data = [[ 0.]] where we see the blank Qobj object with dimensions, shape, and data. Here the data corresponds to a 1x1dimensional matrix consisting of a single zero entry. Hint: By convention, Class objects in Python such as Qobj() differ from functions in the use of a beginning capital letter. We can create a Qobj with a user defined data set by passing a list or array of data into the Qobj: In [4]: Qobj([[1],[2],[3],[4],[5]]) Out[4]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 1.] [ 2.] 14 [ 3.] [ 4.] 
[ 5.]] In [5]: x = np.array([[1, 2, 3, 4, 5]]) In [6]: Qobj(x) Out[6]: Quantum object: dims = [[1], [5]], shape = [1, 5], type = bra Qobj data = [[ 1. 2. 3. 4. 5.]] In [7]: r = np.random.rand(4, 4) In [8]: Qobj(r) Out[8]: Quantum object: dims = [[4], [4]], shape = [4, 4], type = oper, isherm = False Qobj data = [[ 0.13271688 0.96498406 0.6217972 0.05120659] [ 0.95073525 0.4422577 0.93436513 0.39684026] [ 0.14249098 0.57866168 0.75444556 0.95474959] [ 0.43023463 0.67188093 0.42813227 0.53413365]] Notice how both the dims and shape change according to the input data. Although dims and shape appear to have the same function, the difference will become quite clear in the section on tensor products and partial traces. Note: If you are running QuTiP from a python script you must use the print function to view the Qobj attributes. States and operators Manually specifying the data for each quantum object is inefficient. Even more so when most objects correspond to commonly used types such as the ladder operators of a harmonic oscillator, the Pauli spin operators for a twolevel system, or state vectors such as Fock states. Therefore, QuTiP includes predefined objects for a variety of states: States Command (# Inputs means optional) Fock state ket vector basis(N,#m)/fock(N,#m) N = number of levels in Hilbert space, m = level containing excitation (0 if no m given) Fock density matrix (outer fock_dm(N,#p) same as basis(N,m) / fock(N,m) product of basis) Coherent state coherent(N,alpha)alpha = complex number (eigenvalue) for requested coherent state Coherent density matrix coherent_dm(N,alpha) same as coherent(N,alpha) (outer product) Thermal density matrix thermal_dm(N,n) n = particle number expectation value (for n particles) and operators: 15 Operators Identity Lowering (destruction) operator Raising (creation) operator Number operator Single-mode displacement operator Single-mode squeezing operator Sigma-X Sigma-Y Sigma-Z Sigma plus Sigma minus Higher spin operators Command (# means optional) qeye(N) destroy(N) Inputs create(N) same as above ## N = number of levels in Hilbert space. same as above num(N) same as above displace(N,alpha) N=number of levels in Hilbert space, alpha = complex displacement amplitude. squeeze(N,sp) N=number of levels in Hilbert space, sp = squeezing parameter. sigmax() sigmay() sigmaz() sigmap() sigmam() jmat(j,#s) j = integer or half-integer representing spin, s = x, y, z, +, or - As an example, we give the output for a few of these functions: In [9]: basis(5,3) Out[9]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 0.] [ 1.] [ 0.]] In [10]: coherent(5,0.5-0.5j) Out[10]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.77880170+0.j ] [ 0.38939142-0.38939142j] [ 0.00000000-0.27545895j] [-0.07898617-0.07898617j] [-0.04314271+0.j ]] In [11]: destroy(4) Out[11]: Quantum object: dims = [[4], [4]], shape = [4, 4], type = oper, isherm = False Qobj data = [[ 0. 1. 0. 0. ] [ 0. 0. 1.41421356 0. ] [ 0. 0. 0. 1.73205081] [ 0. 0. 0. 0. ]] In [12]: sigmaz() Out[12]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 1. 0.] [ 0. -1.]] In [13]: jmat(5/2.0,'+') Out[13]: Quantum object: dims = [[6], [6]], shape = [6, 6], type = oper, isherm = False Qobj data = 16 [[ [ [ [ [ [ 0. 0. 0. 0. 0. 0. 2.23606798 0. 0. 0. 0. 0. 0. 2.82842712 0. 0. 0. 0. 0. 0. 3. 0. 0. 0. 0. 0. 0. 2.82842712 0. 0. 0. ] 0. ] 0. ] 0. ] 2.23606798] 0. 
]] Qobj attributes We have seen that a quantum object has several internal attributes, such as data, dims, and shape. These can be accessed in the following way: In [14]: q = destroy(4) In [15]: q.dims Out[15]: [[4], [4]] In [16]: q.shape Out[16]: [4, 4] In general, the attributes (properties) of a Qobj object (or any Python class) can be retrieved using the Q.attribute notation. In addition to the attributes shown with the print function, the Qobj class also has the following: Property AtDescription tribute Data Q.data Matrix representing state or operator DimenQ.dims List keeping track of shapes for individual components of a multipartite system (for sions tensor products and partial traces). Shape Q.shape Dimensions of underlying data matrix. is Hermi- Q.ishermIs the operator Hermitian or not? tian? Type Q.type Is object of type ket, bra, oper, or super? Figure 3.2: The Qobj Class viewed as a container for the properties need to characterize a quantum operator or state vector. For the destruction operator above: In [17]: q.type Out[17]: 'oper' In [18]: q.isherm Out[18]: False 17 In [19]: q.data Out[19]: <4x4 sparse matrix of type '<type 'numpy.complex128'>' with 3 stored elements in Compressed Sparse Row format> The data attribute returns a message stating that the data is a sparse matrix. All Qobj instances store their data as a sparse matrix to save memory. To access the underlying dense matrix one needs to use the qutip.Qobj.full function as described below. Qobj Math The rules for mathematical operations on Qobj instances are similar to standard matrix arithmetic: In [20]: q = destroy(4) In [21]: x = sigmax() In [22]: q + 5 Out[22]: Quantum object: dims = [[4], [4]], shape = [4, 4], type = oper, isherm = False Qobj data = [[ 5. 1. 0. 0. ] [ 0. 5. 1.41421356 0. ] [ 0. 0. 5. 1.73205081] [ 0. 0. 0. 5. ]] In [23]: x * x Out[23]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 1. 0.] [ 0. 1.]] In [24]: q ** 3 Out[24]: Quantum object: dims = [[4], [4]], shape = [4, 4], type = oper, isherm = False Qobj data = [[ 0. 0. 0. 2.44948974] [ 0. 0. 0. 0. ] [ 0. 0. 0. 0. ] [ 0. 0. 0. 0. ]] In [25]: x / np.sqrt(2) Out[25]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 0.70710678] [ 0.70710678 0. ]] Of course, like matrices, multiplying two objects of incompatible shape throws an error: In [26]: q * x --------------------------------------------------------------------------TypeError Traceback (most recent call last) <ipython-input-26-c5138e004127> in <module>() ----> 1 q * x /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/qutip/qobj 433 434 else: --> 435 raise TypeError("Incompatible Qobj shapes") 436 437 elif isinstance(other, (list, np.ndarray)): 18 ## TypeError: Incompatible Qobj shapes In addition, the logic operators is equal == and is not equal != are also supported. ## Functions operating on Qobj class Like attributes, the quantum object class has defined functions (methods) that operate on Qobj class instances. For a general quantum object Q: Function Command Description Check Q.check_herm() Check if quantum object is Hermitian Hermicity Conjugate Q.conj() Conjugate of quantum object. Dagger Q.dag() Diagonal Q.diag() Returns the diagonal elements. Eigenenergies Q.eigenenergies() Eigenenergies (values) of operator. Eigenstates Q.eigenstates() Returns eigenvalues and eigenvectors. 
Eliminate Q.eliminate_states(inds) Returns quantum object with states in list inds removed. States Exponential Q.expm() Matrix exponential of operator. Extract States Q.extract_states(inds) Qobj with states listed in inds only. Full Q.full() Returns full (not sparse) array of Qs data. Groundstate Q.groundstate() Eigenval & eigket of Qobj groundstate. Matrix Q.matrix_element(bra,ket) Matrix element <bra|Q|ket> Element Norm Q.norm() Returns L2 norm for states, trace norm for operators. Overlap Q.overlap(state) Overlap between current Qobj and a given state. Partial Trace Q.ptrace(sel) Partial trace returning components selected using sel parameter. Permute Q.permute(order) Permutes the tensor structure of a composite object in the given order. Sqrt Q.sqrtm() Matrix sqrt of operator. Tidyup Q.tidyup() Removes small elements from Qobj. Trace Q.tr() Returns trace of quantum object. Transform Q.transform(inpt) A basis transformation defined by matrix or list of kets inpt . Transpose Q.trans() Transpose of quantum object. Unit Q.unit() Returns normalized (unit) vector Q/Q.norm(). In [27]: basis(5, 3) Out[27]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 0.] [ 1.] [ 0.]] In [28]: basis(5, 3).dag() Out[28]: Quantum object: dims = [[1], [5]], shape = [1, 5], type = bra Qobj data = [[ 0. 0. 0. 1. 0.]] In [29]: coherent_dm(5, 1) Out[29]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = 19 [[ [ [ [ [ 0.36791117 0.36774407 0.26105441 0.14620658 0.08826704 0.36774407 0.36757705 0.26093584 0.14614018 0.08822695 0.26105441 0.26093584 0.18523331 0.10374209 0.06263061 0.14620658 0.14614018 0.10374209 0.05810197 0.035077 ## In [30]: coherent_dm(5, 1).diag() Out[30]: array([ 0.36791117, 0.36757705, In [31]: coherent_dm(5, 1).full() Out[31]: array([[ 0.36791117+0.j, 0.36774407+0.j, 0.08826704+0.j], [ 0.36774407+0.j, 0.36757705+0.j, 0.08822695+0.j], [ 0.26105441+0.j, 0.26093584+0.j, 0.06263061+0.j], [ 0.14620658+0.j, 0.14614018+0.j, 0.03507700+0.j], [ 0.08826704+0.j, 0.08822695+0.j, 0.02117650+0.j]]) 0.08826704] 0.08822695] 0.06263061] 0.035077 ] 0.0211765 ]] 0.18523331, 0.05810197, 0.0211765 ]) 0.26105441+0.j, 0.14620658+0.j, 0.26093584+0.j, 0.14614018+0.j, 0.18523331+0.j, 0.10374209+0.j, 0.10374209+0.j, 0.05810197+0.j, 0.06263061+0.j, 0.03507700+0.j, ## In [32]: coherent_dm(5, 1).norm() Out[32]: 1.0 In [33]: coherent_dm(5, 1).sqrtm() Out[33]: Quantum object: dims = [[5], [5]], shape = [5, 5], Qobj data = [[ 0.36791118 0.36774407 0.26105441 0.14620658 [ 0.36774407 0.36757705 0.26093584 0.14614018 [ 0.26105441 0.26093584 0.18523331 0.10374209 [ 0.14620658 0.14614018 0.10374209 0.05810197 [ 0.08826704 0.08822695 0.06263061 0.035077 0.08826704] 0.08822695] 0.06263061] 0.035077 ] 0.0211765 ]] ## In [34]: coherent_dm(5, 1).tr() Out[34]: 1.0 In [35]: (basis(4, 2) + basis(4, 1)).unit() Out[35]: Quantum object: dims = [[4], [1]], shape = [4, 1], type = ket Qobj data = [[ 0. ] [ 0.70710678] [ 0.70710678] [ 0. ]] ## 3.3 Manipulating States and Operators Introduction In the previous guide section Basic Operations on Quantum Objects, we saw how to create states and operators, using the functions built into QuTiP. In this portion of the guide, we will look at performing basic operations with states and operators. For more detailed demonstrations on how to use and manipulate these objects, see the examples on the tutorials web page. 
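As a quick warm-up, the short sketch below combines several of the Qobj methods tabulated above. It is not taken from the guide; the 4-level space and the particular operators are chosen purely for illustration.

from qutip import destroy, basis

a = destroy(4)                # lowering operator on a 4-level Hilbert space
n = a.dag() * a               # number operator built from it

print(n.isherm)               # True: the number operator is Hermitian
print(n.eigenenergies())      # array([ 0.,  1.,  2.,  3.])
print(n.tr())                 # trace = 0 + 1 + 2 + 3 = 6.0
print(n.matrix_element(basis(4, 2).dag(), basis(4, 2)))   # <2|n|2> = (2+0j)

psi = (basis(4, 0) + basis(4, 3)).unit()   # normalized superposition state
print(psi.norm())             # 1.0 after calling unit()
print(psi.dag() * n * psi)    # expectation value of n in this state (a 1x1 Qobj)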
## State Vectors (kets or bras) Here we begin by creating a Fock qutip.states.basis vacuum state vector |0 with in a Hilbert space with 5 number states, from 0 to 4: 20 ## In [1]: vac = basis(5, 0) In [2]: vac Out[2]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 1.] [ 0.] [ 0.] [ 0.] [ 0.]] ## and then create a lowering operator qutip.operators.destroy function: ( ) corresponding to number states using the In [3]: a = destroy(5) In [4]: a Out[4]: Quantum object: dims = [[5], [5]], shape = [5, 5], Qobj data = [[ 0. 1. 0. 0. [ 0. 0. 1.41421356 0. [ 0. 0. 0. 1.73205081 [ 0. 0. 0. 0. [ 0. 0. 0. 0. ## type = oper, isherm = False 0. 0. 0. 2. 0. ] ] ] ] ]] Now lets apply the destruction operator to our vacuum state vac, In [5]: a * vac Out[5]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]] We see that, as expected, the vacuum is transformed to the zero vector. A more interesting example comes from using the adjoint of the lowering operator, the raising operator : In [6]: a.dag() * vac Out[6]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 1.] [ 0.] [ 0.] [ 0.]] The raising operator has in indeed raised the state vec from the vacuum to the |1 state. Instead of using the dagger Qobj.dag() method to raise the state, we could have also used the built in qutip.operators.create function to make a raising operator: In [7]: c = create(5) In [8]: c * vac Out[8]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] 21 [ [ [ [ 1.] 0.] 0.] 0.]] which does the same thing. We can raise the vacuum state more than once by successively apply the raising operator: In [9]: c * c * vac Out[9]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0. ] [ 0. ] [ 1.41421356] [ 0. ] [ 0. ]] ( )2 or just taking the square of the raising operator : In [10]: c ** 2 * vac Out[10]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0. ] [ 0. ] [ 1.41421356] [ 0. ] [ 0. ]] ## Applying the raising operator twice gives the expected to also apply the number operator to the state vector vac: ## + 1 dependence. We can use the product of * In [11]: c * a * vac Out[11]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 0.] [ 0.] [ 0.]] or on the |1 state: In [12]: c * a * (c * vac) Out[12]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 1.] [ 0.] [ 0.] [ 0.]] or the |2 state: In [13]: c * a * (c**2 * vac) Out[13]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0. ] [ 0. ] 22 [ 2.82842712] [ 0. ] [ 0. ]] Notice how in this last example, application of the number operator does not give the expected value = 2, but rather 2 2. This is because this last state is not normalized to unity as | = + 1 | + 1. Therefore, we should normalize our vector first: In [14]: c * a * (c**2 * vac).unit() Out[14]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 2.] [ 0.] [ 0.]] Since we are giving a demonstration of using states and operators, we have done a lot more work than we should have. For example, we do not need to operate on the vacuum state to generate a higher number Fock state. 
Instead we can use the qutip.states.basis (or qutip.states.fock) function to directly obtain the required state: In [15]: ket = basis(5, 2) In [16]: print(ket) Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 1.] [ 0.] [ 0.]] Notice how it is automatically normalized. We can also use the built in qutip.operators.num operator: In [17]: n = num(5) In [18]: print(n) Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[ 0. 0. 0. 0. 0.] [ 0. 1. 0. 0. 0.] [ 0. 0. 2. 0. 0.] [ 0. 0. 0. 3. 0.] [ 0. 0. 0. 0. 4.]] ## Therefore, instead of c * a * (c ** 2 * vac).unit() we have: In [19]: n * ket Out[19]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 2.] [ 0.] [ 0.]] 23 ## In [20]: ket = (basis(5, 0) + basis(5, 1)).unit() In [21]: print(ket) Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.70710678] [ 0.70710678] [ 0. ] [ 0. ] [ 0. ]] where we have used the qutip.Qobj.unit method to again normalize the state. Operating with the number function again: In [22]: n * ket Out[22]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0. ] [ 0.70710678] [ 0. ] [ 0. ] [ 0. ]] We can also create coherent states and squeezed states by applying the qutip.operators.displace and qutip.operators.squeeze functions to the vacuum state: In [23]: vac = basis(5, 0) In [24]: d = displace(5, 1j) In [25]: s = squeeze(5, 0.25 + 0.25j) In [26]: d * vac Out[26]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.60655682+0.j ] [ 0.00000000+0.60628133j] [-0.43038740+0.j ] [ 0.00000000-0.24104351j] [ 0.14552147+0.j ]] In [27]: d * s * vac Out[27]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.65893786+0.08139381j] [ 0.10779462+0.51579735j] [-0.37567217-0.01326853j] [-0.02688063-0.23828775j] [ 0.26352814+0.11512178j]] Of course, displacing the vacuum gives a coherent state, which can also be generated using the built in qutip.states.coherent function. Density matrices One of the main purpose of QuTiP is to explore the dynamics of open quantum systems, where the most general state of a system is not longer a state vector, but rather a density matrix. Since operations on density matrices operate identically to those of vectors, we will just briefly highlight creating and using these structures. The simplest density matrix is created by forming the outer-product | | of a ket vector: 24 ## In [28]: ket = basis(5, 2) In [29]: ket * ket.dag() Out[29]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 1. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.]] ## A similar task can also be accomplished via the qutip.states.fock_dm or qutip.states.ket2dm functions: In [30]: fock_dm(5, 2) Out[30]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 1. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.]] In [31]: ket2dm(ket) Out[31]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 1. 0. 0.] [ 0. 0. 0. 0. 0.] [ 0. 0. 0. 0. 
0.]] If we want to create a density matrix with equal classical probability of being found in the |2 or |4 number states we can do the following: In [32]: 0.5 * ket2dm(basis(5, 4)) + 0.5 * ket2dm(basis(5, 2)) Out[32]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[ 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0. ] [ 0. 0. 0.5 0. 0. ] [ 0. 0. 0. 0. 0. ] [ 0. 0. 0. 0. 0.5]] or use 0.5 * fock_dm(5, 2) + 0.5 * fock_dm(5, 4). There are also several other builtin functions for creating predefined density matrices, for example qutip.states.coherent_dm and qutip.states.thermal_dm which create coherent state and thermal state density matrices, respectively. In [33]: coherent_dm(5, 1.25) Out[33]: Quantum object: dims = [[5], [5]], shape = [5, 5], Qobj data = [[ 0.20980701 0.26141096 0.23509686 0.15572585 [ 0.26141096 0.32570738 0.29292109 0.19402805 [ 0.23509686 0.29292109 0.26343512 0.17449684 [ 0.15572585 0.19402805 0.17449684 0.11558499 [ 0.13390765 0.16684347 0.1500487 0.09939079 0.13390765] 0.16684347] 0.1500487 ] 0.09939079] 0.0854655 ]] 25 ## In [34]: thermal_dm(5, 1.25) Out[34]: Quantum object: dims = [[5], [5]], shape = [5, 5], Qobj data = [[ 0.46927974 0. 0. 0. [ 0. 0.26071096 0. 0. [ 0. 0. 0.14483942 0. [ 0. 0. 0. 0.08046635 [ 0. 0. 0. 0. ## type = oper, isherm = True 0. ] 0. ] 0. ] 0. ] 0.04470353]] QuTiP also provides a set of distance metrics for determining how close two density matrix distributions are to each other. Included are the trace distance qutip.metrics.tracedist, fidelity qutip.metrics.fidelity, Hilbert-Schmidt distance qutip.metrics.hilbert_dist, Bures distance qutip.metrics.bures_dist, and Bures angle qutip.metrics.bures_angle. In [35]: x = coherent_dm(5, 1.25) In [36]: y = coherent_dm(5, 1.25j) ## In [37]: z = thermal_dm(5, 0.125) In [38]: fidelity(x, x) Out[38]: 1.0000000208397526 In [39]: tracedist(y, y) Out[39]: 0.0 We also know that for two pure states, the trace distance (T) and the fidelity (F) are related by = 1 2. In [40]: tracedist(y, x) Out[40]: 0.9771565895267291 ## In [41]: np.sqrt(1 - fidelity(y, x) ** 2) Out[41]: 0.97715657013508528 For a pure state and a mixed state, 1 2 which can also be verified: In [42]: 1 - fidelity(x, z) ** 2 Out[42]: 0.7782890497791632 In [43]: tracedist(x, z) Out[43]: 0.8559028328862591 ## Qubit (two-level) systems Having spent a fair amount of time on basis states that represent harmonic oscillator states, we now move on to qubit, or two-level quantum systems (for example a spin-1/2). To create a state vector corresponding to a qubit system, we use the same qutip.states.basis, or qutip.states.fock, function with only two levels: In [44]: spin = basis(2, 0) Now at this point one may ask how this state is different than that of a harmonic oscillator in the vacuum state truncated to two energy levels? In [45]: vac = basis(2, 0) ## At this stage, there is no difference. This should not be surprising as we called the exact same function twice. The difference between the two comes from the action of the spin operators qutip.operators.sigmax, qutip.operators.sigmay, qutip.operators.sigmaz, qutip.operators.sigmap, and qutip.operators.sigmam on these two-level states. For example, 26 if vac corresponds to the vacuum state of a harmonic oscillator, then, as we have already seen, we can use the raising operator to get the |1 state: In [46]: vac Out[46]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 1.] 
[ 0.]] In [47]: c = create(2) In [48]: c * vac Out[48]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.] [ 1.]] For a spin system, the operator analogous to the raising operator is the sigma-plus operator qutip.operators.sigmap. Operating on the spin state gives: In [49]: spin Out[49]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 1.] [ 0.]] In [50]: sigmap() * spin Out[50]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.] [ 0.]] Now we see the difference! The qutip.operators.sigmap operator acting on the spin state returns the zero vector. Why is this? To see what happened, let us use the qutip.operators.sigmaz operator: In [51]: sigmaz() Out[51]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 1. 0.] [ 0. -1.]] In [52]: sigmaz() * spin Out[52]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 1.] [ 0.]] In [53]: spin2 = basis(2, 1) In [54]: spin2 Out[54]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.] [ 1.]] 27 ## In [55]: sigmaz() * spin2 Out[55]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.] [-1.]] The answer is now apparent. Since the QuTiP qutip.operators.sigmaz function uses the standard z-basis representation of the sigma-z spin operator, the spin state corresponds to the | state of a two-level spin system while spin2 gives the | state. Therefore, in our previous example sigmap() * spin, we raised the qubit state out of the truncated two-level Hilbert space resulting in the zero state. While at first glance this convention might seem somewhat odd, it is in fact quite handy. For one, the spin operators remain in the conventional form. Second, when the spin system is in the | state: In [56]: sigmaz() * spin Out[56]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 1.] [ 0.]] the non-zero component is the zeroth-element of the underlying matrix (remember that python uses c-indexing, and matrices start with the zeroth element). The | state therefore has a non-zero entry in the first index position. This corresponds nicely with the quantum information definitions of qubit states, where the excited | state is label as |0, and the | state by |1. If one wants to create spin operators for higher spin systems, then the qutip.operators.jmat function comes in handy. Expectation values Some of the most important information about quantum systems comes from calculating the expectation value of operators, both Hermitian and non-Hermitian, as the state or density matrix of the system varies in time. Therefore, in this section we demonstrate the use of the qutip.expect function. 
To begin: In [57]: vac = basis(5, 0) In [58]: one = basis(5, 1) In [59]: c = create(5) In [60]: N = num(5) In [61]: expect(N, vac) Out[61]: 0.0 In [62]: expect(N, one) Out[62]: 1.0 ## In [63]: coh = coherent_dm(5, 1.0j) In [64]: expect(N, coh) Out[64]: 0.9970555745806599 ## In [65]: cat = (basis(5, 4) + 1.0j * basis(5, 3)).unit() In [66]: expect(c, cat) Out[66]: 0.9999999999999998j The qutip.expect function also accepts lists or arrays of state vectors or density matrices for the second input: 28 ## In [67]: states = [(c**k * vac).unit() for k in range(5)] In [68]: expect(N, states) Out[68]: array([ 0., 1., 2., 3., # must normalize 4.]) ## In [69]: cat_list = [(basis(5, 4) + x * basis(5, 3)).unit() ....: for x in [0, 1.0j, -1.0, -1.0j]] ....: In [70]: expect(c, cat_list) Out[70]: array([ 0.+0.j, 0.+1.j, -1.+0.j, 0.-1.j]) Notice how in this last example, all of the return values are complex numbers. This is because the qutip.expect function looks to see whether the operator is Hermitian or not. If the operator is Hermitian, than the output will always be real. In the case of non-Hermitian operators, the return values may be complex. Therefore, the qutip.expect function will return an array of complex values for non-Hermitian operators when the input is a list/array of states or density matrices. Of course, the qutip.expect function works for spin states and operators: In [71]: up = basis(2, 0) In [72]: down = basis(2, 1) In [73]: expect(sigmaz(), up) Out[73]: 1.0 In [74]: expect(sigmaz(), down) Out[74]: -1.0 as well as the composite objects discussed in the next section Using Tensor Products and Partial Traces: In [75]: spin1 = basis(2, 0) In [76]: spin2 = basis(2, 1) In [77]: two_spins = tensor(spin1, spin2) In [78]: sz1 = tensor(sigmaz(), qeye(2)) In [79]: sz2 = tensor(qeye(2), sigmaz()) In [80]: expect(sz1, two_spins) Out[80]: 1.0 In [81]: expect(sz2, two_spins) Out[81]: -1.0 ## Superoperators and Vectorized Operators In addition to state vectors and density operators, QuTiP allows for representing maps that act linearly on density operators using the Kraus, Liouville supermatrix and Choi matrix formalisms. This support is based on the correspondance between linear operators acting on a Hilbert space, and vectors in two copies of that Hilbert space, vec : () [Hav03], [Wat13]. This isomorphism is implemented in QuTiP by the operator_to_vector and vector_to_operator functions: In [82]: psi = basis(2, 0) In [83]: rho = ket2dm(psi) 29 In [84]: rho Out[84]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 1. 0.] [ 0. 0.]] In [85]: vec_rho = operator_to_vector(rho) In [86]: vec_rho Out[86]: Quantum object: dims = [[[2], [2]], [1]], shape = [4, 1], type = operator-ket Qobj data = [[ 1.] [ 0.] [ 0.] [ 0.]] In [87]: rho2 = vector_to_operator(vec_rho) In [88]: (rho - rho2).norm() Out[88]: 0.0 The type attribute indicates whether a quantum object is a vector corresponding to an operator (operator-ket), or its Hermitian conjugate (operator-bra). Note that QuTiP uses the column-stacking convention for the isomorphism between () and : In [89]: import numpy as np In [90]: A = Qobj(np.arange(4).reshape((2, 2))) In [91]: A Out[91]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = False Qobj data = [[ 0. 1.] [ 2. 3.]] In [92]: operator_to_vector(A) Out[92]: Quantum object: dims = [[[2], [2]], [1]], shape = [4, 1], type = operator-ket Qobj data = [[ 0.] [ 2.] [ 1.] 
[ 3.]] Since is a vector space, linear maps on this space can be represented as matrices, often called supermatrices. Using the Qobj, the spre and spost functions, supermatrices corresponding to left- and rightmultiplication respectively can be quickly constructed. In [93]: X = sigmax() In [94]: S = spre(X) * spost(X.dag()) # Represents conjugation by X. Note that this is done automatically by the to_super function when given type=oper input. In [95]: S2 = to_super(X) In [96]: (S - S2).norm() Out[96]: 0.0 ## Quantum objects representing superoperators are denoted by type=super: 30 In [97]: S Out[97]: Quantum object: dims = [[[2], [2]], [[2], [2]]], shape = [4, 4], type = super, isherm = True Qobj data = [[ 0. 0. 0. 1.] [ 0. 0. 1. 0.] [ 0. 1. 0. 0.] [ 1. 0. 0. 0.]] Information about superoperators, such as whether they represent completely positive maps, is exposed through the iscp, istp and iscptp attributes: In [98]: S.iscp, S.istp, S.iscptp Out[98]: (True, True, True) In addition, dynamical generators on this extended space, often called Liouvillian superoperators, can be created using the liouvillian function. Each of these takes a Hamilonian along with a list of collapse operators, and returns a type="super" object that can be exponentiated to find the superoperator for that evolution. In [99]: H = 10 * sigmaz() In [100]: c1 = destroy(2) In [101]: L = liouvillian(H, [c1]) In [102]: L Out[102]: Quantum object: dims = [[[2], [2]], [[2], [2]]], shape = [4, 4], type = super, isherm = False Qobj data = [[ 0.0 +0.j 0.0 +0.j 0.0 +0.j 1.0 +0.j] [ 0.0 +0.j -0.5+20.j 0.0 +0.j 0.0 +0.j] [ 0.0 +0.j 0.0 +0.j -0.5-20.j 0.0 +0.j] [ 0.0 +0.j 0.0 +0.j 0.0 +0.j -1.0 +0.j]] In [103]: S = (12 * L).expm() Once a superoperator has been obtained, it can be converted between the supermatrix, Kraus and Choi formalisms by using the to_super, to_kraus and to_choi functions. The superrep attribute keeps track of what reprsentation is a Qobj is currently using. In [104]: J = to_choi(S) In [105]: J Out[105]: Quantum object: dims = [[[2], [2]], [[2], [2]]], shape = [4, 4], type = super, isherm = True, supe Qobj data = [[ 1.00000000e+00+0.j 0.00000000e+00+0.j 0.00000000e+00+0.j 8.07531120e-04-0.00234352j] [ 0.00000000e+00+0.j 0.00000000e+00+0.j 0.00000000e+00+0.j 0.00000000e+00+0.j ] [ 0.00000000e+00+0.j 0.00000000e+00+0.j 9.99993856e-01+0.j 0.00000000e+00+0.j ] [ 8.07531120e-04+0.00234352j 0.00000000e+00+0.j 0.00000000e+00+0.j 6.14421235e-06+0.j ]] In [106]: K = to_kraus(J) In [107]: K Out[107]: [Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = False Qobj data = [[ 1.00000000e+00 +1.34376978e-22j 0.00000000e+00 +0.00000000e+00j] 31 [ 0.00000000e+00 +0.00000000e+00j 8.07531120e-04 +2.34352424e-03j]], Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = False Qobj data = [[ -1.11923759e-13 +6.02807402e-15j 0.00000000e+00 +0.00000000e+00j] [ 0.00000000e+00 +0.00000000e+00j 1.70093171e-11 +4.18976706e-11j]], Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 0.] [ 0. 0.]], Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = False Qobj data = [[ 0. 0.99999693] [ 0. 0. ]]] ## 3.4 Using Tensor Products and Partial Traces Tensor products To describe the states of multipartite quantum systems - such as two coupled qubits, a qubit coupled to an oscillator, etc. - we need to expand the Hilbert space by taking the tensor product of the state vectors for each of the system components. 
Similarly, the operators acting on the state vectors in the combined Hilbert space (describing the coupled system) are formed by taking the tensor product of the individual operators. In QuTiP the function qutip.tensor.tensor is used to accomplish this task. This function takes as argument a collection: >>> tensor(op1, op2, op3) or a list: >>> tensor([op1, op2, op3]) of state vectors or operators and returns a composite quantum object for the combined Hilbert space. The function accepts an arbitray number of states or operators as argument. The type returned quantum object is the same as that of the input(s). For example, the state vector describing two qubits in their ground states is formed by taking the tensor product of the two single-qubit ground state vectors: In [1]: tensor(basis(2, 0), basis(2, 0)) Out[1]: Quantum object: dims = [[2, 2], [1, 1]], shape = [4, 1], type = ket Qobj data = [[ 1.] [ 0.] [ 0.] [ 0.]] ## or equivalently using the list format: In [2]: tensor([basis(2, 0), basis(2, 0)]) Out[2]: Quantum object: dims = [[2, 2], [1, 1]], shape = [4, 1], type = ket Qobj data = [[ 1.] [ 0.] [ 0.] [ 0.]] This is straightforward to generalize to more qubits by adding more component state vectors in the argument list to the qutip.tensor.tensor function, as illustrated in the following example: 32 ## In [3]: tensor((basis(2, 0) + basis(2, 1)).unit(), ...: (basis(2, 0) + basis(2, 1)).unit(), basis(2, 0)) ...: Out[3]: Quantum object: dims = [[2, 2, 2], [1, 1, 1]], shape = [8, 1], type = ket Qobj data = [[ 0.5] [ 0. ] [ 0.5] [ 0. ] [ 0.5] [ 0. ] [ 0.5] [ 0. ]] This state is slightly more complicated, describing two qubits in a superposition between the up and down states, while the third qubit is in its ground state. To construct operators that act on an extended Hilbert space of a combined system, we similarly pass a list of operators for each component system to the qutip.tensor.tensor function. For example, to form the operator that represents the simultaneous action of the operator on two qubits: In [4]: tensor(sigmax(), sigmax()) Out[4]: Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isherm = True Qobj data = [[ 0. 0. 0. 1.] [ 0. 0. 1. 0.] [ 0. 1. 0. 0.] [ 1. 0. 0. 0.]] To create operators in a combined Hilbert space that only act only on a single component, we take the tensor product of the operator acting on the subspace of interest, with the identity operators corresponding to the components that are to be unchanged. For example, the operator that represents on the first qubit in a two-qubit system, while leaving the second qubit unaffected: In [5]: tensor(sigmaz(), identity(2)) Out[5]: Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isherm = True Qobj data = [[ 1. 0. 0. 0.] [ 0. 1. 0. 0.] [ 0. 0. -1. 0.] [ 0. 0. 0. -1.]] ## Example: Constructing composite Hamiltonians The qutip.tensor.tensor function is extensively used when constructing Hamiltonians for composite systems. Here well look at some simple examples. Two coupled qubits First, lets consider a system of two coupled qubits. Assume that both qubit has equal energy splitting, and that the qubits are coupled through a interaction with strength g = 0.05 (in units where the bare qubit energy splitting is unity). 
The Hamiltonian describing this system is: In [6]: H = tensor(sigmaz(), identity(2)) + tensor(identity(2), ...: sigmaz()) + 0.05 * tensor(sigmax(), sigmax()) ...: In [7]: H Out[7]: 33 Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isherm = True Qobj data = [[ 2. 0. 0. 0.05] [ 0. 0. 0.05 0. ] [ 0. 0.05 0. 0. ] [ 0.05 0. 0. -2. ]] ## Three coupled qubits The two-qubit example is easily generalized to three coupled qubits: In [8]: H = (tensor(sigmaz(), identity(2), identity(2)) + ...: tensor(identity(2), sigmaz(), identity(2)) + ...: tensor(identity(2), identity(2), sigmaz()) + ...: 0.5 * tensor(sigmax(), sigmax(), identity(2)) + ...: 0.25 * tensor(identity(2), sigmax(), sigmax())) ...: In [9]: H Out[9]: Quantum object: dims = [[2, 2, 2], [2, 2, 2]], shape = [8, 8], type = oper, isherm = True Qobj data = [[ 3. 0. 0. 0.25 0. 0. 0.5 0. ] [ 0. 1. 0.25 0. 0. 0. 0. 0.5 ] [ 0. 0.25 1. 0. 0.5 0. 0. 0. ] [ 0.25 0. 0. -1. 0. 0.5 0. 0. ] [ 0. 0. 0.5 0. 1. 0. 0. 0.25] [ 0. 0. 0. 0.5 0. -1. 0.25 0. ] [ 0.5 0. 0. 0. 0. 0.25 -1. 0. ] [ 0. 0.5 0. 0. 0.25 0. 0. -3. ]] ## A two-level system coupled to a cavity: The Jaynes-Cummings model The simplest possible quantum mechanical description for light-matter interaction is encapsulated in the JaynesCummings model, which describes the coupling between a two-level atom and a single-mode electromagnetic field (a cavity mode). Denoting the energy splitting of the atom and cavity omega_a and omega_c, respectively, and the atom-cavity interaction strength g, the Jaynes-Cumming Hamiltonian can be constructed as: In [10]: N = 10 In [11]: omega_a = 1.0 In [12]: omega_c = 1.25 In [13]: g = 0.05 In [14]: a = tensor(identity(2), destroy(N)) In [15]: sm = tensor(destroy(2), identity(N)) In [16]: sz = tensor(sigmaz(), identity(N)) In [17]: H = 0.5 * omega_a * sz + omega_c * a.dag() * a + g * (a.dag() * sm + a * sm.dag()) ## Here N is the number of Fock states included in the cavity mode. Partial trace The partial trace is an operation that reduces the dimension of a Hilbert space by eliminating some degrees of freedom by averaging (tracing). In this sense it is therefore the converse of the tensor product. It is useful when one is interested in only a part of a coupled quantum system. For open quantum systems, this typically involves tracing 34 over the environment leaving only the system of interest. In QuTiP the class method qutip.Qobj.ptrace is used to take partial traces. qutip.Qobj.ptrace acts on the qutip.Qobj instance for which it is called, and it takes one argument sel, which is a list of integers that mark the component systems that should be kept. All other components are traced out. For example, the density matrix describing a single qubit obtained from a coupled two-qubit system is obtained via: In [18]: ## psi = tensor(basis(2, 0), basis(2, 1)) In [19]: psi.ptrace(0) Out[19]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 1. 0.] [ 0. 0.]] In [20]: psi.ptrace(1) Out[20]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 0.] [ 0. 1.]] Note that the partial trace always results in a density matrix (mixed state), regardless of whether the composite system is a pure state (described by a state vector) or a mixed state (described by a density matrix): In [21]: ## psi = tensor((basis(2, 0) + basis(2, 1)).unit(), basis(2, 0)) In [22]: psi Out[22]: Quantum object: dims = [[2, 2], [1, 1]], shape = [4, 1], type = ket Qobj data = [[ 0.70710678] [ 0. 
] [ 0.70710678] [ 0. ]] In [23]: psi.ptrace(0) Out[23]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0.5 0.5] [ 0.5 0.5]] In [24]: ## rho = tensor(ket2dm((basis(2, 0) + basis(2, 1)).unit()), fock_dm(2, 0)) In [25]: rho Out[25]: Quantum object: dims Qobj data = [[ 0.5 0. 0.5 0. [ 0. 0. 0. 0. [ 0.5 0. 0.5 0. [ 0. 0. 0. 0. = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isherm = True ] ] ] ]] In [26]: rho.ptrace(0) Out[26]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0.5 0.5] [ 0.5 0.5]] 35 ## Superoperators and Tensor Manipulations As described in Superoperators and Vectorized Operators, superoperators are operators that act on Liouville space, the vectorspace of linear operators. Superoperators can be represented using the isomorphism vec : () [Hav03], [Wat13]. To represent superoperators acting on (1 2 ) thus takes some tensor rearrangement to get the desired ordering 1 2 1 2 . In particular, this means that qutip.tensor does not act as one might expect on the results of qutip.to_super: In [27]: A = qeye([2]) In [28]: B = qeye([3]) In [29]: to_super(tensor(A, B)).dims Out[29]: [[[2, 3], [2, 3]], [[2, 3], [2, 3]]] In [30]: tensor(to_super(A), to_super(B)).dims Out[30]: [[[2], [2], [3], [3]], [[2], [2], [3], [3]]] In the former case, the result correctly has four copies of the compound index with dims [2, 3]. In the latter case, however, each of the Hilbert space indices is listed independently and in the wrong order. The qutip.super_tensor function performs the needed rearrangement, providing the most direct analog to qutip.tensor on the underlying Hilbert space. In particular, for any two type="oper" Qobjs A and B, to_super(tensor(A, B)) == super_tensor(to_super(A), to_super(B)) and operator_to_vector(tensor(A, B)) == super_tensor(operator_to_vector(A), operator_to_vector(B)). Returning to the previous example: In [31]: super_tensor(to_super(A), to_super(B)).dims Out[31]: [[[2, 3], [2, 3]], [[2, 3], [2, 3]]] ## The qutip.composite function automatically switches between qutip.tensor and qutip.super_tensor based on the type of its arguments, such that composite(A, B) returns an appropriate Qobj to represent the composition of two systems. In [32]: composite(A, B).dims Out[32]: [[2, 3], [2, 3]] In [33]: composite(to_super(A), to_super(B)).dims Out[33]: [[[2, 3], [2, 3]], [[2, 3], [2, 3]]] QuTiP also allows more general tensor manipulations that are useful for converting between superoperator representations [WBC11]. In particular, the tensor_contract function allows for contracting one or more pairs of indices. As detailed in the channel contraction tutorial, this can be used to find superoperators that represent partial trace maps. Using this functionality, we can construct some quite exotic maps, such as a map from 3 3 operators to 2 2 operators: In [34]: tensor_contract(composite(to_super(A), to_super(B)), (1, 3), (4, 6)).dims Out[34]: [[[2], [2]], [[3], [3]]] ## 3.5 Time Evolution and Quantum System Dynamics Dynamics Simulation Results Important: In QuTiP 2, the results from all of the dynamics solvers are returned as Odedata objects. This unified and significantly simplified postprocessing of simulation results from different solvers, compared to QuTiP 1. However, this change also results in the loss of backward compatibility with QuTiP version 1.x. In QuTiP 3, the Odedata class has been renamed to Result, but for backwards compatibility an alias between Result and Odedata is provided. 
36 ## The solver.Result Class Before embarking on simulating the dynamics of quantum systems, we will first look at the data structure used for returning the simulation results to the user. This object is a qutip.solver.Result class that stores all the crucial data needed for analyzing and plotting the results of a simulation. Like the qutip.Qobj class, the Result class has a collection of properties for storing information. However, in contrast to the Qobj class, this structure contains no methods, and is therefore nothing but a container object. A generic Result object result contains the following properties for storing simulation data: Property Description result.solver String indicating which solver was used to generate the data. result.times List/array of times at which simulation data is calculated. result.expect List/array of expectation values, if requested. result.states List/array of state vectors/density matrices calculated at times, if requested. result.num_expectThe number of expectation value operators in the simulation. result.num_collapse The number of collapse operators in the simulation. result.ntraj Number of Monte Carlo trajectories run. result.col_times Times at which state collapse occurred. Only for Monte Carlo solver. result.col_which Which collapse operator was responsible for each collapse in in col_times. Only used by Monte Carlo solver. result.seeds Seeds used in generating random numbers for Monte Carlo solver. Accessing Result Data To understand how to access the data in a Result object we will use an example as a guide, although we do not worry about the simulation details at this stage. Like all solvers, the Monte Carlo solver used in this example returns an Result object, here called simply result. To see what is contained inside result we can use the print function: >>> print(result) Result object with mcsolve data. --------------------------------expect = True num_expect = 2, num_collapse = 2, ntraj = 500 The first line tells us that this data object was generated from the Monte Carlo solver mcsolve (discussed in Monte Carlo Solver). The next line (not the --- line of course) indicates that this object contains expectation value data. Finally, the last line gives the number of expectation value and collapse operators used in the simulation, along with the number of Monte Carlo trajectories run. Note that the number of trajectories ntraj is only displayed when using the Monte Carlo solver. Now we have all the information needed to analyze the simulation results. To access the data for the two expectation values one can do: >>> expt0 = result.expect[0] >>> expt1 = result.expect[1] Recall that Python uses C-style indexing that begins with zero (i.e., [0] => 1st collapse operator data). Together with the array of times at which these expectation values are calculated: >>> times = result.times ## we can plot the resulting expectation values: >>> plot(times, expt0, times, expt1) >>> show() State vectors, or density matrices, as well as col_times and col_which, are accessed in a similar manner, although typically one does not need an index (i.e [0]) since there is only one list for each of these components. The one exception to this rule is if you choose to output state vectors from the Monte Carlo solver, in which case there are ntraj number of state vector arrays. 37 The main advantage in using the Result class as a data storage object comes from the simplicity in which simulation data can be stored and later retrieved. 
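Pulling the preceding discussion together, here is a small self-contained sketch of generating a Result with the Monte Carlo solver and reading out its attributes. It uses a much simpler system than the cavity+qubit example referred to above, and the Hamiltonian, decay rate, and trajectory count are illustrative choices only.

import numpy as np
from qutip import basis, sigmax, sigmaz, sigmam, mcsolve

H = 2 * np.pi * 0.1 * sigmax()            # driven qubit (illustrative parameters)
psi0 = basis(2, 0)                        # start in the up state
times = np.linspace(0.0, 10.0, 100)

# one collapse operator (qubit relaxation) and one expectation operator (sigma-z)
result = mcsolve(H, psi0, times, [np.sqrt(0.05) * sigmam()], [sigmaz()], ntraj=100)

print(result.solver)                              # 'mcsolve'
print(result.num_expect, result.num_collapse)     # 1 1

sz = result.expect[0]                     # trajectory-averaged sigma-z values
t = result.times
# sz and t can now be plotted, and the whole object written to disk with
# qsave(result, 'some-file'), as described below.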
The qutip.fileio.qsave and qutip.fileio.qload functions are designed for this task. To begin, let us save the data object from the previous section into a file called cavity+qubit-data in the current working directory by calling: >>> qsave(result, 'cavity+qubit-data') All of the data results are then stored in a single file of the same name with a .qu extension. Therefore, everything needed to later this data is stored in a single file. Loading the file is just as easy as saving: Result object with mcsolve data. --------------------------------expect = True num_expect = 2, num_collapse = 2, ntraj = 500 where stored_result is the new name of the Result object. We can then extract the data and plot in the same manner as before: expt0 = stored_result.expect[0] expt1 = stored_result.expect[1] times = stored_result.times plot(times, expt0, times, expt1) show() Also see Saving QuTiP Objects and Data Sets for more information on saving quantum objects, as well as arrays for use in other programs. Unitary evolution The dynamics of a closed (pure) quantum system is governed by the Schrdinger equation = , (3.1) ## the Hamiltonian, and is Plancks constant. In general, the Schrdinger equation where is the wave function, are functions of space and time. For computational is a partial differential equation (PDE) where both and purposes it is useful to expand the PDE in a set of basis functions that span the Hilbert space of the Hamiltonian, and to write the equation in matrix and vector form | = | where | is the state vector and is the matrix representation of the Hamiltonian. This matrix equation can, in principle, be solved by diagonalizing the Hamiltonian matrix . In practice, however, it is difficult to perform this diagonalization unless the size of the Hilbert space (dimension of the matrix ) is small. Analytically, it is a formidable task to calculate the dynamics for systems with more than two states. If, in addition, we consider dissipation due to the inevitable interaction with a surrounding environment, the computational complexity grows even larger, and we have to resort to numerical calculations in all realistic situations. This illustrates the importance of numerical calculations in describing the dynamics of open quantum systems, and the need for efficient and The Schrdinger equation, which governs the time-evolution of closed quantum systems, is defined by its Hamiltonian and state vector. In the previous section, Using Tensor Products and Partial Traces, we showed how Hamiltonians and state vectors are constructed in QuTiP. Given a Hamiltonian, we can calculate the unitary (nondissipative) time-evolution of an arbitrary state vector |0 (psi0) using the QuTiP function qutip.mesolve. It evolves the state vector and evaluates the expectation values for a set of operators expt_ops at the points in time in the list times, using an ordinary differential equation solver. Alternatively, we can use the function qutip.essolve, which uses the exponential-series technique to calculate the time evolution of a system. The 38 qutip.mesolve and qutip.essolve functions take the same arguments and it is therefore easy switch between the two solvers. 
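Since the two solvers share a call signature, switching between them amounts to changing the function name. Below is a minimal sketch using an assumed toy qubit; the Hamiltonian, initial state, and operators are illustrative choices, not taken from the text.

```python
# Sketch: identical argument lists for the ODE-based solver and the
# exponential-series solver (toy qubit, assumed parameters).
import numpy as np
from qutip import basis, sigmax, sigmaz, mesolve, essolve

H = 2 * np.pi * 0.5 * sigmaz()
psi0 = (basis(2, 0) + basis(2, 1)).unit()
times = np.linspace(0.0, 10.0, 100)

result_ode = mesolve(H, psi0, times, [], [sigmax()])  # ODE integration
result_es = essolve(H, psi0, times, [], [sigmax()])   # exponential-series technique

# Note: depending on the QuTiP version, essolve may return the expectation
# values directly rather than a Result object, so inspect its return value
# before indexing into it.
```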
For example, the time evolution of a quantum spin-1/2 system with tunneling rate 0.1 that initially is in the up state is calculated, and the expectation values of the operator evaluated, with the following code In [1]: H = 2 * np.pi * 0.1 * sigmax() In [2]: psi0 = basis(2, 0) In [3]: times = np.linspace(0.0, 10.0, 20.0) In [4]: result = mesolve(H, psi0, times, [], [sigmaz()]) The brackets in the fourth argument is an empty list of collapse operators, since we consider unitary evolution in this example. See the next section for examples on how dissipation is included by defining a list of collapse operators. The function returns an instance of qutip.solver.Result, as described in the previous section Dynamics Simulation Results. The attribute expect in result is a list of expectation values for the operators that are included in the list in the fifth argument. Adding operators to this list results in a larger output list returned by the function (one array of numbers, corresponding to the times in times, for each operator) In [5]: result = mesolve(H, psi0, times, [], [sigmaz(), sigmay()]) In [6]: result.expect Out[6]: [array([ 1. , 0.78914057, 0.24548559, -0.40169513, -0.8794735 , -0.98636142, -0.67728219, -0.08258023, 0.54694721, 0.94581685, 0.94581769, 0.54694945, -0.08257765, -0.67728015, -0.98636097, -0.87947476, -0.40169736, 0.24548326, 0.78913896, 1. ]), array([ 0.00000000e+00, -6.14212640e-01, -9.69400240e-01, -9.15773457e-01, -4.75947849e-01, 1.64593874e-01, 7.35723339e-01, 9.96584419e-01, 8.37167094e-01, 3.24700624e-01, -3.24698160e-01, -8.37165632e-01, -9.96584633e-01, -7.35725221e-01, -1.64596567e-01, 4.75945525e-01, 9.15772479e-01, 9.69400830e-01, 6.14214701e-01, 2.77159958e-06])] The resulting list of expectation values can easily be visualized using matplotlibs plotting functions: In [7]: H = 2 * np.pi * 0.1 * sigmax() In [8]: psi0 = basis(2, 0) In [9]: times = np.linspace(0.0, 10.0, 100) In [10]: result = mesolve(H, psi0, times, [], [sigmaz(), sigmay()]) In [11]: fig, ax = subplots() In [12]: ax.plot(result.times, result.expect[0]); In [13]: ax.plot(result.times, result.expect[1]); In [14]: ax.set_xlabel('Time'); In [15]: ax.set_ylabel('Expectation values'); In [16]: ax.legend(("Sigma-Z", "Sigma-Y")); In [17]: show() 39 If an empty list of operators is passed as fifth parameter, the qutip.mesolve function returns a qutip.solver.Result instance that contains a list of state vectors for the times specified in times In [18]: times = [0.0, 1.0] In [19]: result = mesolve(H, psi0, times, [], []) In [20]: result.states Out[20]: [Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 1.] [ 0.]], Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.80901699+0.j ] [ 0.00000000-0.58778526j]]] Non-unitary evolution While the evolution of the state vector in a closed quantum system is deterministic, open quantum systems are stochastic in nature. The effect of an environment on the system of interest is to induce stochastic transitions between energy levels, and to introduce uncertainty in the phase difference between states of the system. The state of an open quantum system is therefore described in terms of ensemble averaged states using the density matrix formalism. A density matrix describes a probability distribution of quantum states | , in a matrix representation = | |, where is the classical probability that the system is in the quantum state | . 
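In standard notation, the mixture described above is rho = sum_k p_k |psi_k><psi_k| with sum_k p_k = 1. A minimal sketch of constructing such a state in QuTiP, with made-up probabilities:

```python
# Sketch: a qubit that is |0> with probability 0.75 and |1> with
# probability 0.25 (probabilities chosen only for illustration).
from qutip import basis, ket2dm

rho = 0.75 * ket2dm(basis(2, 0)) + 0.25 * ket2dm(basis(2, 1))

print(rho)       # diagonal density matrix with entries 0.75 and 0.25
print(rho.tr())  # the classical probabilities sum to one, so the trace is 1.0
```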
The time evolution of a density matrix is the topic of the remaining portions of this section. The standard approach for deriving the equations of motion for a system interacting with its environment is to expand the scope of the system to include the environment. The combined quantum system is then closed, and its evolution is governed by the von Neumann equation ## tot () = [tot , tot ()], (3.2) 40 the equivalent of the Schrdinger equation (3.1) in the density matrix formalism. Here, the total Hamiltonian tot = sys + env + int , includes the original system Hamiltonian sys , the Hamiltonian for the environment env , and a term representing the interaction between the system and its environment int . Since we are only interested in the dynamics of the system, we can at this point perform a partial trace over the environmental degrees of freedom in Eq. (3.2), and thereby obtain a master equation for the motion of the original system density matrix. The most general tracepreserving and completely positive form of this evolution is the Lindblad master equation for the reduced density matrix = Trenv [tot ] 1 [ ] 2 ()+ ()+ + () () = [(), ()] + (3.3) where the = are collapse operators, and are the operators through which the environment couples to the system in int , and are the corresponding rates. The derivation of Eq. (3.3) may be found in several sources, and will not be reproduced here. Instead, we emphasize the approximations that are required to arrive at the master equation in the form of Eq. (3.3) from physical arguments, and hence perform a calculation in QuTiP: Separability: At = 0 there are no correlations between the system and its environment such that the total density matrix can be written as a tensor product tot (0) = (0) env (0). Born approximation: Requires: (1) that the state of the environment does not significantly change as a result of the interaction with the system; (2) The system and the environment remain separable throughout the evolution. These assumptions are justified if the interaction is weak, and if the environment is much larger than the system. In summary, tot () () env . Markov approximation The time-scale of decay for the environment env is much shorter than the smallest time-scale of the system dynamics sys env . This approximation is often deemed a short-memory environment as it requires that environmental correlation functions decay on a time-scale fast compared to those of the system. Secular approximation Stipulates that elements in the master equation corresponding to transition frequencies satisfy | | 1/sys , i.e., all fast rotating terms in the interaction picture can be neglected. It also ignores terms that lead to a small renormalization of the system energy levels. This approximation is not strictly necessary for all master-equation formalisms (e.g., the Block-Redfield master equation), but it is required for arriving at the Lindblad form (3.3) which is used in qutip.mesolve. For systems with environments satisfying the conditions outlined above, the Lindblad master equation (3.3) governs the time-evolution of the system density matrix, giving an ensemble average of the system dynamics. In order to ensure that these approximations are not violated, it is important that the decay rates be smaller than the minimum energy splitting in the system Hamiltonian. Situations that demand special attention therefore include, for example, systems strongly coupled to their environment, and systems with degenerate or nearly degenerate energy levels. 
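For reference, the Lindblad master equation of Eq. (3.3) can be written compactly as follows; this is the standard Lindblad form, stated here with the collapse operators C_n = sqrt(gamma_n) A_n defined in the text.

```latex
\dot{\rho}(t) = -\frac{i}{\hbar}\bigl[H(t),\rho(t)\bigr]
  + \sum_n \frac{1}{2}\Bigl[ 2\,C_n\,\rho(t)\,C_n^{\dagger}
  - \rho(t)\,C_n^{\dagger}C_n - C_n^{\dagger}C_n\,\rho(t) \Bigr],
\qquad C_n = \sqrt{\gamma_n}\,A_n .
```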
For non-unitary evolution of a quantum systems, i.e., evolution that includes incoherent processes such as relaxation and dephasing, it is common to use master equations. In QuTiP, the same function (qutip.mesolve) is used for evolution both according to the Schrdinger equation and to the master equation, even though these two equations of motion are very different. The qutip.mesolve function automatically determines if it is sufficient to use the Schrdinger equation (if no collapse operators were given) or if it has to use the master equation (if collapse operators were given). Note that to calculate the time evolution according to the Schrdinger equation is easier and much faster (for large systems) than using the master equation, so if possible the solver will fall back on using the Schrdinger equation. What is new in the master equation compared to the Schrdinger equation are processes that describe dissipation in the quantum system due to its interaction with an environment. These environmental interactions are defined by the operators through which the system couples to the environment, and rates that describe the strength of the processes. In QuTiP, the product of the square root of the rate and the operator that describe the dissipation process is called a collapse operator. A list of collapse operators (c_ops) is passed as the fourth argument to the qutip.mesolve function in order to define the dissipation processes in the master equation. When the c_ops isnt empty, the qutip.mesolve function will use the master equation instead of the unitary Schrdinger equation. 41 Using the example with the spin dynamics from the previous section, we can easily add a relaxation process (describing the dissipation of energy from the spin to its environment), by adding np.sqrt(0.05) * sigmax() to the previously empty list in the fourth parameter to the qutip.mesolve function: In [21]: times = np.linspace(0.0, 10.0, 100) In [22]: result = mesolve(H, psi0, times, [np.sqrt(0.05) * sigmax()], [sigmaz(), sigmay()]) In [23]: fig, ax = subplots() In [24]: ax.plot(times, result.expect[0]); In [25]: ax.plot(times, result.expect[1]); In [26]: ax.set_xlabel('Time'); In [27]: ax.set_ylabel('Expectation values'); In [28]: ax.legend(("Sigma-Z", "Sigma-Y")); In [29]: show(fig) Here, 0.05 is the rate and the operator (qutip.operators.sigmax) describes the dissipation process. Now a slightly more complex example: Consider a two-level atom coupled to a leaky single-mode cavity through a dipole-type interaction, which supports a coherent exchange of quanta between the two systems. 
If the atom initially is in its groundstate and the cavity in a 5-photon Fock state, the dynamics is calculated with the lines following code In [30]: times = np.linspace(0.0, 10.0, 200) In [31]: psi0 = tensor(fock(2,0), fock(10, 5)) In [32]: a = tensor(qeye(2), destroy(10)) ## In [33]: sm = tensor(destroy(2), qeye(10)) In [34]: H = 2 * np.pi * a.dag() * a + 2 * np.pi * sm.dag() * sm + \ ....: 2 * np.pi * 0.25 * (sm * a.dag() + sm.dag() * a) 42 ....: In [35]: result = mesolve(H, psi0, times, [np.sqrt(0.1)*a], [a.dag()*a, sm.dag()*sm]) In [36]: figure() Out[36]: <matplotlib.figure.Figure at 0x107ea3d50> In [37]: plot(times, result.expect[0]) Out[37]: [<matplotlib.lines.Line2D at 0x10d300450>] In [38]: plot(times, result.expect[1]) Out[38]: [<matplotlib.lines.Line2D at 0x10d300c90>] In [39]: xlabel('Time') Out[39]: <matplotlib.text.Text at 0x10d28d9d0> In [40]: ylabel('Expectation values') Out[40]: <matplotlib.text.Text at 0x10d2a16d0> In [41]: legend(("cavity photon number", "atom excitation probability")) Out[41]: <matplotlib.legend.Legend at 0x10d300b10> In [42]: show() ## Monte Carlo Solver Introduction Where as the density matrix formalism describes the ensemble average over many identical realizations of a quantum system, the Monte Carlo (MC), or quantum-jump approach to wave function evolution, allows for simulating an individual realization of the system dynamics. Here, the environment is continuously monitored, resulting in a series of quantum jumps in the system wave function, conditioned on the increase in information gained about the state of the system via the environmental measurements. In general, this evolution is governed by the Schrdinger equation with a non-Hermitian effective Hamiltonian eff = sys + , 2 (3.4) 43 where again, the are collapse operators, each corresponding to a separate irreversible process with rate . Here, the strictly negative non-Hermitian portion of Eq. (3.4) gives rise to a reduction in the norm of the wave function, that to first-order in a small time , is given by ( + )|( + ) = 1 where = ()|+ |() , (3.5) and is such that 1. With a probability of remaining in the state |( + ) given by 1 , the corresponding quantum jump probability is thus Eq. (3.5). If the environmental measurements register a quantum jump, say via the emission of a photon into the environment, or a change in the spin of a quantum dot, the wave function undergoes a jump into a state defined by projecting |() using the collapse operator corresponding to the measurement 1/2 |( + ) = |() / ()|+ |() . (3.6) If more than a single collapse operator is present in Eq. (3.4), the probability of collapse due to the th-operator is given by (3.7) () = ()|+ |() /. Evaluating the MC evolution to first-order in time is quite tedious. Instead, QuTiP uses the following algorithm to simulate a single realization of a quantum system. Starting from a pure state |(0): I: Choose a random number between zero and one, representing the probability that a quantum jump occurs. II: Integrate the Schrdinger equation, using the effective Hamiltonian (3.4) until a time such that the norm of the wave function satisfies ( ) |( ) = , at which point a jump occurs. III: The resultant jump projects the system at time into one of the renormalized states given by Eq. (3.6). The corresponding collapse operator is chosen such that is the smallest integer satisfying: ( ) (3.8) =1 where the individual are given by Eq. (3.7). Note that the left hand side of Eq. (3.8) is, by definition, normalized to unity. 
IV: Using the renormalized state from step III as the new initial condition at time , draw a new random number, and repeat the above procedure until the final simulation time is reached. Monte Carlo in QuTiP In QuTiP, Monte Carlo evolution is implemented with the qutip.mcsolve function. It takes nearly the same arguments as the qutip.mesolve function for master-equation evolution, except that the initial state must be a ket vector, as oppose to a density matrix, and there is an optional keyword parameter ntraj that defines the number of stochastic trajectories to be simulated. By default, ntraj=500 indicating that 500 Monte Carlo trajectories will be performed. To illustrate the use of the Monte Carlo evolution of quantum systems in QuTiP, lets again consider the case of a two-level atom coupled to a leaky cavity. The only differences to the master-equation treatment is that in this case we invoke the qutip.mcsolve function instead of qutip.mesolve In [1]: times = np.linspace(0.0, 10.0, 200) In [2]: psi0 = tensor(fock(2, 0), fock(10, 5)) In [3]: a = tensor(qeye(2), destroy(10)) ## In [5]: H = 2 * np.pi * a.dag() * a + 2 * np.pi * sm.dag() * sm + 2 * np.pi * 0.25 * (sm * a.dag() 44 In [6]: data = mcsolve(H, psi0, times, [np.sqrt(0.1) * a], [a.dag() * a, sm.dag() * sm]) 10.0%. Run time: 1.27s. Est. time left: 00:00:00:11 20.0%. Run time: 2.44s. Est. time left: 00:00:00:09 30.0%. Run time: 3.64s. Est. time left: 00:00:00:08 40.0%. Run time: 4.82s. Est. time left: 00:00:00:07 50.0%. Run time: 5.98s. Est. time left: 00:00:00:05 60.0%. Run time: 7.13s. Est. time left: 00:00:00:04 70.0%. Run time: 8.31s. Est. time left: 00:00:00:03 80.0%. Run time: 9.44s. Est. time left: 00:00:00:02 90.0%. Run time: 10.60s. Est. time left: 00:00:00:01 100.0%. Run time: 11.74s. Est. time left: 00:00:00:00 Total run time: 11.76s In [7]: figure() Out[7]: <matplotlib.figure.Figure at 0x10b2fa810> In [8]: plot(times, data.expect[0], times, data.expect[1]) Out[8]: [<matplotlib.lines.Line2D at 0x107d97a90>, <matplotlib.lines.Line2D at 0x107b5e9d0>] In [9]: title('Monte Carlo time evolution') Out[9]: <matplotlib.text.Text at 0x1079c9cd0> In [10]: xlabel('Time') Out[10]: <matplotlib.text.Text at 0x107d51a50> In [11]: ylabel('Expectation values') Out[11]: <matplotlib.text.Text at 0x10b5fda90> In [12]: legend(("cavity photon number", "atom excitation probability")) Out[12]: <matplotlib.legend.Legend at 0x107cd67d0> In [13]: show() The advantage of the Monte Carlo method over the master equation approach is that only the state vector is required to be kept in the computers memory, as opposed to the entire density matrix. For large quantum system this becomes a significant advantage, and the Monte Carlo solver is therefore generally recommended for such 45 systems. For example, simulating a Heisenberg spin-chain consisting of 10 spins with random parameters and initial states takes almost 7 times longer using the master equation rather than Monte Carlo approach with the default number of trajectories running on a quad-CPU machine. Furthermore, it takes about 7 times the memory as well. However, for small systems, the added overhead of averaging a large number of stochastic trajectories to obtain the open system dynamics, as well as starting the multiprocessing functionality, outweighs the benefit of the minor (in this case) memory saving. Master equation methods are therefore generally more efficient when Hilbert space sizes are on the order of a couple of hundred states or smaller. 
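The agreement between the two approaches can be checked directly by running both solvers on the cavity-plus-atom system used above. This is a sketch: the system and the default ntraj=500 come from the text, while the root-mean-square comparison is an illustrative choice, and the 1/sqrt(ntraj) scaling is the usual Monte Carlo estimate rather than a guaranteed bound.

```python
# Sketch: the trajectory average from mcsolve should approach the mesolve
# result as the number of trajectories grows.
import numpy as np
from qutip import tensor, fock, qeye, destroy, mesolve, mcsolve

times = np.linspace(0.0, 10.0, 200)
psi0 = tensor(fock(2, 0), fock(10, 5))
a = tensor(qeye(2), destroy(10))
sm = tensor(destroy(2), qeye(10))
H = (2 * np.pi * a.dag() * a + 2 * np.pi * sm.dag() * sm
     + 2 * np.pi * 0.25 * (sm * a.dag() + sm.dag() * a))
c_ops = [np.sqrt(0.1) * a]
e_ops = [a.dag() * a, sm.dag() * sm]

me = mesolve(H, psi0, times, c_ops, e_ops)
mc = mcsolve(H, psi0, times, c_ops, e_ops)  # default ntraj=500

# Root-mean-square deviation between the two photon-number curves; Monte Carlo
# error is expected to shrink roughly as 1/sqrt(ntraj).
rms = np.sqrt(np.mean((me.expect[0] - mc.expect[0]) ** 2))
print(rms)
```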
Like the master equation solver qutip.mesolve, the Monte Carlo solver returns a qutip.solver.Result object consisting of expectation values, if the user has defined expectation value operators in the 5th argument to mcsolve, or state vectors if no expectation value operators are given. If state vectors are returned, then the qutip.solver.Result returned by qutip.mcsolve will be an array of length ntraj, with each element containing an array of ket-type qobjs with the same number of elements as times. Furthermore, the output qutip.solver.Result object will also contain a list of times at which collapse occurred, and which collapse operators did the collapse, in the col_times and col_which properties, respectively. Changing the Number of Trajectories As mentioned earlier, by default, the mcsolve function runs 500 trajectories. This value was chosen because it gives good accuracy, Monte Carlo errors scale as 1/ where is the number of trajectories, and simultaneously does not take an excessive amount of time to run. However, like many other options in QuTiP you are free to change the number of trajectories to fit your needs. If we want to run 1000 trajectories in the above example, we can simply modify the call to mcsolve like: In [14]: data = mcsolve(H, psi0, times, [np.sqrt(0.1) * a], [a.dag() * a, sm.dag() * sm], ntraj=10 10.0%. Run time: 2.57s. Est. time left: 00:00:00:23 20.0%. Run time: 4.90s. Est. time left: 00:00:00:19 30.0%. Run time: 7.20s. Est. time left: 00:00:00:16 40.0%. Run time: 9.68s. Est. time left: 00:00:00:14 50.0%. Run time: 11.98s. Est. time left: 00:00:00:11 60.0%. Run time: 14.43s. Est. time left: 00:00:00:09 70.0%. Run time: 16.91s. Est. time left: 00:00:00:07 80.0%. Run time: 19.25s. Est. time left: 00:00:00:04 90.0%. Run time: 21.87s. Est. time left: 00:00:00:02 100.0%. Run time: 24.21s. Est. time left: 00:00:00:00 Total run time: 24.27s where we have added the keyword argument ntraj=1000 at the end of the inputs. Now, the Monte Carlo solver will calculate expectation values for both operators, a.dag() * a, sm.dag() * sm averaging over 1000 trajectories. Sometimes one is also interested in seeing how the Monte Carlo trajectories converge to the master equation solution by calculating expectation values over a range of trajectory numbers. If, for example, we want to average over 1, 10, 100, and 1000 trajectories, then we can input this into the solver using: In [15]: ntraj = [1, 10, 100, 1000] Keep in mind that the input list must be in ascending order since the total number of trajectories run by mcsolve will be calculated using the last element of ntraj. In this case, we need to use an extra index when getting the expectation values from the qutip.solver.Result object returned by mcsolve. In the above example using: In [16]: data = mcsolve(H, psi0, times, [np.sqrt(0.1) * a], [a.dag() * a, sm.dag() * sm], ntraj=[1 10.0%. Run time: 2.47s. Est. time left: 00:00:00:22 20.0%. Run time: 4.83s. Est. time left: 00:00:00:19 30.0%. Run time: 7.15s. Est. time left: 00:00:00:16 40.0%. Run time: 9.51s. Est. time left: 00:00:00:14 50.0%. Run time: 11.84s. Est. time left: 00:00:00:11 60.0%. Run time: 14.18s. Est. time left: 00:00:00:09 70.0%. Run time: 16.65s. Est. time left: 00:00:00:07 80.0%. Run time: 18.97s. Est. time left: 00:00:00:04 90.0%. Run time: 21.36s. Est. time left: 00:00:00:02 46 ## 100.0%. Run time: 23.81s. Est. 
time left: 00:00:00:00 Total run time: 23.88s ## we can extract the relevant expectation values using: In [17]: expt10 = data.expect[1] # <- expectation ## values avg. over 100 trajectories The Monte Carlo solver also has many available options that can be set using the qutip.solver.Options class as discussed in Setting Options for the Dynamics Solvers. Reusing Hamiltonian Data Note: This section covers a specialized topic and may be skipped if you are new to QuTiP. In order to solve a given simulation as fast as possible, the solvers in QuTiP take the given input operators and break them down into simpler components before passing them on to the ODE solvers. Although these operations are reasonably fast, the time spent organizing data can become appreciable when repeatedly solving a system over, for example, many different initial conditions. In cases such as this, the Hamiltonian and other operators may be reused after the initial configuration, thus speeding up calculations. Note that, unless you are planning to reuse the data many times, this functionality will not be very useful. To turn on the reuse functionality we must set the rhs_reuse=True flag in the qutip.solver.Options: In [20]: options = Options(rhs_reuse=True) A full account of this feature is given in Setting Options for the Dynamics Solvers. Using the previous example, we will calculate the dynamics for two different initial states, with the Hamiltonian data being reused on the second call In [21]: psi0 = tensor(fock(2, 0), fock(10, 5)) In [22]: a = tensor(qeye(2), destroy(10)) ## In [23]: sm = tensor(destroy(2), qeye(10)) In [24]: H = 2 * np.pi * a.dag() * a + 2 * np.pi * sm.dag() * sm + \ ....: 2 * np.pi * 0.25 * (sm * a.dag() + sm.dag() * a) ....: In [25]: data1 = mcsolve(H, psi0, times, [np.sqrt(0.1) * a], [a.dag() * a, sm.dag() * sm]) 10.0%. Run time: 1.20s. Est. time left: 00:00:00:10 20.0%. Run time: 2.49s. Est. time left: 00:00:00:09 30.0%. Run time: 3.78s. Est. time left: 00:00:00:08 40.0%. Run time: 4.97s. Est. time left: 00:00:00:07 50.0%. Run time: 6.06s. Est. time left: 00:00:00:06 60.0%. Run time: 7.25s. Est. time left: 00:00:00:04 70.0%. Run time: 8.36s. Est. time left: 00:00:00:03 80.0%. Run time: 9.55s. Est. time left: 00:00:00:02 90.0%. Run time: 10.73s. Est. time left: 00:00:00:01 100.0%. Run time: 11.89s. Est. time left: 00:00:00:00 Total run time: 11.95s In [26]: psi1 = tensor(fock(2, 0), coherent(10, 2 - 1j)) In [27]: opts = Options(rhs_reuse=True) # Run a second time, reusing RHS In [28]: data2 = mcsolve(H, psi1, times, [np.sqrt(0.1) * a], [a.dag() * a, sm.dag() * sm], options 47 ## 10.0%. Run time: 2.30s. Est. time left: 00:00:00:20 20.0%. Run time: 4.70s. Est. time left: 00:00:00:18 30.0%. Run time: 7.00s. Est. time left: 00:00:00:16 40.0%. Run time: 9.57s. Est. time left: 00:00:00:14 50.0%. Run time: 12.10s. Est. time left: 00:00:00:12 60.0%. Run time: 14.45s. Est. time left: 00:00:00:09 70.0%. Run time: 16.79s. Est. time left: 00:00:00:07 80.0%. Run time: 18.99s. Est. time left: 00:00:00:04 90.0%. Run time: 21.26s. Est. time left: 00:00:00:02 100.0%. Run time: 23.43s. Est. 
time left: 00:00:00:00 Total run time: 23.47s In [29]: figure() Out[29]: <matplotlib.figure.Figure at 0x107d48210> In [30]: plot(times, data1.expect[0], times, data1.expect[1], lw=2) Out[30]: [<matplotlib.lines.Line2D at 0x10ab737d0>, <matplotlib.lines.Line2D at 0x10ab73a50>] In [31]: plot(times, data2.expect[0], '--', times, data2.expect[1], '--', lw=2) Out[31]: [<matplotlib.lines.Line2D at 0x1096d45d0>, <matplotlib.lines.Line2D at 0x1096d47d0>] In [32]: title('Monte Carlo time evolution') Out[32]: <matplotlib.text.Text at 0x107db4790> In [33]: xlabel('Time', fontsize=14) Out[33]: <matplotlib.text.Text at 0x1078fcd10> In [34]: ylabel('Expectation values', fontsize=14) Out[34]: <matplotlib.text.Text at 0x107df7150> In [35]: legend(("cavity photon number", "atom excitation probability")) Out[35]: <matplotlib.legend.Legend at 0x107a31b90> In [36]: show() 48 In addition to the initial state, one may reuse the Hamiltonian data when changing the number of trajectories ntraj or simulation times times. The reusing of Hamiltonian data is also supported for time-dependent Hamiltonians. See Solving Problems with Time-dependent Hamiltonians for further details. Fortran Based Monte Carlo Solver Note: In order to use the Fortran Monte Carlo solver, you must have the blas development libraries, and installed QuTiP using the flag: --with-f90mc. In performing time-independent Monte Carlo simulations with QuTiP, systems with small Hilbert spaces suffer from poor performance as the ODE solver must exit the ODE solver at each time step and check for the state vector norm. To correct this, QuTiP now includes an optional Fortran based Monte Carlo solver that has enhanced performance for systems with small Hilbert space dimensionality. Using the Fortran based solver is extremely simple; one just needs to replace mcsolve with mcsolve_f90. For example, from our previous demonstration In [37]: data1 = mcsolve_f90(H, psi0, times, [np.sqrt(0.1) * a], [a.dag() * a, sm.dag() * sm]) In using the Fortran solver, there are a few limitations that must be kept in mind. First, this solver only works for time-independent systems. Second, you can not pass a list of trajectories to ntraj. ## Bloch-Redfield master equation Introduction The Lindblad master equation introduced earlier is constructed so that it describes a physical evolution of the density matrix (i.e., trace and positivity preserving), but it does not provide a connection to any underlaying microscopic physical model. The Lindblad operators (collapse operators) describe phenomenological processes, such as for example dephasing and spin flips, and the rates of these processes are arbitrary parameters in the model. In many situations the collapse operators and their corresponding rates have clear physical interpretation, such as dephasing and relaxation rates, and in those cases the Lindblad master equation is usually the method of choice. However, in some cases, for example systems with varying energy biases and eigenstates and that couple to an environment in some well-defined manner (through a physically motivated system-environment interaction operator), it is often desirable to derive the master equation from more fundamental physical principles, and relate it to for example the noise-power spectrum of the environment. The Bloch-Redfield formalism is one such approach to derive a master equation from a microscopic system. 
It starts from a combined system-environment perspective, and derives a perturbative master equation for the system 49 alone, under the assumption of weak system-environment coupling. One advantage of this approach is that the dissipation processes and rates are obtained directly from the properties of the environment. On the downside, it does not intrinsically guarantee that the resulting master equation unconditionally preserves the physical properties of the density matrix (because it is a perturbative method). The Bloch-Redfield master equation must therefore be used with care, and the assumptions made in the derivation must be honored. (The Lindblad master equation is in a sense more robust it always results in a physical density matrix although some collapse operators might not be physically justified). For a full derivation of the Bloch Redfield master equation, see e.g. [Coh92] or [Bre02]. Here we present only a brief version of the derivation, with the intention of introducing the notation and how it relates to the implementation in QuTiP. Brief Derivation and Definitions The starting point of the Bloch-Redfield formalism is the total Hamiltonian for the system and the environment (bath): = S + B + I , where is the total system+bath Hamiltonian, S and B are the system and bath Hamiltonians, respectively, and I is the interaction Hamiltonian. The most general form of a master equation for the system dynamics is obtained by tracing out the bath from the von-Neumann equation of motion for the combined system ( = 1 [, ]). In the interaction picture the result is 2 (3.9) Tr [ (), [ ( ), ( ) ]], () = 0 where the additional assumption that the total system-bath density matrix can be factorized as () () . This assumption is known as the Born approximation, and it implies that there never is any entanglement between the system and the bath, neither in the initial state nor at any time during the evolution. It is justified for weak system-bath interaction. The master equation (3.9) is non-Markovian, i.e., the change in the density matrix at a time depends on states at all times < , making it intractable to solve both theoretically and numerically. To make progress towards a manageable master equation, we now introduce the Markovian approximation, in which () is replaced by () in Eq. (3.9). The result is the Redfield equation 2 (3.10) () = Tr [ (), [ ( ), () ]], 0 which is local in time with respect the density matrix, but still not Markovian since it contains an implicit dependence on the initial state. By extending the integration to infinity and substituting , a fully Markovian master equation is obtained: 2 (3.11) () = Tr [ (), [ ( ), () ]]. 0 The two Markovian approximations introduced above are valid if the time-scale with which the system dynamics changes is large compared to the time-scale with which correlations in the bath decays (corresponding to a shortmemory bath, which results in Markovian system dynamics). The master equation (3.11) is still on a too general form to be suitable for numerical implementation. We therefore assume that the system-bath interaction takes the form = and where are system operators and are bath operators. This allows us to write master equation in terms of system operators and bath correlation functions: 2 () = { ( ) [ () ( ) () ( ) () ()] ( ) [ () ( ) () () () ( )]} , where ( ) = Tr [ () ( ) ] = ( ) (0), since the bath state is a steady state. 
In the eigenbasis of the system Hamiltonian, where () = , = and are the eigenfrequencies corresponding the eigenstate |, we obtain in matrix form in the Schrdinger picture { [ ] sec () = () ( ) , , 0 [ ]} + ( ) (), 50 where the sec above the summation symbol indicate summation of the secular terms which satisfy | | decay . This is an almost-useful form of the master equation. The final step before arriving at the form of the BlochRedfield master equation that is implemented in QuTiP, involves rewriting the bath correlation function ( ) in terms of the noise-power spectrum of the environment () = ( ): ( ) = 1 () + (), 2 (3.12) where () is an energy shift that is neglected here. The final form of the Bloch-Redfield master equation is sec () = () + (), (3.13) where ( ) ( ) 2 , } ( ) ( ) , ## is the Bloch-Redfield tensor. The Bloch-Redfield master equation in the form Eq. (3.13) is suitable for numerical implementation. The input parameters are the system Hamiltonian , the system operators through which the environment couples to the system , and the noise-power spectrum () associated with each system-environment interaction term. To simplify the numerical implementation we assume that are Hermitian and that cross-correlations between different environment operators vanish, so that the final expression for the Bloch-Redfield tensor that is implemented in QuTiP is { = ( ) ( ) 2 + ( ) ( ) . ## Bloch-Redfield master equation in QuTiP In QuTiP, the Bloch-Redfield tensor Eq. (3.5) can be calculated using the function qutip.bloch_redfield.bloch_redfield_tensor. It takes three mandatory arguments: The system Hamiltonian , a list of operators through which to the bath , and a list of corresponding spectral density functions (). The spectral density functions are callback functions that takes the (angular) frequency as a single argument. To illustrate how to calculate the Bloch-Redfield tensor, lets consider a two-level atom 1 1 = 0 2 2 (3.14) that couples to an Ohmic bath through the operator. The corresponding Bloch-Redfield tensor can be calculated in QuTiP using the following code In [1]: delta = 0.2 * 2*np.pi; eps0 = 1.0 * 2*np.pi; gamma1 = 0.5 In [2]: H = - delta/2.0 * sigmax() - eps0/2.0 * sigmaz() In [3]: def ohmic_spectrum(w): ...: if w == 0.0: # dephasing inducing noise ...: return gamma1 ...: else: # relaxation inducing noise ...: return gamma1 / 2 * (w / (2 * np.pi)) * (w > 0.0) ...: 51 ## In [4]: R, ekets = bloch_redfield_tensor(H, [sigmax()], [ohmic_spectrum]) In [5]: np.real(R.full()) Out[5]: array([[ 0. , 0. , 0. , 0.24514517], [ 0. , -0.16103412, 0. , 0. ], [ 0. , 0. , -0.16103412, 0. ], [ 0. , 0. , 0. , -0.24514517]]) ## For convenience, the function qutip.bloch_redfield.bloch_redfield_tensor also returns a list of eigenkets ekets, since they are calculated in the process of calculating the Bloch-Redfield tensor R, and the ekets are usually needed again later when transforming operators between the computational basis and the eigenbasis. The evolution of a wavefunction or density matrix, according to the Bloch-Redfield master equation (3.13), can be calculated using the QuTiP function qutip.bloch_redfield.bloch_redfield_solve. It takes five mandatory arguments: the Bloch-Redfield tensor R, the list of eigenkets ekets, the initial state psi0 (as a ket or density matrix), a list of times tlist for which to evaluate the expectation values, and a list of operators e_ops for which to evaluate the expectation values at each time step defined by tlist. 
For example, to evaluate the expectation values of the , , and operators for the example above, we can use the following code: In [6]: import matplotlib.pyplot as plt In [7]: tlist = np.linspace(0, 15.0, 1000) In [8]: psi0 = rand_ket(2) In [9]: e_ops = [sigmax(), sigmay(), sigmaz()] In [10]: expt_list = bloch_redfield_solve(R, ekets, psi0, tlist, e_ops) In [11]: sphere = Bloch() In [13]: sphere.vector_color = ['r'] In [14]: sphere.add_vectors(np.array([delta, 0, eps0]) / np.sqrt(delta ** 2 + eps0 ** 2)) In [15]: sphere.make_sphere() In [16]: plt.show() 52 The two steps of calculating the Bloch-Redfield tensor and evolve the corresponding master equation can be combined into one by using the function qutip.bloch_redfield.brmesolve, which takes same arguments as qutip.mesolve and qutip.mcsolve, expect for the additional list of spectral callback functions. In [17]: output = brmesolve(H, psi0, tlist, [sigmax()], e_ops, [ohmic_spectrum]) ## Solving Problems with Time-dependent Hamiltonians Methods for Writing Time-Dependent Operators In the previous examples of quantum evolution, we assumed that the systems under consideration were described by time-independent Hamiltonians. However, many systems have explicit time dependence in either the Hamiltonian, or the collapse operators describing coupling to the environment, and sometimes both components might depend on time. The two main evolution solvers in QuTiP, qutip.mesolve and qutip.mcsolve, discussed in Lindblad Master Equation Solver and Monte Carlo Solver respectively, are capable of handling time-dependent Hamiltonians and collapse terms. There are, in general, three different ways to implement time-dependent problems in QuTiP: 1. Function based: Hamiltonian / collapse operators expressed using [qobj, func] pairs, where the timedependent coefficients of the Hamiltonian (or collapse operators) are expressed in the Python functions. 2. String (Cython) based: The Hamiltonian and/or collapse operators are expressed as a list of [qobj, string] pairs, where the time-dependent coefficients are represented as strings. The resulting Hamiltonian is then compiled into C code using Cython and executed. 3. Hamiltonian function (outdated): The Hamiltonian is itself a Python function with time-dependence. Collapse operators must be time independent using this input format. Give the multiple choices of input style, the first question that arrises is which option to choose? In short, the function based method (option #1) is the most general, allowing for essentially arbitrary coefficients expressed via user defined functions. However, by automatically compiling your system into C code, the second option (string based) tends to be more efficient and will run faster. Of course, for small system sizes and evolution times, the difference will be minor. Although this method does not support all time-dependent coefficients that one can think of, it does support essentially all problems that one would typically encounter. Time-dependent coefficients 53 using any of the following functions, or combinations thereof (including constants) can be compiled directly into C-code: 'abs', 'acos', 'acosh', 'arg', 'asin', 'asinh', 'atan', 'atan2', 'atanh', 'conj', 'cos', 'cosh','exp', 'imag', 'log', 'pow', 'proj, 'real', 'sin', 'sinh', 'sqrt', 'tan', 'tanh' If you require mathematical functions other than those listed above, than it is possible to call any of the functions in the numpy math library using the prefix np. before the function name in the string, i.e np.sin(t). 
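As a small illustration, a string coefficient can mix the listed functions with np.-prefixed NumPy calls. In this sketch, H0, H1, and the 10.0 decay time are placeholders chosen for illustration, and Cython is required, as noted below for the string format.

```python
# Sketch: string-format time dependence combining a listed function (exp)
# with a NumPy call via the np. prefix (placeholder operators and parameters).
import numpy as np
from qutip import sigmax, sigmaz, basis, mesolve

H0 = 2 * np.pi * sigmaz()
H1 = 2 * np.pi * 0.1 * sigmax()
H = [H0, [H1, 'np.sin(t) * exp(-t / 10.0)']]

psi0 = basis(2, 0)
times = np.linspace(0.0, 20.0, 200)
output = mesolve(H, psi0, times, [], [sigmaz()])
```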
The available functions can be found using In [1]: import numpy as np In [2]: np.array(dir(np.math)[6:]) Out[2]: array(['asin', 'asinh', 'atan', 'atan2', 'atanh', 'ceil', 'copysign', 'cos', 'cosh', 'degrees', 'e', 'erf', 'erfc', 'exp', 'expm1', 'fabs', 'factorial', 'floor', 'fmod', 'frexp', 'fsum', 'gamma', 'hypot', 'isinf', 'isnan', 'ldexp', 'lgamma', 'log', 'log10', 'log1p', 'modf', 'pi', 'pow', 'radians', 'sin', 'sinh', 'sqrt', 'tan', 'tanh', 'trunc'], dtype='|S9') Finally option #3, expressing the Hamiltonian as a Python function, is the original method for time dependence in QuTiP 1.x. However, this method is somewhat less efficient then the previously mentioned methods, and does not allow for time-dependent collapse operators. However, in contrast to options #1 and #2, this method can be used in implementing time-dependent Hamiltonians that cannot be expressed as a function of constant operators with time-dependent coefficients. A collection of examples demonstrating the simulation of time-dependent problems can be found on the tutorials web page. Function Based Time Dependence A very general way to write a time-dependent Hamiltonian or collapse operator is by using Python functions as the time-dependent coefficients. To accomplish this, we need to write a Python function that returns the timedependent coefficient. Additionally, we need to tell QuTiP that a given Hamiltonian or collapse operator should be associated with a given Python function. To do this, one needs to specify operator-function pairs in list format: [Op, py_coeff], where Op is a given Hamiltonian or collapse operator and py_coeff is the name of the Python function representing the coefficient. With this format, the form of the Hamiltonian for both mesolve and mcsolve is: >>> H = [H0, [H1, py_coeff1], [H2, py_coeff2], ...] where H0 is a time-independent Hamiltonian, while H1,H2, are time dependent. The same format can be used for collapse operators: >>> c_ops = [[C0, py_coeff0], C1, [C2, py_coeff2], ...] Here we have demonstrated that the ordering of time-dependent and time-independent terms does not matter. In addition, any or all of the collapse operators may be time dependent. Note: While, in general, you can arrange time-dependent and time-independent terms in any order you like, it is best to place all time-independent terms first. As an example, we will look at an example that has a time-dependent Hamiltonian of the] form = 0 [ 2 ()1 where () is the time-dependent driving strength given as () = exp (/) . 
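Written out, and consistent with the list format H = [H0, [H1, H1_coeff]] used in the code below (any sign conventions are absorbed into the definitions of H0 and H1), the Hamiltonian is of the form

```latex
H(t) = H_0 + \varepsilon(t)\,H_1,
\qquad \varepsilon(t) = A\,\exp\!\bigl[-(t/\sigma)^{2}\bigr],
```

with A = 9 and sigma = 5 in the example that follows.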
The follow code sets up the problem In [3]: ustate = basis(3, 0) In [4]: excited = basis(3, 1) 54 ## In [5]: ground = basis(3, 2) In [6]: N = 2 # Set where to truncate Fock state for cavity In [7]: sigma_ge = tensor(qeye(N), ground * excited.dag()) # |g><e| # |u><e| ## In [9]: a = tensor(destroy(N), qeye(3)) In [10]: ada = tensor(num(N), qeye(3)) In [11]: c_ops = [] ## In [12]: kappa = 1.5 # Cavity decay rate In [13]: c_ops.append(np.sqrt(kappa) * a) In [14]: gamma = 6 ## In [15]: c_ops.append(np.sqrt(5*gamma/9) * sigma_ue) # Use Rb branching ratio of 5/9 e->u In [16]: c_ops.append(np.sqrt(4*gamma/9) * sigma_ge) # 4/9 e->g In [17]: t = np.linspace(-15, 15, 100) # Define time vector In [18]: psi0 = tensor(basis(N, 0), ustate) # Define initial state In [19]: state_GG = tensor(basis(N, 1), ground) # Define states onto which to project In [20]: sigma_GG = state_GG * state_GG.dag() In [21]: state_UU = tensor(basis(N, 0), ustate) In [22]: sigma_UU = state_UU * state_UU.dag() In [23]: g = 5 # coupling strength ## In [24]: H0 = -g * (sigma_ge.dag() * a + a.dag() * sigma_ge) In [25]: H1 = (sigma_ue.dag() + sigma_ue) # time-independent term # time-dependent term Given that we have a single time-dependent Hamiltonian term, and constant collapse terms, we need to specify a single Python function for the coefficient (). In this case, one can simply do In [26]: def H1_coeff(t, args): ....: return 9 * np.exp(-(t / 5.) ** 2) ....: In this case, the return value dependents only on time. However, when specifying Python functions for coefficients, the function must have (t,args) as the input variables, in that order. Having specified our coefficient function, we can now specify the Hamiltonian in list format and call the solver (in this case qutip.mesolve) In [27]: H = [H0,[H1,H1_coeff]] In [28]: output = mesolve(H, psi0, t, c_ops, [ada, sigma_UU, sigma_GG]) We can call the Monte Carlo solver in the exact same way (if using the default ntraj=500): In [29]: output = mcsolve(H, psi0, t, c_ops, [ada, sigma_UU, sigma_GG]) 10.0%. Run time: 0.56s. Est. time left: 00:00:00:05 20.0%. Run time: 1.08s. Est. time left: 00:00:00:04 55 ## 30.0%. Run time: 1.64s. Est. time left: 00:00:00:03 40.0%. Run time: 2.31s. Est. time left: 00:00:00:03 50.0%. Run time: 2.90s. Est. time left: 00:00:00:02 60.0%. Run time: 3.49s. Est. time left: 00:00:00:02 70.0%. Run time: 4.10s. Est. time left: 00:00:00:01 80.0%. Run time: 4.65s. Est. time left: 00:00:00:01 90.0%. Run time: 5.22s. Est. time left: 00:00:00:00 100.0%. Run time: 5.80s. Est. time left: 00:00:00:00 Total run time: 5.92s The output from the master equation solver is identical to that shown in the examples, the Monte Carlo however will be noticeably off, suggesting we should increase the number of trajectories for this example. In addition, we can also consider the decay of a simple Harmonic oscillator with time-varying decay rate In [30]: kappa = 0.5 In [31]: def col_coeff(t, args): # coefficient function ....: return np.sqrt(kappa * np.exp(-t)) ....: In [32]: N = 10 ## # number of basis states In [33]: a = destroy(N) In [34]: H = a.dag() * a # simple HO # initial state ## In [37]: times = np.linspace(0, 10, 100) In [38]: output = mesolve(H, psi0, times, c_ops, [a.dag() * a]) ## Using the args variable In the previous example we hardcoded all of the variables, driving amplitude and width , with their numerical values. This is fine for problems that are specialized, or that we only want to run once. 
However, in many cases, we would like to change the parameters of the problem in only one location (usually at the top of the script), and not have to worry about manually changing the values on each run. QuTiP allows you to accomplish this using the keyword args as an input to the solvers. For instance, instead of explicitly writing 9 for the amplitude and 5 for the width of the gaussian driving term, we can make us of the args variable In [39]: def H1_coeff(t, args): ....: return args['A'] * np.exp(-(t/args['sigma'])**2) ....: or equivalently, In [40]: def H1_coeff(t, args): ....: A = args['A'] ....: sig = args['sigma'] ....: return A * np.exp(-(t / sig) ** 2) ....: where args is a Python dictionary of key: value pairs args = {A: a, sigma: b} where a and b are the two parameters for the amplitude and width, respectively. Of course, we can always hardcode the values in the dictionary as well args = {A: 9, sigma: 5}, but there is much more flexibility by using variables in args. To let the solvers know that we have a set of args to pass we append the args to the end of the solver input: 56 In [41]: output = mesolve(H, psi0, times, c_ops, [a.dag() * a], args={'A': 9, 'sigma': 5}) ## or to keep things looking pretty In [42]: args = {'A': 9, 'sigma': 5} In [43]: output = mesolve(H, psi0, times, c_ops, [a.dag() * a], args=args) Once again, the Monte Carlo solver qutip.mcsolve works in an identical manner. String Format Method Note: You must have Cython installed on your computer to use this format. See Installation for instructions on installing Cython. The string-based time-dependent format works in a similar manner as the previously discussed Python function method. That being said, the underlying code does something completely different. When using this format, the strings used to represent the time-dependent coefficients, as well as Hamiltonian and collapse operators, are rewritten as Cython code using a code generator class and then compiled into C code. The details of this metaprogramming will be published in due course. however, in short, this can lead to a substantial reduction in time for complex time-dependent problems, or when simulating over long intervals. Like the previous method, the string-based format uses a list pair format [Op, str] where str is now a string representing the time-dependent coefficient. For our first example, this string would be 9 * exp(-(t / 5.) ** 2). The Hamiltonian in this format would take the form: In [44]: H = [H0, [H1, '9 * exp(-(t / 5) ** 2)']] Notice that this is a valid Hamiltonian for the string-based format as exp is included in the above list of suitable functions. Calling the solvers is the same as before: In [45]: output = mesolve(H, psi0, t, c_ops, [a.dag() * a]) We can also use the args variable in the same manner as before, however we must rewrite our string term to read: A * exp(-(t / sig) ** 2) In [46]: H = [H0, [H1, 'A * exp(-(t / sig) ** 2)']] In [47]: args = {'A': 9, 'sig': 5} In [48]: output = mesolve(H, psi0, times, c_ops, [a.dag()*a], args=args) Important: Naming your args variables e, j or pi will cause errors when using the string-based format. Collapse operators are handled in the exact same way. Reusing Time-Dependent Hamiltonian Data Note: This section covers a specialized topic and may be skipped if you are new to QuTiP. 
When repeatedly simulating a system where only the time-dependent variables, or initial state change, it is possible to reuse the Hamiltonian data stored in QuTiP and there by avoid spending time needlessly preparing the Hamiltonian and collapse terms for simulation. To turn on the the reuse features, we must pass a qutip.Options object with the rhs_reuse flag turned on. Instructions on setting flags are found in Setting Options for the Dynamics Solvers. For example, we can do In [49]: H = [H0, [H1, 'A * exp(-(t / sig) ** 2)']] In [50]: args = {'A': 9, 'sig': 5} 57 ## In [51]: output = mcsolve(H, psi0, times, c_ops, [a.dag()*a], args=args) 10.0%. Run time: 0.36s. Est. time left: 00:00:00:03 20.0%. Run time: 0.64s. Est. time left: 00:00:00:02 30.0%. Run time: 0.93s. Est. time left: 00:00:00:02 40.0%. Run time: 1.21s. Est. time left: 00:00:00:01 50.0%. Run time: 1.48s. Est. time left: 00:00:00:01 60.0%. Run time: 1.76s. Est. time left: 00:00:00:01 70.0%. Run time: 2.00s. Est. time left: 00:00:00:00 80.0%. Run time: 2.23s. Est. time left: 00:00:00:00 90.0%. Run time: 2.46s. Est. time left: 00:00:00:00 100.0%. Run time: 2.71s. Est. time left: 00:00:00:00 Total run time: 2.81s In [52]: opts = Options(rhs_reuse=True) In [53]: args = {'A': 10, 'sig': 3} In [54]: output = mcsolve(H, psi0, times, c_ops, [a.dag()*a], args=args, options=opts) 10.0%. Run time: 0.28s. Est. time left: 00:00:00:02 20.0%. Run time: 0.53s. Est. time left: 00:00:00:02 30.0%. Run time: 0.78s. Est. time left: 00:00:00:01 40.0%. Run time: 1.09s. Est. time left: 00:00:00:01 50.0%. Run time: 1.37s. Est. time left: 00:00:00:01 60.0%. Run time: 1.67s. Est. time left: 00:00:00:01 70.0%. Run time: 1.96s. Est. time left: 00:00:00:00 80.0%. Run time: 2.35s. Est. time left: 00:00:00:00 90.0%. Run time: 2.72s. Est. time left: 00:00:00:00 100.0%. Run time: 2.99s. Est. time left: 00:00:00:00 Total run time: 3.03s The second call to qutip.mcsolve does not reorganize the data, and in the case of the string format, does not recompile the Cython code. For the small system here, the savings in computation time is quite small, however, if you need to call the solvers many times for different parameters, this savings will obviously start to add up. Running String-Based Time-Dependent Problems using Parfor Note: This section covers a specialized topic and may be skipped if you are new to QuTiP. In this section we discuss running string-based time-dependent problems using the qutip.parfor function. As the qutip.mcsolve function is already parallelized, running string-based time dependent problems inside of parfor loops should be restricted to the qutip.mesolve function only. When using the string-based format, the system Hamiltonian and collapse operators are converted into C code with a specific file name that is automatically genrated, or supplied by the user via the rhs_filename property of the qutip.Options class. Because the qutip.parfor function uses the built-in Python multiprocessing functionality, in calling the solver inside a parfor loop, each thread will try to generate compiled code with the same file name, leading to a crash. To get around this problem you can call the qutip.rhs_generate function to compile simulation into C code before calling parfor. You must then set the qutip.Odedata object rhs_reuse=True for all solver calls inside the parfor loop that indicates that a valid C code file already exists and a new one should not be generated. 
As an example, we will look at the Landau-Zener-Stuckelberg interferometry example that can be found in the notebook Time-dependent master equation: Landau-Zener-Stuckelberg inteferometry in the tutorials section of the QuTiP web site. To set up the problem, we run the following code: In [55]: delta = 0.1 In [56]: w = 2.0 * 2 * np.pi * 2 * np.pi ## # qubit sigma_x coefficient # driving frequency In [57]: T = 2 * np.pi / w # driving period ## In [58]: gamma1 = 0.00001 # relaxation rate 58 # dephasing rate # epsilon # Amplitude ## In [62]: sx = sigmax(); sz = sigmaz(); sm = destroy(2); sn = num(2) In [63]: c_ops = [np.sqrt(gamma1) * sm, np.sqrt(gamma2) * sz] ## In [64]: H0 = -delta / 2.0 * sx In [65]: H1 = [sz, '-eps / 2.0 + A / 2.0 * sin(w * t)'] In [66]: H_td = [H0, H1] In [67]: Hargs = {'w': w, 'eps': eps_list[0], 'A': A_list[0]} where the last code block sets up the problem using a string-based Hamiltonian, and Hargs is a dictionary of arguments to be passed into the Hamiltonian. In this example, we are going to use the qutip.propagator and qutip.propagator.propagator_steadystate to find expectation values for different values of and in the Hamiltonian = 12 12 21 sin(). We must now tell the qutip.mesolve function, that is called by qutip.propagator to reuse a pregenerated Hamiltonian constructed using the qutip.rhs_generate command: In [68]: opts = Options(rhs_reuse=True) In [69]: rhs_generate(H_td, c_ops, Hargs, name='lz_func') Here, we have given the generated file a custom name lz_func, however this is not necessary as a generic name will automatically be given. Now we define the function task that is called by qutip.parallel.parfor with the m-index parallelized in loop over the elements of p_mat[m,n]: ....: m, eps = args ....: p_mat_m = np.zeros(len(A_list)) ....: for n, A in enumerate(A_list): ....: # change args sent to solver, w is really a constant though. ....: Hargs = {'w': w, 'eps': eps,'A': A} ....: U = propagator(H_td, T, c_ops, Hargs, opts) #<- IMPORTANT LINE ....: ....: p_mat_m[n] = expect(sn, rho_ss) ....: return [m, p_mat_m] ....: ## Notice the Options opts in the call to the qutip.propagator function. This is tells the qutip.mesolve function used in the propagator to call the pre-generated file lz_func. If this were missing then the routine would fail. Floquet Formalism Introduction Many time-dependent problems of interest are periodic. The dynamics of such systems can be solved for directly by numerical integration of the Schrdinger or Master equation, using the time-dependent Hamiltonian. But they can also be transformed into time-independent problems using the Floquet formalism. Time-independent problems can be solve much more efficiently, so such a transformation is often very desirable. In the standard derivations of the Lindblad and Bloch-Redfield master equations the Hamiltonian describing the system under consideration is assumed to be time independent. Thus, strictly speaking, the standard forms of these master equation formalisms should not blindly be applied to system with time-dependent Hamiltonians. However, in many relevant cases, in particular for weak driving, the standard master equations still turns out to be useful for many time-dependent problems. But a more rigorous approach would be to rederive the master equation 59 taking the time-dependent nature of the Hamiltonian into account from the start. The Floquet-Markov Master equation is one such a formalism, with important applications for strongly driven systems (see e.g., [Gri98]). 
Here we give an overview of how the Floquet and Floquet-Markov formalisms can be used for solving timedependent problems in QuTiP. To introduce the terminology and naming conventions used in QuTiP we first give a brief summary of quantum Floquet theory. Floquet theory for unitary evolution The Schrdinger equation with a time-dependent Hamiltonian () is ()() = (), (3.15) where () is the wave function solution. Here we are interested in problems with periodic time-dependence, i.e., the Hamiltonian satisfies () = ( + ) where is the period. According to the Floquet theorem, there exist solutions to (3.15) on the form () = exp( /) (), (3.16) where () are the Floquet states (i.e., the set of wave function solutions to the Schrdinger equation), () = ( + ) are the periodic Floquet modes, and are the quasienergy levels. The quasienergy levels are constants in time, but only uniquely defined up to multiples of 2/ (i.e., unique value in the interval [0, 2/ ]). If we know the Floquet modes (for [0, ]) and the quasienergies for a particular (), we can easily decompose any initial wavefunction ( = 0) in the Floquet states and immediately obtain the solution for arbitrary () = () = exp( /) (), (3.17) where the coefficients are determined by the initial wavefunction (0) = (0). This formalism is useful for finding () for a given () only if we can obtain the Floquet modes () and quasienergies more easily than directly solving (3.15). By substituting (3.16) into the Schrdinger equation (3.15) we obtain an eigenvalue equation for the Floquet modes and quasienergies () () = (), (3.18) where () = () . This eigenvalue problem could be solved analytically or numerically, but in QuTiP we use an alternative approach for numerically finding the Floquet states and quasienergies [see e.g. Creffield et al., Phys. Rev. B 67, 165301 (2003)]. Consider the propagator for the time-dependent Schrdinger equation (3.15), which by definition satisfies ( + , )() = ( + ). Inserting the Floquet states from (3.16) into this expression results in ( + , ) exp( /) () = exp( ( + )/) ( + ), or, since ( + ) = (), ( + , ) () = exp( /) () = (), which shows that the Floquet modes are eigenstates of the one-period propagator. We can therefore find the Floquet modes and quasienergies = arg( )/ by numerically calculating ( + , ) and diagonalizing it. In particular this method is useful to find (0) by calculating and diagonalize (, 0). The Floquet modes at arbitrary time can then be found by propagating (0) to () using the wave function propagator (, 0) (0) = (), which for the Floquet modes yields (, 0) (0) = exp( /) (), so that () = exp( / ) (, 0) (0). Since () is periodic we only need to evaluate it for [0, ], and from ( [0, ]) we can directly evaluate (), () and () for arbitrary large . 60 ## Floquet formalism in QuTiP QuTiP provides a family of functions to calculate the Floquet modes and quasi energies, Floquet state decomposition, etc., given a time-dependent Hamiltonian on the callback format, list-string format and list-callback format (see, e.g., qutip.mesolve for details). Consider for example the case of a strongly driven two-level atom, described by the Hamiltonian 1 1 1 () = 0 + sin() . 
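To fix notation for the functions that follow, recall the central relations from the theory above: the Floquet states, the periodicity of the Floquet modes, and the eigenvalue relation for the one-period propagator,

```latex
\psi_\alpha(t) = e^{-i\epsilon_\alpha t/\hbar}\,\Phi_\alpha(t),
\qquad \Phi_\alpha(t+T) = \Phi_\alpha(t),
\qquad U(T+t,\,t)\,\Phi_\alpha(t) = e^{-i\epsilon_\alpha T/\hbar}\,\Phi_\alpha(t),
```

so the Floquet modes at t = 0 and the quasienergies epsilon_alpha are obtained numerically by diagonalizing U(T, 0), which is what the functions described next do.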
2 2 2 (3.19) ## In QuTiP we can define this Hamiltonian as follows: In [1]: delta = 0.2 * 2*np.pi; eps0 = 1.0 * 2*np.pi; A = 2.5 * 2*np.pi; omega = 1.0 * 2*np.pi In [2]: H0 = - delta/2.0 * sigmax() - eps0/2.0 * sigmaz() In [3]: H1 = A/2.0 * sigmaz() In [4]: args = {'w': omega} In [5]: H = [H0, [H1, 'sin(w * t)']] The = 0 Floquet modes corresponding to the Hamiltonian (3.19) can then be calculated using the qutip.floquet.floquet_modes function, which returns lists containing the Floquet modes and the quasienergies In [6]: T = 2*pi / omega In [7]: f_modes_0, f_energies = floquet_modes(H, T, args) In [8]: f_energies Out[8]: array([-2.83131212, 2.83131212]) In [9]: f_modes_0 Out[9]: [Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.72964231+0.j ] [-0.39993746+0.554682j]], Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.39993746+0.554682j] [ 0.72964231+0.j ]]] For some problems interesting observations can be draw from the quasienergy levels alone. Consider for example the quasienergies for the driven two-level system introduced above as a function of the driving amplitude, calculated and plotted in the following example. For certain driving amplitudes the quasienergy levels cross. Since the the quasienergies can be associated with the time-scale of the long-term dynamics due that the driving, degenerate quasienergies indicates a freezing of the dynamics (sometimes known as coherent destruction of tunneling). In [10]: delta = 0.2 * 2*np.pi; eps0 = 0.0 * 2*np.pi ## In [11]: omega = 1.0 * 2*np.pi; A_vec = np.linspace(0, 10, 100) * omega; In [12]: T = (2*pi)/omega In [13]: tlist = np.linspace(0.0, 10 * T, 101) In [14]: psi0 = basis(2,0) 61 ## In [16]: H0 = delta/2.0 * sigmaz() - eps0/2.0 * sigmax() In [17]: args = omega In [18]: for idx, A in enumerate(A_vec): ....: H1 = A/2.0 * sigmax() ....: H = [H0, [H1, lambda t, w: sin(w*t)]] ....: f_modes, f_energies = floquet_modes(H, T, args, True) ....: q_energies[idx,:] = f_energies ....: In [19]: figure() Out[19]: <matplotlib.figure.Figure at 0x10b198b90> ## In [20]: plot(A_vec/omega, q_energies[:,0] / delta, 'b', A_vec/omega, q_energies[:,1] / delta, 'r' Out[20]: [<matplotlib.lines.Line2D at 0x107b30ed0>, <matplotlib.lines.Line2D at 0x107b30610>] In [21]: xlabel(r'$A/\omega$') Out[21]: <matplotlib.text.Text at 0x107c75b10> In [22]: ylabel(r'Quasienergy / $\Delta$') Out[22]: <matplotlib.text.Text at 0x107b30050> In [23]: title(r'Floquet quasienergies') Out[23]: <matplotlib.text.Text at 0x105dd4590> In [24]: show() Given the Floquet modes at = 0, we obtain the Floquet mode at some later time using the function qutip.floquet.floquet_mode_t: In [25]: f_modes_t = floquet_modes_t(f_modes_0, f_energies, 2.5, H, T, args) In [26]: f_modes_t Out[26]: [Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket 62 Qobj data = [[-0.89630512-0.23191946j] [ 0.37793106-0.00431336j]], Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[-0.37793106-0.00431336j] [-0.89630512+0.23191946j]]] The purpose of calculating the Floquet modes is to find the wavefunction solution to the original problem (3.19) given some initial state |0 . 
To do that, we first need to decompose the initial state in the Floquet states, using the function qutip.floquet.floquet_state_decomposition In [27]: psi0 = rand_ket(2) In [28]: ## f_coeff = floquet_state_decomposition(f_modes_0, f_energies, psi0) In [29]: f_coeff Out[29]: [(-0.46270277543605265+0.49439762918280311j), (-0.56466987689016934+0.47183159707149747j)] and given this decomposition of the initial state in the Floquet states we can easily evaluate the wavefunction that is the solution to (3.19) at an arbitrary time using the function qutip.floquet.floquet_wavefunction_t In [30]: t = 10 * np.random.rand() In [31]: psi_t = floquet_wavefunction_t(f_modes_0, f_energies, f_coeff, t, H, T, args) In [32]: psi_t Out[32]: Quantum object: dims = [[2], [1]], shape = [2, 1], type = ket Qobj data = [[ 0.60556819-0.05015488j] [ 0.63266806+0.48010705j]] The following example illustrates how to use the functions introduced above to calculate and plot the timeevolution of (3.19). from qutip import * from scipy import * delta = 0.2 * 2*pi; eps0 = 1.0 * 2*pi A = 0.5 * 2*pi; omega = 1.0 * 2*pi T = (2*pi)/omega tlist = linspace(0.0, 10 * T, 101) psi0 = basis(2,0) H0 = - delta/2.0 * sigmax() - eps0/2.0 * sigmaz() H1 = A/2.0 * sigmaz() args = {'w': omega} H = [H0, [H1, lambda t,args: sin(args['w'] * t)]] # find the floquet modes for the time-dependent hamiltonian f_modes_0,f_energies = floquet_modes(H, T, args) # decompose the inital state in the floquet modes f_coeff = floquet_state_decomposition(f_modes_0, f_energies, psi0) # calculate the wavefunctions using the from the floquet modes p_ex = zeros(len(tlist)) for n, t in enumerate(tlist): psi_t = floquet_wavefunction_t(f_modes_0, f_energies, f_coeff, t, H, T, args) 63 ## p_ex[n] = expect(num(2), psi_t) # For reference: calculate the same thing with mesolve p_ex_ref = mesolve(H, psi0, tlist, [], [num(2)], args).expect[0] # plot the results from pylab import * plot(tlist, real(p_ex), 'ro', tlist, 1-real(p_ex), 'bo') plot(tlist, real(p_ex_ref), 'r', tlist, 1-real(p_ex_ref), 'b') xlabel('Time') ylabel('Occupation probability') legend(("Floquet $P_1$", "Floquet $P_0$", "Lindblad $P_1$", "Lindblad $P_0$")) show() 1.0 Floquet P1 Floquet P0 Occupation probability 0.8 0.6 0.4 0.2 0.0 0 Time 10 ## Pre-computing the Floquet modes for one period When evaluating the Floquet states or the wavefunction at many points in time it is useful to pre-compute the Floquet modes for the first period of the driving with the required resolution. In QuTiP the function qutip.floquet.floquet_modes_table calculates a table of Floquet modes which later can be used together with the function qutip.floquet.floquet_modes_t_lookup to efficiently lookup the Floquet mode at an arbitrary time. The following example illustrates how the example from the previous section can be solved more efficiently using these functions for pre-computing the Floquet modes. 
from qutip import * from scipy import * delta = 0.0 * 2*pi; eps0 = 1.0 * 2*pi A = 0.25 * 2*pi; omega = 1.0 * 2*pi T = (2*pi)/omega tlist = linspace(0.0, 10 * T, 101) 64 psi0 = basis(2,0) ## H0 = - delta/2.0 * sigmax() - eps0/2.0 * sigmaz() H1 = A/2.0 * sigmax() args = {'w': omega} H = [H0, [H1, lambda t,args: sin(args['w'] * t)]] # find the floquet modes for the time-dependent hamiltonian f_modes_0,f_energies = floquet_modes(H, T, args) # decompose the inital state in the floquet modes f_coeff = floquet_state_decomposition(f_modes_0, f_energies, psi0) # calculate the wavefunctions using the from the floquet modes f_modes_table_t = floquet_modes_table(f_modes_0, f_energies, tlist, H, T, args) p_ex = zeros(len(tlist)) for n, t in enumerate(tlist): f_modes_t = floquet_modes_t_lookup(f_modes_table_t, t, T) psi_t = floquet_wavefunction(f_modes_t, f_energies, f_coeff, t) p_ex[n] = expect(num(2), psi_t) # For reference: calculate the same thing with mesolve p_ex_ref = mesolve(H, psi0, tlist, [], [num(2)], args).expect[0] # plot the results from pylab import * plot(tlist, real(p_ex), 'ro', tlist, 1-real(p_ex), 'bo') plot(tlist, real(p_ex_ref), 'r', tlist, 1-real(p_ex_ref), 'b') xlabel('Time') ylabel('Occupation probability') legend(("Floquet $P_1$", "Floquet $P_0$", "Lindblad $P_1$", "Lindblad $P_0$")) show() 65 1.0 Floquet P1 Floquet P0 Occupation probability 0.8 0.6 0.4 0.2 0.0 0 Time 10 Note that the parameters and the Hamiltonian used in this example is not the same as in the previous section, and hence the different appearance of the resulting figure. For convenience, all the steps described above for calculating the evolution of a quantum system using the Floquet formalisms are encapsulated in the function qutip.floquet.fsesolve. Using this function, we could have achieved the same results as in the examples above using: output = fsesolve(H, psi0, times, [num(2)], args) p_ex = output.expect[0] ## Floquet theory for dissipative evolution A driven system that is interacting with its environment is not necessarily well described by the standard Lindblad master equation, since its dissipation process could be time-dependent due to the driving. In such cases a rigorious approach would be to take the driving into account when deriving the master equation. This can be done in many different ways, but one way common approach is to derive the master equation in the Floquet basis. That approach results in the so-called Floquet-Markov master equation, see Grifoni et al., Physics Reports 304, 299 (1998) for details. The Floquet-Markov master equation in QuTiP ## The QuTiP function qutip.floquet.fmmesolve implements the Floquet-Markov master equation. It calculates the dynamics of a system given its initial state, a time-dependent hamiltonian, a list of operators through which the system couples to its environment and a list of corresponding spectral-density functions that describes the environment. In contrast to the qutip.mesolve and qutip.mcsolve, and the qutip.floquet.fmmesolve does characterize the environment with dissipation rates, but extract the strength of the coupling to the environment from the noise spectral-density functions and the instantaneous Hamiltonian parameters (similar to the Bloch-Redfield master equation solver qutip.bloch_redfield.brmesolve). Note: Currently the qutip.floquet.fmmesolve can only accept a single environment coupling operator 66 ## and spectral-density function. 
The noise spectral-density function of the environment is implemented as a Python callback function that is passed to the solver. For example: >>> gamma1 = 0.1 >>> def noise_spectrum(omega): >>> return 0.5 * gamma1 * omega/(2*pi) The other parameters are similar to the qutip.mesolve and qutip.mcsolve, and the same format for the return value is used qutip.solver.Result. The following example extends the example studied above, and uses qutip.floquet.fmmesolve to introduce dissipation into the calculation from qutip import * from scipy import * delta = 0.0 * 2*pi; eps0 = 1.0 * 2*pi A = 0.25 * 2*pi; omega = 1.0 * 2*pi T = (2*pi)/omega tlist = linspace(0.0, 20 * T, 101) psi0 = basis(2,0) H0 = - delta/2.0 * sigmax() - eps0/2.0 * sigmaz() H1 = A/2.0 * sigmax() args = {'w': omega} H = [H0, [H1, lambda t,args: sin(args['w'] * t)]] # noise power spectrum gamma1 = 0.1 def noise_spectrum(omega): return 0.5 * gamma1 * omega/(2*pi) # find the floquet modes for the time-dependent hamiltonian f_modes_0, f_energies = floquet_modes(H, T, args) # precalculate mode table f_modes_table_t = floquet_modes_table(f_modes_0, f_energies, linspace(0, T, 500 + 1), H, T, args) # solve the floquet-markov master equation output = fmmesolve(H, psi0, tlist, [sigmax()], [], [noise_spectrum], T, args) # calculate expectation values in the computational basis p_ex = zeros(shape(tlist), dtype=complex) for idx, t in enumerate(tlist): f_modes_t = floquet_modes_t_lookup(f_modes_table_t, t, T) p_ex[idx] = expect(num(2), output.states[idx].transform(f_modes_t, True)) # For reference: calculate the same thing with mesolve output = mesolve(H, psi0, tlist, [sqrt(gamma1) * sigmax()], [num(2)], args) p_ex_ref = output.expect[0] # plot the results from pylab import * plot(tlist, real(p_ex), 'r--', tlist, 1-real(p_ex), 'b--') plot(tlist, real(p_ex_ref), 'r', tlist, 1-real(p_ex_ref), 'b') xlabel('Time') ylabel('Occupation probability') legend(("Floquet $P_1$", "Floquet $P_0$", "Lindblad $P_1$", "Lindblad $P_0$")) show() 67 1.0 Floquet P1 Floquet P0 Occupation probability 0.8 0.6 0.4 0.2 0.0 0.2 0 10 Time 15 20 Alternatively, we can let the qutip.floquet.fmmesolve function transform the density matrix at each time step back to the computational basis, and calculating the expectation values for us, but using: output = fmmesolve(H, psi0, times, [sigmax()], [num(2)], [noise_spectrum], T, args) p_ex = output.expect[0] ## Setting Options for the Dynamics Solvers Occasionally it is necessary to change the built in parameters of the dynamics solvers used by for example the qutip.mesolve and qutip.mcsolve functions. The options for all dynamics solvers may be changed by using the Options class qutip.solver.Options. In [1]: options = Options() the properties and default values of this class can be view via the print function: In [2]: print(options) Options: ----------atol: 1e-08 rtol: 1e-06 method: order: 12 nsteps: 1000 first_step: 0 min_step: 0 max_step: 0 tidy: True num_cpus: 0 norm_tol: 0.001 norm_steps: 5 68 rhs_filename: rhs_reuse: seeds: rhs_with_state: average_expect: average_states: ntraj: store_states: store_final_state: None False 0 False True False 500 False False These properties are detailed in the following table. Assuming options = Options(): Property Default setting Description options.atol 1e-8 Absolute tolerance options.rtol 1e-6 Relative tolerance options.method Solver method. Can be adams (non-stiff) or bdf (stiff) options.order 12 Order of solver. Must be <=12 for adams and <=5 for bdf options.nsteps 1000 Max. 
number of steps to take for each interval options.first_step 0 Size of initial step. 0 = determined automatically by solver. options.min_step 0 Minimum step size. 0 = determined automatically by solver. options.max_step 0 Maximum step size. 0 = determined automatically by solver. options.tidy True Whether to run tidyup function on time-independent Hamiltonian. options.num_cpus installed num of Integer number of cpus used by mcsolve. processors opNone RHS filename when using compiled time-dependent tions.rhs_filename Hamiltonians. options.rhs_reuse False Reuse compiled RHS function. Useful for repeatative tasks. options.gui True (if GUI) Use the mcsolve progessbar. Defaults to False on Windows. options.mc_avg True Average over trajectories for expectation values from mcsolve. As an example, let us consider changing the number of processors used, turn the GUI off, and strengthen the absolute tolerance. There are two equivalent ways to do this using the Options class. First way, or one can use an inline method, Note that the order in which you input the options does not matter. Using either method, the resulting options variable is now: In [3]: print(options) Options: ----------atol: 1e-08 rtol: 1e-06 method: order: 12 nsteps: 1000 first_step: 0 min_step: 0 max_step: 0 tidy: True num_cpus: 0 norm_tol: 0.001 norm_steps: 5 rhs_filename: None rhs_reuse: False seeds: 0 rhs_with_state: False average_expect: True average_states: False ntraj: 500 69 store_states: False store_final_state: False To use these new settings we can use the keyword argument options in either the func:qutip.mesolve and qutip.mcsolve function. We can modify the last example as: >>> mesolve(H0, psi0, tlist, c_op_list, [sigmaz()], options=options) >>> mesolve(hamiltonian_t, psi0, tlist, c_op_list, [sigmaz()], H_args, options=options) or: >>> mcsolve(H0, psi0, tlist, ntraj,c_op_list, [sigmaz()], options=options) >>> mcsolve(hamiltonian_t, psi0, tlist, ntraj, c_op_list, [sigmaz()], H_args, options=options) ## 3.6 Solving for Steady-State Solutions Introduction For time-independent open quantum systems with decay rates larger than the corresponding excitation rates, the system will tend toward a steady state as that satisfies the equation = = 0. Although the requirement for time-independence seems quite resitrictive, one can often employ a transformation to the interaction picture that yields a time-independent Hamiltonian. For many these systems, solving for the asymptotic density matrix can be achieved using direct or iterative solution methods faster than using master equation or Monte Carlo simulations. Although the steady state equation has a simple mathematical form, the properties of the Liouvillian operator are such that the solutions to this equation are anything but straightforward to find. ## Steady State Solutions for Arbitrary Systems In QuTiP, the steady-state solution for a system Hamiltonian or Liouvillian is given by finding the steady state, each with their own pros and cons, where the method used can be chosen using the method keyword argument. Method Keyword Description Direct direct Direct solution solving = via sparse LU decomposition. (default) Eigenvalue eigen Iteratively find the eigenvector corresponding to the zero eigenvalue of . Inversepower Iteratively solve for the steady-state solution using the inverse-power method. Power GMRES iterativeIteratively solve for the steady-state solution using the GMRES method and gmres optional preconditioner. 
LGMRES iterativeIteratively solve for the steady-state solution using the LGMRES method and lgmres optional preconditioner. BICGSTAB iterativeIteratively solve for the steady-state solution using the BICGSTAB method bicgstab and optional preconditioner. SVD svd Steady-state solution via the SVD of the Liouvillian represented by a dense matrix. The function qutip.steadystate.steadystate can take either a Hamiltonian and a list of collapse operators as input, generating internally the corresponding Liouvillian super operator in Lindblad form, or alternatively, an arbitrary Liouvillian passed by the user. When possible, we recommend passing the Hamiltonian and Liouvillian for the system. 70 Solving for the steady state solution to the Lindblad master equation for a general system with where H is a quantum object representing the system Hamiltonian, and c_ops is a list of quantum objects for the system collapse operators. The output, labeled as rho_ss, is the steady-state solution for the systems. If no other keywords are passed to the solver, the default direct method is used, generating a solution that is exact to machine precision at the expense of a large memory requirement. The large amount of memory need for the direct LU decomposition method stems from the large bandwidth of the system Liouvillian and the correspondingly large fill-in (extra nonzero elements) generated in the LU factors. This fill-in can be reduced by using bandwidth minimization algorithms such as those discussed in Additional Solver Arguments. Additional parameters may be used by calling the steady-state solver as: >>> rho_ss = steadystate(H, c_ops, method='power', use_rcm=True) where method=power indicates that we are using the inverse-power solution method, and use_rcm=True turns on the bandwidth minimization routine. Although it is not obvious, the direct, eigen, and power methods all use an LU decomposition internally and thus suffer from a large memory overhead. In contrast, iterative methods such as the iterative-gmres, iterative-lgmres, and iterative-bicgstab methods do not factor the matrix and thus take less memory than these previous methods and allowing, in principle, for extremely large system sizes. The downside is that these methods can take much longer than the direct method as the condition number of the Liouvillian matrix is large, indicating that these iterative methods require a large number of iterations for convergence. To overcome this, one can use a preconditioner that solves for an approximate inverse for the (modified) Liouvillian, thus better conditioning the problem, leading to faster convergence. The use of a preconditioner can actually make these iterative methods faster than the other solution methods. The problem with precondioning is that it is only well defined for Hermitian matrices. Since the Liouvillian is non-Hermitian, the ability to find a good preconditioner is not guaranteed. And moreover, if a preconditioner is found, it is not guaranteed to have a good condition number. QuTiP can make use of an incomplete LU preconditioner when using the iterative gmres, lgmres, and bicgstab solvers by setting use_precond=True. The preconditioner optionally makes use of a combination of symmetric and anti-symmetric matrix permutations that attempt to improve the preconditioning process. These features are discussed in the Additional Solver Arguments section. 
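As a hedged illustration of these keyword arguments, the following sketch solves the same steady-state problem twice, once with the default direct method and once with a preconditioned GMRES iteration. The cavity size, drive strength and decay rate are arbitrary example values, not taken from a particular physical system.

import numpy as np
from qutip import destroy, steadystate

# A small driven, damped cavity as a stand-in problem (all numbers are
# illustrative assumptions).
N = 50
a = destroy(N)
H = a.dag() * a + 0.5 * (a + a.dag())      # assumed drive strength 0.5
c_ops = [np.sqrt(0.1) * a]                 # assumed decay rate 0.1

# Default: direct sparse LU solve -- exact, but memory hungry for large systems.
rho_direct = steadystate(H, c_ops)

# Iterative GMRES with an incomplete-LU preconditioner and reverse
# Cuthill-McKee bandwidth minimization, as described above.
rho_gmres = steadystate(H, c_ops, method='iterative-gmres',
                        use_precond=True, use_rcm=True)

# The two solutions should agree to within the iterative solver's tolerance.
print(np.abs((rho_direct - rho_gmres).full()).max())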
Even with these state-of-the-art permutations, the generation of a successful preconditoner for non-symmetric matrices is currently a trial-and-error process due to the lack of mathematical work done in this area. It is always recommended to begin with the direct solver with no additional arguments before selecting a different method. Finding the steady-state solution is not limited to the Lindblad form of the master equation. Any timeindependent Liouvillian constructed from a Hamiltonian and collapse operators can be used as an input: where L is the Louvillian. All of the additional arguments can also be used in this case. 71 Keyword method sparse weight ## Options (default listed first) direct, eigen, power, iterative-gmres,iterativelgmres, svd True, False None permc_spec COLAMD, NATURAL use_rcm False, True use_umfpack False, True use_precond False, True M None, sparse_matrix, LinearOperator use_wbm False, True Description Method used for solving for the steady-state density matrix. ## Use sparse version of direct solver. Allows the user to define the weighting factor used in the direct, GMRES, and LGMRES solvers. Column ordering used in the sparse LU decomposition. Use a Reverse Cuthill-Mckee reordering to minimize the bandwidth of the modified Liouvillian used in the LU decomposition. If use_rcm=True then the column ordering is set to Natural automatically unless explicitly set. Use the umfpack solver rather than the default superLU. on SciPy 0.14+, this option requires installing the scikits.umfpack extension. Attempt to generate a preconditioner when using the iterative-gmres and iterative-lgmres methods. A user defined preconditioner, if any. ## Use a Weighted Bipartite Matching algorithm to attempt to make the modified Liouvillian more diagonally dominate, and thus for favorable for preconditioning. Set to True automatically when using a iterative method, unless explicitly set. tol 1e-9 Tolerance used in finding the solution for all methods expect direct and svd. max10000 Maximum number of iterations to perform for all methods expect iter direct and svd. fill_factor 10 Upper-bound on the allowed fill-in for the approximate inverse preconditioner. This value may need to be set much higher than this in some cases. drop_tol 1e-3 Sets the threshold for the relative magnitude of preconditioner elements that should be dropped. A lower number yields a more accurate approximate inverse at the expense of fill-in and increased runtime. diag_pivot_thresh None Sets the threshold between [0, 1] for which diagonal elements are considered acceptable pivot points when using a preconditioner. ILU_MILU smilu_2 Selects the incomplete LU decomposition method algorithm used. ## Example: Harmonic Oscillator in Thermal Bath A simple example of a system that reaches a steady state is a harmonic oscillator coupled to a thermal environment. Below we consider a harmonic oscillator, initially in the |10 number state, and weakly coupled to a thermal environment characterized by an average particle expectation value of = 2. We calculate the evolution via master equation and Monte Carlo methods, and see that they converge to the steady-state solution. Here we choose to perform only a few Monte Carlo trajectories so we can distinguish this evolution from the master-equation solution. 
In [1]: N = 20 ## # number of basis states to consider In [2]: a = destroy(N) In [3]: H = a.dag() * a In [4]: psi0 = basis(N, 10) In [5]: kappa = 0.1 # initial state # coupling to oscillator 72 In [6]: c_op_list = [] In [7]: n_th_a = 2 ## In [8]: rate = kappa * (1 + n_th_a) In [9]: c_op_list.append(sqrt(rate) * a) # decay operators ## In [10]: rate = kappa * n_th_a In [11]: c_op_list.append(sqrt(rate) * a.dag()) # excitation operators ## In [12]: final_state = steadystate(H, c_op_list) In [13]: fexpt = expect(a.dag() * a, final_state) In [14]: tlist = linspace(0, 50, 100) In [15]: mcdata = mcsolve(H, psi0, tlist, c_op_list, [a.dag() * a], ntraj=100) 10.0%. Run time: 0.41s. Est. time left: 00:00:00:03 20.0%. Run time: 0.82s. Est. time left: 00:00:00:03 30.0%. Run time: 1.17s. Est. time left: 00:00:00:02 40.0%. Run time: 1.61s. Est. time left: 00:00:00:02 50.0%. Run time: 2.01s. Est. time left: 00:00:00:02 60.0%. Run time: 2.35s. Est. time left: 00:00:00:01 70.0%. Run time: 2.65s. Est. time left: 00:00:00:01 80.0%. Run time: 3.01s. Est. time left: 00:00:00:00 90.0%. Run time: 3.43s. Est. time left: 00:00:00:00 100.0%. Run time: 3.82s. Est. time left: 00:00:00:00 Total run time: 3.87s In [16]: medata = mesolve(H, psi0, tlist, c_op_list, [a.dag() * a]) In [17]: figure() In [18]: plot(tlist, mcdata.expect[0], tlist, medata.expect[0], lw=2) Out[18]: [<matplotlib.lines.Line2D at 0x10db8d750>, <matplotlib.lines.Line2D at 0x10db8d9d0>] In [19]: axhline(y=fexpt, color='r', lw=1.5) # ss expt. value as horiz line (= 2) Out[19]: <matplotlib.lines.Line2D at 0x10da59150> In [20]: ylim([0, 10]) Out[20]: (0, 10) In [21]: xlabel('Time', fontsize=14) Out[21]: <matplotlib.text.Text at 0x10da59cd0> In [22]: ylabel('Number of excitations', fontsize=14) Out[22]: <matplotlib.text.Text at 0x10da88a50> In [23]: legend(('Monte-Carlo', 'Master Equation', 'Steady State')) Out[23]: <matplotlib.legend.Legend at 0x10d932fd0> In [24]: title('Decay of Fock state $\left|10\\rangle\\right.$' + ....: ' in a thermal environment with $\langle n\\rangle=2$') ....: Out[24]: <matplotlib.text.Text at 0x10da84550> In [25]: show() 73 ## 3.7 An Overview of the Eseries Class Exponential-series representation of time-dependent quantum objects The eseries object in QuTiP is a representation of an exponential-series expansion of time-dependent quantum objects (a concept borrowed from the quantum optics toolbox). An exponential series is parameterized by its amplitude coefficients and rates , so that the series takes the form () = . The coefficients are typically quantum objects (type Qobj: states, operators, etc.), so that the value of the eseries also is a quantum object, and the rates can be either real or complex numbers (describing decay rates and oscillation frequencies, respectively). Note that all amplitude coefficients in an exponential series must be of the same dimensions and composition. In QuTiP, an exponential series object is constructed by creating an instance of the class qutip.eseries: In [1]: es1 = eseries(sigmax(), 1j) where the first argument is the amplitude coefficient (here, the sigma-X operator), and the second argument is the rate. The eseries in this example represents the time-dependent operator . 
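The rates need not be purely imaginary: a real, negative rate describes a decaying amplitude. The following minimal sketch (the decay rate of 1 is an arbitrary example) represents the operator \sigma_z e^{-t}:

from qutip import eseries, sigmaz, esval

# A real, negative rate gives exponential decay: this eseries represents
# the operator sigma_z * exp(-t).
es_decay = eseries(sigmaz(), -1.0)

print(esval(es_decay, 0.0))   # sigma_z scaled by exp(0) = 1
print(esval(es_decay, 1.0))   # sigma_z scaled by exp(-1) ~= 0.37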
To add more terms to an qutip.eseries object we simply add objects using the + operator: In [2]: omega=1.0 In [3]: es2 = (eseries(0.5 * sigmax(), 1j * omega) + ...: eseries(0.5 * sigmax(), -1j * omega)) ...: The qutip.eseries in this example represents the operator 0.5 + 0.5 , which is the exponential series representation of cos(). Alternatively, we can also specify a list of amplitudes and rates when the qutip.eseries is created: In [4]: es2 = eseries([0.5 * sigmax(), 0.5 * sigmax()], [1j * omega, -1j * omega]) We can inspect the structure of an qutip.eseries object by printing it to the standard output console: In [5]: es2 Out[5]: ESERIES object: 2 terms 74 ## Hilbert space dimensions: [[2], [2]] Exponent #0 = -1j Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 0.5] [ 0.5 0. ]] Exponent #1 = 1j Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 0.5] [ 0.5 0. ]] ## and we can evaluate it at time t by using the qutip.eseries.esval function: In [6]: esval(es2, 0.0) # equivalent to es2.value(0.0) Out[6]: Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 1.] [ 1. 0.]] ## or for a list of times [0.0, 1.0 * pi, 2.0 * pi]: In [7]: times = [0.0, 1.0 * pi, 2.0 * pi] In [8]: esval(es2, times) # equivalent to es2.value(times) Out[8]: array([ Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 1.] [ 1. 0.]], Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. -1.] [-1. 0.]], Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 1.] [ 1. 0.]]], dtype=object) ## To calculate the expectation value of an time-dependent operator represented by an qutip.eseries, we use the qutip.expect function. For example, consider the operator cos() + sin(), and say we would like to know the expectation value of this operator for a spin in its excited state (rho = fock_dm(2,1) produce this state): In [9]: es3 = (eseries([0.5*sigmaz(), 0.5*sigmaz()], [1j, -1j]) + ...: eseries([-0.5j*sigmax(), 0.5j*sigmax()], [1j, -1j])) ...: In [10]: rho = fock_dm(2, 1) In [11]: es3_expect = expect(rho, es3) In [12]: es3_expect Out[12]: ESERIES object: 2 terms Hilbert space dimensions: [[1, 1]] Exponent #0 = -1j (-0.5+0j) Exponent #1 = 1j (-0.5+0j) 75 ## In [13]: es3_expect.value([0.0, pi/2]) Out[13]: array([ -1.00000000e+00, -6.12323400e-17]) Note the expectation value of the qutip.eseries object, expect(rho, es3), itself is an qutip.eseries, but with amplitude coefficients that are C-numbers instead of quantum operators. To evaluate the C-number qutip.eseries at the times times we use esval(es3_expect, times), or, equivalently, es3_expect.value(times). ## Applications of exponential series The exponential series formalism can be useful for the time-evolution of quantum systems. One approach to calculating the time evolution of a quantum system is to diagonalize its Hamiltonian (or Liouvillian, for dissipative systems) and to express the propagator (e.g., exp() exp()) as an exponential series. The QuTiP function qutip.essolve.ode2es and qutip.essolve use this method to evolve quantum systems in time. The exponential series approach is particularly suitable for cases when the same system is to be evolved for many different initial states, since the diagonalization only needs to be performed once (as opposed to e.g. 
the ode solver that would need to be ran independently for each initial state). As an example, consider a spin-1/2 with a Hamiltonian pointing in the direction, and that is subject to noise causing relaxation. For a spin originally is in the up state, we can create an qutip.eseries object describing its dynamics by using the qutip.es2ode function: In [14]: psi0 = basis(2,1) In [15]: H = sigmaz() In [16]: L = liouvillian(H, [sqrt(1.0) * destroy(2)]) In [17]: es = ode2es(L, psi0) The qutip.essolve.ode2es function diagonalizes the Liouvillian and creates an exponential series with the correct eigenfrequencies and amplitudes for the initial state 0 (psi0). We can examine the resulting qutip.eseries object by printing a text representation: In [18]: es Out[18]: ESERIES object: 2 terms Hilbert space dimensions: [[2], [2]] Exponent #0 = (-1+0j) Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[-1. 0.] [ 0. 1.]] Exponent #1 = 0j Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 1. 0.] [ 0. 0.]] ## or by evaluating it and arbitrary points in time (here at 0.0 and 1.0): In [19]: es.value([0.0, 1.0]) Out[19]: array([ Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0. 0.] [ 0. 1.]], Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isherm = True Qobj data = [[ 0.63212056 0. ] [ 0. 0.36787944]]], dtype=object) and the expectation value of the exponential series can be calculated using the qutip.expect function: 76 ## In [20]: es_expect = expect(sigmaz(), es) The result es_expect is now an exponential series with c-numbers as amplitudes, which easily can be evaluated at arbitrary times: In [21]: es_expect.value([0.0, 1.0, 2.0, 3.0]) Out[21]: array([-1. , 0.26424112, 0.72932943, 0.90042586]) ## In [22]: times = linspace(0.0, 10.0, 100) In [23]: sz_expect = es_expect.value(times) In [24]: from pylab import * In [25]: plot(times, sz_expect, lw=2); In [26]: xlabel("Time", fontsize=16) ....: ylabel("Expectation value of sigma-z", fontsize=16); ....: In [28]: title("The expectation value of the $\sigma_{z}$ operator", fontsize=16); ## 3.8 Two-time correlation functions With the QuTiP time-evolution functions (for example qutip.mesolve and qutip.mcsolve), a state vector or density matrix can be evolved from an initial state at 0 to an arbitrary time , () = (, 0 ) {(0 )}, where (, 0 ) is the propagator defined by the equation of motion. The resulting density matrix can then be used to evaluate the expectation values of arbitrary combinations of same-time operators. To calculate two-time correlation functions on the form ( + )(), we can use the quantum regression theorem (see, e.g., [Gar03]) to write ( + )() = Tr [ ( + , ) {()}] = Tr [ ( + , ) { (, 0) {(0)}}] We therefore first calculate () = (, 0) {(0)} using one of the QuTiP evolution solvers with (0) as initial state, and then again use the same solver to calculate ( + , ) {()} using () as initial state. Note that if the intial state is the steady state, then () = (, 0) {ss } = ss and ( + )() = Tr [ ( + , ) {ss }] = Tr [ (, 0) {ss }] = ( )(0) , which is independent of , so that we only have one time coordinate . 77 QuTiP provides a family of functions that assists in the process of calculating two-time correlation functions. The available functions and their usage is show in the table below. Each of these functions can use one of the following evolution solvers: Master-equation, Exponential series and the Monte-Carlo. 
The choice of solver is defined by the optional argument solver. QuTiP function Correlation function qutip.correlation.correlation or ( + )() or qutip.correlation.correlation_2op_2t ()( + ). qutip.correlation.correlation_ss or ( )(0) or qutip.correlation.correlation_2op_1t (0)( ). qutip.correlation.correlation_4op_1t (0)( )( )(0). qutip.correlation.correlation_4op_2t ()( + )( + )(). The most common use-case is to calculate correlation functions of the kind ( )(0), in which case we use the correlation function solvers that start from the steady state, e.g., the qutip.correlation.correlation_2op_1t function. These correlation function sovlers return a vector or matrix (in general complex) with the correlations as a function of the delays times. The following code demonstrates how to calculate the ()(0) correlation for a leaky cavity with three different relaxation rates. In [1]: times = np.linspace(0,10.0,200) In [2]: a = destroy(10) In [3]: x = a.dag() + a In [4]: H = a.dag() * a In [5]: corr1 = correlation_ss(H, times, [np.sqrt(0.5) * a], x, x) In [6]: corr2 = correlation_ss(H, times, [np.sqrt(1.0) * a], x, x) In [7]: corr3 = correlation_ss(H, times, [np.sqrt(2.0) * a], x, x) In [8]: figure() Out[8]: <matplotlib.figure.Figure at 0x10b2196d0> In [9]: plot(times, np.real(corr1), times, np.real(corr2), times, np.real(corr3)) Out[9]: [<matplotlib.lines.Line2D at 0x10b4a3490>, <matplotlib.lines.Line2D at 0x10b4a33d0>, <matplotlib.lines.Line2D at 0x10b4fddd0>] In [10]: legend(['0.5','1.0','2.0']) Out[10]: <matplotlib.legend.Legend at 0x10df0d310> In [11]: xlabel(r'Time $t$') Out[11]: <matplotlib.text.Text at 0x10d29b1d0> In [12]: ylabel(r'Correlation $\left<x(t)x(0)\right>$') Out[12]: <matplotlib.text.Text at 0x10d263f90> In [13]: show() 78 Emission spectrum Given a correlation function ( )(0) we can define the corresponding power spectrum as () = ( )(0) . ## In QuTiP, we can calculate () using either qutip.correlation.spectrum_ss, which first calculates the correlation function using the qutip.essolve.essolve solver and then performs the Fourier transform semi-analytically, or we can use the function qutip.correlation.spectrum_correlation_fft to numerically calculate the Fourier transform of a given correlation data using FFT. The following example demonstrates how these two functions can be used to obtain the emission power spectrum. import numpy as np from qutip import * import pylab as plt N = 4 # number of cavity fock states wc = wa = 1.0 * 2 * np.pi # cavity and atom frequency g = 0.1 * 2 * np.pi # coupling strength kappa = 0.75 # cavity dissipation rate gamma = 0.25 # atom dissipation rate # Jaynes-Cummings Hamiltonian a = tensor(destroy(N), qeye(2)) sm = tensor(qeye(N), destroy(2)) H = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag()) # collapse operators n_th = 0.25 c_ops = [np.sqrt(kappa * (1 + n_th)) * a, np.sqrt(kappa * n_th) * a.dag(), np.sqrt(gamma) * sm] # # # # ## calculate the correlation obtain the spectrum. Here function for a sufficient that the discrete Fourier ## function using the mesolve solver, and then fft to we need to make sure to evaluate the correlation long time and sufficiently high sampling rate so transform (FFT) captures all the features in the 79 # resulting spectrum. 
tlist = np.linspace(0, 100, 5000) corr = correlation_ss(H, tlist, c_ops, a.dag(), a) wlist1, spec1 = spectrum_correlation_fft(tlist, corr) # calculate the power spectrum using spectrum, which internally uses essolve # to solve for the dynamics (by default) wlist2 = np.linspace(0.25, 1.75, 200) * 2 * np.pi spec2 = spectrum(H, wlist2, c_ops, a.dag(), a) # plot the spectra fig, ax = plt.subplots(1, 1) ax.plot(wlist1 / (2 * np.pi), spec1, 'b', lw=2, label='eseries method') ax.plot(wlist2 / (2 * np.pi), spec2, 'r--', lw=2, label='me+fft method') ax.legend() ax.set_xlabel('Frequency') ax.set_ylabel('Power spectrum') ax.set_title('Vacuum Rabi splitting') ax.set_xlim(wlist2[0]/(2*np.pi), wlist2[-1]/(2*np.pi)) plt.show() ## Vacuum Rabi splitting 0.6 eseries method me+fft method 0.5 Power spectrum 0.4 0.3 0.2 0.1 0.0 0.4 0.6 0.8 1.0 1.2 Frequency 1.4 1.6 More generally, we can also calculate correlation functions of the kind (1 + 2 )(1 ), i.e., the correlation function of a system that is not in its steadystate. In QuTiP, we can evoluate such correlation functions using the function qutip.correlation.correlation_2op_2t. The default behavior of this function is to return a matrix with the correlations as a function of the two time coordinates (1 and 2 ). import numpy as np from qutip import * 80 ## from pylab import * times = np.linspace(0, 10.0, 200) a = destroy(10) x = a.dag() + a H = a.dag() * a alpha = 2.5 rho0 = coherent_dm(10, alpha) corr = correlation_2op_2t(H, rho0, times, times, [np.sqrt(0.25) * a], x, x) pcolor(corr) xlabel(r'Time $t_2$') ylabel(r'Time $t_1$') title(r'Correlation $\left<x(t)x(0)\right>$') show() Correlation hx(t)x(0)i 200 Time t1 150 100 50 0 0 50 100 Time t2 150 200 However, in some cases we might be interested in the correlation functions on the form (1 + 2 )(1 ), but only as a function of time coordinate 2 . In this case we can also use the qutip.correlation.correlation_2op_2t function, if we pass the density matrix at time 1 as second argument, and None as third argument. The qutip.correlation.correlation_2op_2t function then returns a vector with the correlation values corresponding to the times in taulist (the fourth argument). Example: first-order optical coherence function This example demonstrates how to calculate a correlation function on the form ( )(0) for a non-steady initial state. Consider an oscillator that is interacting with a thermal environment. If the oscillator initially is in a coherent state, it will gradually decay to a thermal (incoherent) state. The amount of coherence can be quantified using the ( )(0) first-order optical coherence function (1) ( ) = . For a coherent state | (1) ( )| = 1, and ( )( ) (0)(0) for a completely incoherent (thermal) state (1) ( ) = 0. The following code calculates and plots (1) ( ) as a function of . 
import numpy as np
from qutip import *
from pylab import *

N = 15
taus = np.linspace(0, 10.0, 200)
a = destroy(N)
H = 2 * np.pi * a.dag() * a

# collapse operator
G1 = 0.75
n_th = 2.00  # bath temperature in terms of excitation number
c_ops = [np.sqrt(G1 * (1 + n_th)) * a, np.sqrt(G1 * n_th) * a.dag()]

rho0 = coherent_dm(N, 2.0)

# first calculate the occupation number as a function of time
n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]

# calculate the correlation function G1 and normalize with n to obtain g1
G1 = correlation_2op_2t(H, rho0, None, taus, c_ops, a.dag(), a)
g1 = G1 / np.sqrt(n[0] * n)

plot(taus, g1, 'b')
plot(taus, n, 'r')
title('Decay of a coherent state to an incoherent (thermal) state')
xlabel(r'$\tau$')
legend((r'First-order coherence function $g^{(1)}(\tau)$',
        r'occupation number $n(\tau)$'))
show()

[Figure: Decay of a coherent state to an incoherent (thermal) state -- the first-order coherence function g^{(1)}(\tau) and the occupation number n(\tau) as functions of \tau.]

For convenience, the steps for calculating the first-order coherence function have been collected in the function qutip.correlation.coherence_function_g1.

Example: second-order optical coherence function

The second-order optical coherence function, with time-delay \tau, is defined as

    g^{(2)}(\tau) = \frac{\langle a^\dagger(0) a^\dagger(\tau) a(\tau) a(0) \rangle}{\langle a^\dagger(0) a(0) \rangle^{2}}.

For a coherent state g^{(2)}(\tau) = 1; for a thermal state g^{(2)}(\tau = 0) = 2 and it decreases as a function of time (bunched photons, they tend to appear together); and for a Fock state with n photons g^{(2)}(\tau = 0) = n(n - 1)/n^2 < 1 and it increases with time (anti-bunched photons, more likely to arrive separated in time).

To calculate this type of correlation function with QuTiP, we can use qutip.correlation.correlation_4op_1t, which computes a correlation function of the form \langle A(0)B(\tau)C(\tau)D(0)\rangle (four operators, one delay-time vector). The following code calculates and plots g^{(2)}(\tau) as a function of \tau for a coherent, thermal and Fock state.

import numpy as np
from qutip import *
import pylab as plt

N = 25
taus = np.linspace(0, 25.0, 200)
a = destroy(N)
H = 2 * np.pi * a.dag() * a

kappa = 0.25
n_th = 2.0  # bath temperature in terms of excitation number
c_ops = [np.sqrt(kappa * (1 + n_th)) * a, np.sqrt(kappa * n_th) * a.dag()]

states = [{'state': coherent_dm(N, np.sqrt(2)), 'label': "coherent state"},
          {'state': thermal_dm(N, 2), 'label': "thermal state"},
          {'state': fock_dm(N, 2), 'label': "Fock state"}]

fig, ax = plt.subplots(1, 1)

for state in states:
    rho0 = state['state']

    # first calculate the occupation number as a function of time
    n = mesolve(H, rho0, taus, c_ops, [a.dag() * a]).expect[0]

    # calculate the correlation function G2 and normalize with n(0)n(t) to
    # obtain g2
    G2 = correlation_4op_1t(H, rho0, taus, c_ops, a.dag(), a.dag(), a, a)
    g2 = G2 / (n[0] * n)

    ax.plot(taus, np.real(g2), label=state['label'], lw=2)

ax.legend(loc=0)
ax.set_xlabel(r'$\tau$')
ax.set_ylabel(r'$g^{(2)}(\tau)$')
plt.show()

[Figure: g^{(2)}(\tau) versus \tau for the coherent, thermal and Fock states.]

For convenience, the steps for calculating the second-order coherence function have been collected in the function qutip.correlation.coherence_function_g2.

## 3.9 Plotting on the Bloch Sphere

Important: Updated in QuTiP version 3.0.

Introduction

When studying the dynamics of a two-level system, it is often convenient to visualize the state of the system by plotting the state-vector or density matrix on the Bloch sphere.
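The connection between a qubit state and its Bloch-sphere representation is simply the triple of Pauli expectation values. The following minimal sketch (the state psi is an arbitrary example) computes this vector explicitly and plots it with the qutip.Bloch class introduced below:

import numpy as np
from qutip import basis, sigmax, sigmay, sigmaz, expect, Bloch

# An arbitrary example qubit state.
psi = (basis(2, 0) + (1 + 1j) * basis(2, 1)).unit()

# The Bloch vector is the triple of Pauli expectation values.
vec = [expect(sigmax(), psi), expect(sigmay(), psi), expect(sigmaz(), psi)]

b = Bloch()
b.add_vectors(vec)
b.add_states(psi)   # should point in the same direction as vec
b.show()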
In QuTiP, we have created two different classes to allow for easy creation and manipulation of data sets, both vectors and data points, on the Bloch sphere. The qutip.Bloch class, uses Matplotlib to render the Bloch sphere, where as qutip.Bloch3d uses the Mayavi rendering engine to generate a more faithful 3D reconstruction of the Bloch sphere. ## The Bloch and Bloch3d Classes In QuTiP, creating a Bloch sphere is accomplished by calling either: In [1]: b = Bloch() ## which will load an instance of the qutip.Bloch class, or using: >>> b3d = Bloch3d() that loads the qutip.Bloch3d version. Before getting into the details of these objects, we can simply plot the blank Bloch sphere associated with these instances via: 84 In [2]: b.show() or In addition to the show() command, the Bloch class has the following functions: 85 Name ## Input Parameters (#=optional) pnts list/array of (x,y,z) points, meth=m (default meth=s) will plot a collection of points as multicolored data points. state Qobj or list/array of Qobjs representing state or density matrix of a two-level system, kind (optional) string specifying if state should be plotted as point (point) or vector (default). vec list/array of (x,y,z) points giving direction and length of state vectors. clear() save(#format,#dirc) ## format format (default=png) of output file, dirc (default=cwd) output directory show() As an example, we can add a single data point: Description Adds a single or set of data points to be plotted on the sphere. Input multiple states as a list or array ## adds single or multiple vectors to plot. Removes all data from Bloch sphere. Keeps customized figure properties. Saves Bloch sphere to a file. Generates Bloch sphere with given data. In [5]: b.show() ## and then a single vector: In [6]: vec = [0,1,0] In [8]: b.show() 86 ## and then add another vector corresponding to the |up state: In [9]: up = basis(2,0) In [11]: b.show() Notice that when we add more than a single vector (or data point), a different color will automatically be applied to the later data set (mod 4). In total, the code for constructing our Bloch sphere with one vector, one state, and a single data point is: In [12]: pnt = [1/np.sqrt(3),1/np.sqrt(3),1/np.sqrt(3)] 87 In [16]: b.show() where we have removed the extra show() commands. Replacing b=Bloch() with b=Bloch3d() in the above code generates the following 3D Bloch sphere. We can also plot multiple points, vectors, and states at the same time by passing list or arrays instead of individual elements. Before giving an example, we can use the clear() command to remove the current data from our Bloch sphere instead of creating a new instance: In [17]: b.clear() In [18]: b.show() 88 Now on the same Bloch sphere, we can plot the three states associated with the x, y, and z directions: In [19]: x = (basis(2,0)+(1+0j)*basis(2,1)).unit() In [20]: y = (basis(2,0)+(0+1j)*basis(2,1)).unit() In [21]: z = (basis(2,0)+(0+0j)*basis(2,1)).unit() In [23]: b.show() ## a similar method works for adding vectors: In [24]: b.clear() In [25]: vec = [[1,0,0],[0,1,0],[0,0,1]] 89 In [27]: b.show() Adding multiple points to the Bloch sphere works slightly differently than adding multiple states or vectors. 
For example, lets add a set of 20 points around the equator (after calling clear()): In [28]: xp = [np.cos(th) for th in np.linspace(0, 2*pi, 20)] In [29]: yp = [np.sin(th) for th in np.linspace(0, 2*pi, 20)] In [30]: zp = np.zeros(20) In [31]: pnts = [xp, yp, zp] In [33]: b.show() 90 Notice that, in contrast to states or vectors, each point remains the same color as the initial point. This is because adding multiple data points using the add_points function is interpreted, by default, to correspond to a single data point (single qubit state) plotted at different times. This is very useful when visualizing the dynamics of a qubit. An example of this is given in the example . If we want to plot additional qubit states we can call In [34]: xz = np.zeros(20) In [35]: yz = [np.sin(th) for th in np.linspace(0, pi, 20)] In [36]: zz = [np.cos(th) for th in np.linspace(0, pi, 20)] In [38]: b.show() 91 The color and shape of the data points is varied automatically by the Bloch class. Notice how the color and point markers change for each set of data. Again, we have had to call add_points twice because adding more than one set of multiple data points is not supported by the add_points function. What if we want to vary the color of our points. We can tell the qutip.Bloch class to vary the color of each point according to the colors listed in the b.point_color list (see Configuring the Bloch sphere below). Again after clear(): In [39]: xp = [np.cos(th) for th in np.linspace(0, 2*pi, 20)] In [40]: yp = [sin(th) for th in np.linspace(0, 2*pi, 20)] In [41]: zp = np.zeros(20) In [42]: pnts = [xp, yp, zp] In [43]: b.add_points(pnts,'m') # <-- add a 'm' string to signify 'multi' colored points In [44]: b.show() Now, the data points cycle through a variety of predefined colors. Now lets add another set of points, but this time we want the set to be a single color, representing say a qubit going from the |up state to the |down state in the y-z plane: In [45]: xz = np.zeros(20) In [46]: yz = [np.sin(th) for th in np.linspace(0, pi ,20)] In [47]: zz = [np.cos(th) for th in np.linspace(0, pi, 20)] In [48]: b.add_points([xz, yz, zz]) # no 'm' In [49]: b.show() 92 Again, the same plot can be generated using the qutip.Bloch3d class by replacing Bloch with Bloch3d: A more slick way of using this multi color feature is also given in the example, where we set the color of the markers as a function of time. Differences Between Bloch and Bloch3d While in general the Bloch and Bloch3d classes are interchangeable, there are some important differences to consider when choosing between them. The Bloch class uses Matplotlib to generate figures. As such, the data plotted on the sphere is in reality just a 2D object. In contrast the Bloch3d class uses the 3D rendering engine from VTK via mayavi to generate the sphere and the included data. In this sense the Bloch3d class is much more advanced, as objects are rendered in 3D leading to a higher quality figure. 93 Only the Bloch class can be embedded in a Matplotlib figure window. Thus if you want to combine a Bloch sphere with another figure generated in QuTiP, you can not use Bloch3d. Of course you can always post-process your figures using other software to get the desired result. Due to limitations in the rendering engine, the Bloch3d class does not support LaTex for text. Again, you can get around this by post-processing. The user customizable attributes for the Bloch and Bloch3d classes are not identical. 
Therefore, if you change the properties of one of the classes, these changes will cause an exception if the class is switched. ## Configuring the Bloch sphere Bloch Class Options At the end of the last section we saw that the colors and marker shapes of the data plotted on the Bloch sphere are automatically varied according to the number of points and vectors added. But what if you want a different choice of color, or you want your sphere to be purple with different axes labels? Well then you are in luck as the Bloch class has 22 attributes which one can control. Assuming b=Bloch(): Attribute Function Default Setting b.axes Matplotlib axes instance for animations. Set by None axes keyword arg. b.fig User supplied Matplotlib Figure instance. Set by None fig keyword arg. b.font_color Color of fonts black b.font_size Size of fonts 20 b.frame_alpha Transparency of wireframe 0.1 b.frame_color Color of wireframe gray b.frame_width Width of wireframe 1 b.point_color List of colors for Bloch point markers to cycle [b,r,g,#CC6600] through b.point_marker List of point marker shapes to cycle through [o,s,d,^] b.point_size List of point marker sizes (not all markers look the [55,62,65,75] same size when plotted) b.sphere_alpha Transparency of Bloch sphere 0.2 b.sphere_color Color of Bloch sphere #FFDDDD b.size Sets size of figure window [7,7] (700x700 pixels) b.vector_color List of colors for Bloch vectors to cycle through [g,#CC6600,b,r] b.vector_width Width of Bloch vectors 4 b.view Azimuthal and Elevation viewing angles [-60,30] b.xlabel Labels for x-axis [$x$,] +x and -x (labels use LaTeX) b.xlpos Position of x-axis labels [1.1,-1.1] b.ylabel Labels for y-axis [$y$,] +y and -y (labels use LaTeX) b.ylpos Position of y-axis labels [1.2,-1.2] b.zlabel Labels for z-axis [$left|0\right>$,$left|1\right>$] +z and -z (la bels use LaTeX) b.zlpos Position of z-axis labels [1.2,-1.2] Bloch3d Class Options The Bloch3d sphere is also customizable. Note however that the attributes for the Bloch3d class are not in oneto-one correspondence to those of the Bloch class due to the different underlying rendering engines. Assuming b=Bloch3d(): 94 Attribute b.fig Function User supplied Mayavi Figure instance. Set by fig keyword arg. b.font_color Color of fonts b.font_scale Scale of fonts b.frame Draw wireframe for sphere? 
b.frame_alpha Transparency of wireframe b.frame_color Color of wireframe b.frame_num Number of wireframe elements to draw b.point_color List of colors for Bloch point markers to cycle through b.point_mode Type of point markers to draw b.point_size Size of points b.sphere_alpha Transparency of Bloch sphere b.sphere_color Color of Bloch sphere b.size Sets size of figure window b.vector_color List of colors for Bloch vectors to cycle through b.vector_width Width of Bloch vectors b.view Azimuthal and Elevation viewing angles b.xlabel Labels for x-axis b.xlpos Position of x-axis labels b.ylabel Labels for y-axis b.ylpos Position of y-axis labels b.zlabel Labels for z-axis b.zlpos Position of z-axis labels These properties can also be accessed via the print command: Default Setting None black 0.08 True 0.05 gray 8 0.005 [r, g, b, y] sphere 0.075 0.1 #808080 [500,500] (500x500 pixels) [r, g, b, y] 3 [45,65 ] [|x>, ] +x and -x [1.07,-1.07] [$y$,] +y and -y [1.07,-1.07] [|0>, |1>] +z and -z [1.07,-1.07] In [50]: b = Bloch() In [51]: print(b) Bloch data: ----------Number of points: 0 Number of vectors: 0 Bloch sphere properties: -----------------------font_color: black font_size: 20 frame_alpha: 0.2 frame_color: gray frame_width: 1 point_color: ['b', 'r', 'g', '#CC6600'] point_marker: ['o', 's', 'd', '^'] point_size: [25, 32, 35, 45] sphere_alpha: 0.2 sphere_color: #FFDDDD figsize: [5, 5] vector_color: ['g', '#CC6600', 'b', 'r'] vector_width: 3 vector_style: -|> vector_mutation: 20 view: [-60, 30] xlabel: ['$x$', ''] xlpos: [1.2, -1.2] ylabel: ['$y$', ''] ylpos: [1.2, -1.2] zlabel: ['$\\left|0\\right>$', '$\\left|1\\right>$'] zlpos: [1.2, -1.2] 95 ## Animating with the Bloch sphere The Bloch class was designed from the outset to generate animations. To animate a set of vectors or data points the basic idea is: plot the data at time t1, save the sphere, clear the sphere, plot data at t2,... The Bloch sphere will automatically number the output file based on how many times the object has been saved (this is stored in b.savenum). The easiest way to animate data on the Bloch sphere is to use the save() method and generate a series of images to convert into an animation. However, as of Matplotlib version 1.1, creating animations is built-in. We will demonstrate both methods by looking at the decay of a qubit on the bloch sphere. Example: Qubit Decay The code for calculating the expectation values for the Pauli spin operators of a qubit decay is given below. This code is common to both animation examples. 
from qutip import * from scipy import * def qubit_integrate(w, theta, gamma1, gamma2, psi0, tlist): # operators and the hamiltonian sx = sigmax(); sy = sigmay(); sz = sigmaz(); sm = sigmam() H = w * (cos(theta) * sz + sin(theta) * sx) # collapse operators c_op_list = [] n_th = 0.5 # temperature rate = gamma1 * (n_th + 1) if rate > 0.0: c_op_list.append(sqrt(rate) * sm) rate = gamma1 * n_th if rate > 0.0: c_op_list.append(sqrt(rate) * sm.dag()) rate = gamma2 if rate > 0.0: c_op_list.append(sqrt(rate) * sz) ## # evolve and calculate expectation values output = mesolve(H, psi0, tlist, c_op_list, [sx, sy, sz]) return output.expect[0], output.expect[1], output.expect[2] ## calculate the dynamics w = 1.0 * 2 * pi # qubit angular frequency theta = 0.2 * pi # qubit angle from sigma_z axis (toward sigma_x axis) gamma1 = 0.5 # qubit relaxation rate gamma2 = 0.2 # qubit dephasing rate # initial state a = 1.0 psi0 = (a* basis(2,0) + (1-a)*basis(2,1))/(sqrt(a**2 + (1-a)**2)) tlist = linspace(0,4,250) #expectation values for ploting sx, sy, sz = qubit_integrate(w, theta, gamma1, gamma2, psi0, tlist) ## Generating Images for Animation An example of generating images for generating an animation outside of Python is given below: b = Bloch() b.vector_color = ['r'] b.view = [-40,30] for i in range(len(sx)): b.clear() b.save(dirc='temp') #saving images to temp directory in current working directory 96 ## Directly Generating an Animation Important: Generating animations directly from Matplotlib requires installing either mencoder or ffmpeg. While either choice works on linux, it is best to choose ffmpeg when running on the Mac. If using macports just do: sudo port install ffmpeg. The code to directly generate an mp4 movie of the Qubit decay is as follows: from pylab import * import matplotlib.animation as animation from mpl_toolkits.mplot3d import Axes3D fig = figure() ax = Axes3D(fig,azim=-40,elev=30) sphere = Bloch(axes=ax) def animate(i): sphere.clear() sphere.make_sphere() return ax def init(): sphere.vector_color = ['r'] return ax ani = animation.FuncAnimation(fig, animate, np.arange(len(sx)), init_func=init, blit=True, repeat=False) ani.save('bloch_sphere.mp4', fps=20, clear_temp=True) ## 3.10 Visualization of quantum states and processes Visualization is often an important complement to a simulation of a quantum mechanical system. The first method of visualization that come to mind might be to plot the expectation values of a few selected operators. But on top of that, it can often be instructive to visualize for example the state vectors or density matices that describe the state of the system, or how the state is transformed as a function of time (see process tomography below). In this section we demonstrate how QuTiP and matplotlib can be used to perform a few types of visualizations that often can provide additional understanding of quantum system. ## Fock-basis probability distribution In quantum mechanics probability distributions plays an important role, and as in statistics, the expectation values computed from a probability distribution does not reveal the full story. For example, consider an quantum harmonic oscillator mode with Hamiltonian = , which is in a state described by its density matrix , and which on average is occupied by two photons, Tr[ ] = 2. Given this information we cannot say whether the oscillator is in a Fock state, a thermal state, a coherent state, etc. 
By visualizing the photon distribution in the Fock state basis important clues about the underlying state can be obtained. One convenient way to visualize a probability distribution is to use histograms. Consider the following histogram visualization of the number-basis probability distribution, which can be obtained from the diagonal of the density matrix, for a few possible oscillator states with on average occupation of two photons. First we generate the density matrices for the coherent, thermal and fock states. 97 In [1]: N = 20 In [2]: rho_coherent = coherent_dm(N, np.sqrt(2)) In [3]: rho_thermal = thermal_dm(N, 2) In [4]: rho_fock = fock_dm(N, 2) ## Next, we plot histograms of the diagonals of the density matrices: In [5]: fig, axes = plt.subplots(1, 3, figsize=(12,3)) In [6]: bar0 = axes[0].bar(np.arange(0, N)-.5, rho_coherent.diag()) In [7]: lbl0 = axes[0].set_title("Coherent state") In [8]: lim0 = axes[0].set_xlim([-.5, N]) In [9]: bar1 = axes[1].bar(np.arange(0, N)-.5, rho_thermal.diag()) In [10]: lbl1 = axes[1].set_title("Thermal state") In [11]: lim1 = axes[1].set_xlim([-.5, N]) In [12]: bar2 = axes[2].bar(np.arange(0, N)-.5, rho_fock.diag()) In [13]: lbl2 = axes[2].set_title("Fock state") In [14]: lim2 = axes[2].set_xlim([-.5, N]) In [15]: plt.show() All these states correspond to an average of two photons, but by visualizing the photon distribution in Fock basis the differences between these states are easily appreciated. One frequently need to visualize the Fock-distribution in the way described above, so QuTiP provides a convenience function for doing this, see qutip.visualization.plot_fock_distribution, and the following example: In [16]: fig, axes = plt.subplots(1, 3, figsize=(12,3)) In [17]: plot_fock_distribution(rho_coherent, fig=fig, ax=axes[0], title="Coherent state"); In [18]: plot_fock_distribution(rho_thermal, fig=fig, ax=axes[1], title="Thermal state"); In [19]: plot_fock_distribution(rho_fock, fig=fig, ax=axes[2], title="Fock state"); In [20]: fig.tight_layout() In [21]: plt.show() 98 Quasi-probability distributions The probability distribution in the number (Fock) basis only describes the occupation probabilities for a discrete set of states. A more complete phase-space probability-distribution-like function for harmonic modes are the Wigner and Husumi Q-functions, which are full descriptions of the quantum state (equivalent to the density matrix). These are called quasi-distribution functions because unlike real probability distribution functions they can for example be negative. In addition to being more complete descriptions of a state (compared to only the occupation probabilities plotted above), these distributions are also great for demonstrating if a quantum state is quantum mechanical, since for example a negative Wigner function is a definite indicator that a state is distinctly nonclassical. Wigner function In QuTiP, the Wigner function for a harmonic mode can be calculated with the function qutip.wigner.wigner. It takes a ket or a density matrix as input, together with arrays that define the ranges of the phase-space coordinates (in the x-y plane). In the following example the Wigner functions are calculated and plotted for the same three states as in the previous section. 
In [22]: xvec = np.linspace(-5,5,200) In [23]: W_coherent = wigner(rho_coherent, xvec, xvec) In [24]: W_thermal = wigner(rho_thermal, xvec, xvec) In [25]: W_fock = wigner(rho_fock, xvec, xvec) In [26]: # plot the results In [27]: fig, axes = plt.subplots(1, 3, figsize=(12,3)) In [28]: cont0 = axes[0].contourf(xvec, xvec, W_coherent, 100) In [29]: lbl0 = axes[0].set_title("Coherent state") In [30]: cont1 = axes[1].contourf(xvec, xvec, W_thermal, 100) In [31]: lbl1 = axes[1].set_title("Thermal state") In [32]: cont0 = axes[2].contourf(xvec, xvec, W_fock, 100) In [33]: lbl2 = axes[2].set_title("Fock state") In [34]: plt.show() 99 ## Custom Color Maps The main objective when plotting a Wigner function is to demonstrate that the underlying state is nonclassical, as indicated by negative values in the Wigner function. Therefore, making these negative values stand out in a figure is helpful for both analysis and publication purposes. Unfortunately, all of the color schemes used in Matplotlib (or any other plotting software) are linear colormaps where small negative values tend to be near the same color as the zero values, and are thus hidden. To fix this dilemma, QuTiP includes a nonlinear colormap function qutip.visualization.wigner_cmap that colors all negative values differently than positive or zero values. Below is a demonstration of how to use this function in your Wigner figures: In [35]: import matplotlib as mpl In [36]: from matplotlib import cm In [37]: psi = (basis(10, 0) + basis(10, 3) + basis(10, 9)).unit() In [38]: xvec = np.linspace(-5, 5, 500) In [39]: W = wigner(psi, xvec, xvec) In [40]: wmap = wigner_cmap(W) ## In [41]: nrm = mpl.colors.Normalize(-W.max(), W.max()) In [42]: fig, axes = plt.subplots(1, 2, figsize=(10, 4)) In [43]: plt1 = axes[0].contourf(xvec, xvec, W, 100, cmap=cm.RdBu, norm=nrm) In [44]: axes[0].set_title("Standard Colormap"); In [45]: cb1 = fig.colorbar(plt1, ax=axes[0]) In [46]: plt2 = axes[1].contourf(xvec, xvec, W, 100, cmap=wmap) ## In [47]: axes[1].set_title("Wigner Colormap"); In [48]: cb2 = fig.colorbar(plt2, ax=axes[1]) In [49]: fig.tight_layout() In [50]: plt.show() 100 Husimi Q-function The Husimi Q function is, like the Wigner function, a quasiprobability distribution for harmonic modes. It is defined as () = 1 || where | is a coherent state and = + . In QuTiP, the Husimi Q function can be computed given a state ket or density matrix using the function qutip.wigner.qfunc, as demonstrated below. In [51]: Q_coherent = qfunc(rho_coherent, xvec, xvec) In [52]: Q_thermal = qfunc(rho_thermal, xvec, xvec) In [53]: Q_fock = qfunc(rho_fock, xvec, xvec) In [54]: fig, axes = plt.subplots(1, 3, figsize=(12,3)) In [55]: cont0 = axes[0].contourf(xvec, xvec, Q_coherent, 100) In [56]: lbl0 = axes[0].set_title("Coherent state") In [57]: cont1 = axes[1].contourf(xvec, xvec, Q_thermal, 100) In [58]: lbl1 = axes[1].set_title("Thermal state") In [59]: cont0 = axes[2].contourf(xvec, xvec, Q_fock, 100) In [60]: lbl2 = axes[2].set_title("Fock state") In [61]: plt.show() 101 Visualizing operators Sometimes, it may also be useful to directly visualizing the underlying matrix representation of an operator. The density matrix, for example, is an operator whose elements can give insights about the state it represents, but one might also be interesting in plotting the matrix of an Hamiltonian to inspect the structure and relative importance of various elements. 
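Before turning to the dedicated QuTiP helpers described next, note that a quick, low-tech way to inspect the structure of an operator is to extract its dense matrix with the .full() method and image the element magnitudes with plain matplotlib. A minimal sketch (reusing the same Jaynes-Cummings-type Hamiltonian that is constructed in the example below):

import matplotlib.pyplot as plt
from qutip import tensor, destroy, qeye, sigmax

N = 5
a = tensor(destroy(N), qeye(2))
b = tensor(qeye(N), destroy(2))
sx = tensor(qeye(N), sigmax())
H = a.dag() * a + sx - 0.5 * (a * b.dag() + a.dag() * b)

# image the absolute values of the matrix elements of H
plt.matshow(abs(H.full()))
plt.colorbar()
plt.show()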
QuTiP offers a few functions for quickly visualizing matrix data in the form of histograms, qutip.visualization.matrix_histogram and qutip.visualization.matrix_histogram_complex, and as Hinton diagrams of weighted squares, qutip.visualization.hinton. These functions take a qutip.Qobj.Qobj as first argument, and optional arguments to, for example, set the axis labels and figure title (see the functions' documentation for details).

For example, to illustrate the use of qutip.visualization.matrix_histogram, let's visualize the Jaynes-Cummings Hamiltonian:

In [62]: N = 5

In [63]: a = tensor(destroy(N), qeye(2))

In [64]: b = tensor(qeye(N), destroy(2))

In [65]: sx = tensor(qeye(N), sigmax())

In [66]: H = a.dag() * a + sx - 0.5 * (a * b.dag() + a.dag() * b)

In [67]: # visualize H

In [68]: lbls_list = [[str(d) for d in range(N)], ["u", "d"]]

In [69]: xlabels = []

In [70]: for inds in tomography._index_permutations([len(lbls) for lbls in lbls_list]):
   ....:     xlabels.append("".join([lbls_list[k][inds[k]]
   ....:                             for k in range(len(lbls_list))]))
   ....:

In [71]: fig, ax = matrix_histogram(H, xlabels, xlabels, limits=[-4,4])

In [72]: ax.view_init(azim=-55, elev=45)

In [73]: plt.show()

Similarly, we can use the function qutip.visualization.hinton, which is used below to visualize the corresponding steady-state density matrix:

In [74]: rho_ss = steadystate(H, [np.sqrt(0.1) * a, np.sqrt(0.4) * b.dag()])

In [75]: fig, ax = hinton(rho_ss)  # xlabels=xlabels, ylabels=xlabels

In [76]: plt.show()

## Quantum process tomography

Quantum process tomography (QPT) is a useful technique for characterizing experimental implementations of quantum gates involving a small number of qubits. It can also be a useful theoretical tool that can give insight into how a process transforms states, and it can be used for example to study how noise or other imperfections deteriorate a gate. Whereas a fidelity or distance measure can give a single number that indicates how far from ideal a gate is, a quantum process tomography analysis can give detailed information about exactly what kind of errors various imperfections introduce.

The idea is to construct a transformation matrix for a quantum process (for example a quantum gate) that describes how the density matrix of a system is transformed by the process. We can then decompose the transformation in some operator basis that represents well-defined and easily interpreted transformations of the input states.

To see how this works (see e.g. [Moh08] for more details), consider a process that is described by a quantum map $\epsilon(\rho_{\rm in}) = \rho_{\rm out}$, which can be written

    $\epsilon(\rho_{\rm in}) = \rho_{\rm out} = \sum_{i}^{N^2} A_i \rho_{\rm in} A_i^\dagger,$    (3.20)

where $N$ is the number of states of the system (that is, $\rho$ is represented by an $[N \times N]$ matrix). Given an orthogonal operator basis of our choice $\{B_i\}_{i}^{N^2}$, which satisfies $\mathrm{Tr}[B_i^\dagger B_j] = N \delta_{ij}$, we can write the map as

    $\epsilon(\rho_{\rm in}) = \rho_{\rm out} = \sum_{mn} \chi_{mn} B_m \rho_{\rm in} B_n^\dagger,$    (3.21)

where $\chi_{mn} = \sum_{i} b_{im} b_{in}^*$ and $A_i = \sum_m b_{im} B_m$. Here, the matrix $\chi$ is the transformation matrix we are after, since it describes how much $B_m \rho_{\rm in} B_n^\dagger$ contributes to $\rho_{\rm out}$.

In a numerical simulation of a quantum process we usually do not have access to the quantum map in the form of Eq. (3.20). Instead, what we usually can do is to calculate the propagator $U$ for the density matrix in superoperator form, using for example the QuTiP function qutip.propagator.propagator. We can then write

    $U \tilde{\rho}_{\rm in} = \tilde{\rho}_{\rm out},$

where $\tilde{\rho}$ is the vector representation of the density matrix $\rho$. If we write Eq. (3.21) in superoperator form as well we obtain

    $\tilde{\rho}_{\rm out} = \sum_{mn} \chi_{mn} \tilde{B}_{mn} \tilde{\rho}_{\rm in} = U \tilde{\rho}_{\rm in},$

where $\tilde{B}_{mn}$ is the superoperator representation of the map $\rho \mapsto B_m \rho B_n^\dagger$, so we can identify

    $U = \sum_{mn} \chi_{mn} \tilde{B}_{mn}.$

Now this is a linear equation system for the $N^2 \times N^2$ matrix elements in $\chi$. We can solve it by writing $\chi$ and the superoperator propagator $U$ as $[N^4]$ vectors, and likewise writing the superoperator products $\tilde{B}_{mn}$ as an $[N^4 \times N^4]$ matrix $M$:

    $U_I = \sum_J M_{IJ} \chi_J,$

with the solution

    $\chi = M^{-1} U.$

Note that to obtain $\chi$ with this method we have to construct a matrix $M$ with a size that is the square of the size of the superoperator for the system. Obviously, this scales very badly with increasing system size, but the method can still be very useful for small systems (such as systems comprised of a small number of coupled qubits).

Implementation in QuTiP

In QuTiP, the procedure described above is implemented in the function qutip.tomography.qpt, which returns the $\chi$ matrix given a density matrix propagator. To illustrate how to use this function, let's consider the $i$-SWAP gate for two qubits. In QuTiP the function qutip.gates.iswap generates the unitary transformation for the state kets:

In [77]: U_psi = iswap()

To be able to use this unitary transformation matrix as input to the function qutip.tomography.qpt, we first need to convert it to a transformation matrix for the corresponding density matrix:

In [78]: U_rho = spre(U_psi) * spost(U_psi.dag())

Next, we construct a list of operators that define the basis $\{B_i\}$ in the form of a list of operators for each composite system. At the same time, we also construct a list of corresponding labels that will be used when plotting the $\chi$ matrix.

In [79]: op_basis = [[qeye(2), sigmax(), sigmay(), sigmaz()]] * 2

In [80]: op_label = [["i", "x", "y", "z"]] * 2

We are now ready to compute $\chi$ using qutip.tomography.qpt, and to plot it using qutip.tomography.qpt_plot_combined:

In [81]: chi = qpt(U_rho, op_basis)

In [82]: fig = qpt_plot_combined(chi, op_label, r'$i$SWAP')

In [83]: plt.show()

For a slightly more advanced example, where the density matrix propagator is calculated from the dynamics of a system defined by its Hamiltonian and collapse operators using the function qutip.propagator.propagator, see the notebook Time-dependent master equation: Landau-Zener transitions in the tutorials section of the QuTiP web site.

## 3.11 Parallel computation

Parallel map and parallel for-loop

Often one is interested in the output of a given function as a single parameter is varied. For instance, we can calculate the steady-state response of our system as the driving frequency is varied. In cases such as this, where each iteration is independent of the others, we can speed up the calculation by performing the iterations in parallel.

In QuTiP, parallel computations may be performed using the qutip.parallel.parallel_map function or the qutip.parallel.parfor (parallel-for-loop) function. To use these functions we need to define a function of one or more variables, and the range over which one of these variables is to be evaluated. For example:

In [1]: def func1(x): return x, x**2, x**3

In [2]: a, b, c = parfor(func1, range(10))

In [3]: print(a)
[0 1 2 3 4 5 6 7 8 9]

In [4]: print(b)
[ 0  1  4  9 16 25 36 49 64 81]

In [5]: print(c)
[  0   1   8  27  64 125 216 343 512 729]

or

In [6]: result = parallel_map(func1, range(10))

In [7]: result_array = np.array(result)

In [8]: print(result_array[:, 0])  # == a
[0 1 2 3 4 5 6 7 8 9]

In [9]: print(result_array[:, 1])  # == b
[ 0  1  4  9 16 25 36 49 64 81]

In [10]: print(result_array[:, 2])  # == c
[  0   1   8  27  64 125 216 343 512 729]

Note that the return values are arranged differently for the qutip.parallel.parallel_map and the qutip.parallel.parfor functions, as illustrated below.
In particular, the return value of qutip.parallel.parallel_map is not enforced to be NumPy arrays, which can avoid unnecessary copying if all that is needed is to iterate over the resulting list: In [11]: result = parfor(func1, range(5)) In [12]: print(result) [array([0, 1, 2, 3, 4]), array([ 0, 1, 4, 9, 16]), array([ 0, 1, 8, 27, 64])] ## In [13]: result = parallel_map(func1, range(5)) In [14]: print(result) [(0, 0, 0), (1, 1, 1), (2, 4, 8), (3, 9, 27), (4, 16, 64)] ## The qutip.parallel.parallel_map and qutip.parallel.parfor functions are not limited to just numbers, but also works for a variety of outputs: In [15]: def func2(x): return x, Qobj(x), 'a' * x In [16]: a, b, c = parfor(func2, range(5)) In [17]: print(a) [0 1 2 3 4] In [18]: print(b) [ Quantum object: dims = [[1], [1]], shape = [1, 1], type = oper, isherm = True Qobj data = [[ 0.]] Quantum object: dims = [[1], [1]], shape = [1, 1], type = oper, isherm = True 106 Qobj data = [[ 1.]] Quantum object: dims = [[1], [1]], shape = [1, 1], type = oper, isherm = True Qobj data = [[ 2.]] Quantum object: dims = [[1], [1]], shape = [1, 1], type = oper, isherm = True Qobj data = [[ 3.]] Quantum object: dims = [[1], [1]], shape = [1, 1], type = oper, isherm = True Qobj data = [[ 4.]]] In [19]: print(c) ['' 'a' 'aa' 'aaa' 'aaaa'] ## Note: New in QuTiP 3. One can also define functions with multiple input arguments and even keyword arguments. Here the qutip.parallel.parallel_map and qutip.parallel.parfor functions behaves differently: While qutip.parallel.parallel_map only iterate over the values arguments, the qutip.parallel.parfor function simultaneously iterates over all arguments: In [20]: def sum_diff(x, y, z=0): return x + y, x - y, z In [21]: parfor(sum_diff, [1, 2, 3], [4, 5, 6], z=5.0) Out[21]: [array([5, 7, 9]), array([-3, -3, -3]), array([ 5., 5., 5.])] Out[22]: [(array([5, 6, 7]), array([-3, -4, -5]), 5.0), (array([6, 7, 8]), array([-2, -3, -4]), 5.0), (array([7, 8, 9]), array([-1, -2, -3]), 5.0)] Note that the keyword arguments can be anything you like, but the keyword values are not iterated over. The keyword argument num_cpus is reserved as it sets the number of CPUs used by parfor. By default, this value is set to the total number of physical processors on your system. You can change this number to a lower value, however setting it higher than the number of CPUs will cause a drop in performance. In argument, so there is no special reserved keyword arguments. The qutip.parallel.parallel_map function also supports progressbar, using the keyword argument progress_bar which can be set to True or to an instance of qutip.ui.progressbar.BaseProgressBar. There is a function called qutip.parallel.serial_map that works as a non-parallel drop-in replacement for qutip.parallel.parallel_map, which allows easy switching between serial and parallel computation. In [23]: import time In [24]: def func(x): time.sleep(1) In [25]: result = parallel_map(func, range(50), progress_bar=True) 10.0%. Run time: 2.01s. Est. time left: 00:00:00:18 20.0%. Run time: 3.01s. Est. time left: 00:00:00:12 30.0%. Run time: 4.02s. Est. time left: 00:00:00:09 40.0%. Run time: 5.02s. Est. time left: 00:00:00:07 50.0%. Run time: 7.02s. Est. time left: 00:00:00:07 60.0%. Run time: 8.02s. Est. time left: 00:00:00:05 70.0%. Run time: 9.02s. Est. time left: 00:00:00:03 80.0%. Run time: 10.02s. Est. time left: 00:00:00:02 90.0%. Run time: 12.02s. Est. time left: 00:00:00:01 100.0%. Run time: 13.02s. Est. 
time left: 00:00:00:00 Total run time: 13.07s 107 Parallel processing is useful for repeated tasks such as generating plots corresponding to the dynamical evolution of your system, or simultaneously simulating different parameter configurations. IPython-based parallel_map Note: New in QuTiP 3. When QuTiP is used with IPython interpreter, there is an alternative parallel for-loop implementation in the QuTiP module qutip.ipynbtools, see qutip.ipynbtools.parallel_map. The advantage of this parallel_map implementation is based on IPythons powerful framework for parallelization, so the compute processes are not confined to run on the same host as the main process. ## 3.12 Saving QuTiP Objects and Data Sets With time-consuming calculations it is often necessary to store the results to files on disk, so it can be postprocessed and archived. In QuTiP there are two facilities for storing data: Quantum objects can be stored to files and later read back as python pickles, and numerical data (vectors and matrices) can be exported as plain text files in for example CSV (comma-separated values), TSV (tab-separated values), etc. The former method is preferred when further calculations will be performed with the data, and the latter when the calculations are completed and data is to be imported into a post-processing tool (e.g. for generating figures). To store and load arbitrary QuTiP related objects (qutip.Qobj, qutip.solver.Result, etc.) there are two functions: qutip.fileio.qsave and qutip.fileio.qload. The function qutip.fileio.qsave takes an arbitrary object as first parameter and an optional filename as second parameter (default filename is qutip_data.qu). The filename extension is always .qu. The function qutip.fileio.qload takes a mandatory filename as first argument and loads and returns the objects in the file. To illustrate how these functions can be used, consider a simple calculation of the steadystate of the harmonic oscillator: In [1]: a = destroy(10); H = a.dag() * a ; c_ops = [sqrt(0.5) * a, sqrt(0.25) * a.dag()] In [2]: rho_ss = steadystate(H, c_ops) The steadystate density matrix rho_ss is an instance of qutip.Qobj. It can be stored to a file steadystate.qu using In [4]: ls *.qu density_matrix_vs_time.qu ## and it can later be loaded again, and used in further calculations: Quantum object: dims = [[10], [10]], shape = [10, 10], type = oper, isHerm = True In [6]: a = destroy(10) In [7]: expect(a.dag() * a, rho_ss_loaded) Out[7]: 0.9902248289345064 The nice thing about the qutip.fileio.qsave and qutip.fileio.qload functions is that almost any object can be stored and load again later on. We can for example store a list of density matrices as returned by qutip.mesolve: 108 ## In [8]: a = destroy(10); H = a.dag() * a ; c_ops = [sqrt(0.5) * a, sqrt(0.25) * a.dag()] In [9]: psi0 = rand_ket(10) In [10]: times = np.linspace(0, 10, 10) In [11]: dm_list = mesolve(H, psi0, times, c_ops, []) In [12]: qsave(dm_list, 'density_matrix_vs_time') And it can then be loaded and used again, for example in an other program: Result object with mesolve data. -------------------------------states = True num_collapse = 0 In [14]: a = destroy(10) In [15]: expect(a.dag() * a, dm_list_loaded.states) Out[15]: array([ 5.47236604, 4.25321934, 3.40221147, 2.78459863, 1.99152365, 1.739766 , 1.55173281, 1.41108289, 2.32939541, 1.30577149]) The qutip.fileio.qsave and qutip.fileio.qload are great, but the file format used is only understood by QuTiP (python) programs. 
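Before moving on to plain-text export, here is a compact round-trip summary of the pickle-based storage described above (a sketch; the file name 'steadystate' is the one used in the example, and qsave appends the .qu extension automatically):

import numpy as np
from qutip import destroy, steadystate, qsave, qload, expect

a = destroy(10)
H = a.dag() * a
c_ops = [np.sqrt(0.5) * a, np.sqrt(0.25) * a.dag()]

rho_ss = steadystate(H, c_ops)
qsave(rho_ss, 'steadystate')             # writes steadystate.qu to the working directory
rho_loaded = qload('steadystate')        # reads it back as a Qobj
print(expect(a.dag() * a, rho_loaded))   # same mean occupation as before saving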
When data must be exported to other programs the preferred method is to store the data in the commonly used plain-text file formats. With the QuTiP functions numpy arrays and matrices to files on disk using a deliminator-separated value format (for example commaseparated values CSV). Almost any program can handle this file format. The qutip.fileio.file_data_store takes two mandatory and three optional arguments: >>> file_data_store(filename, data, numtype="complex", numformat="decimal", sep=",") where filename is the name of the file, data is the data to be written to the file (must be a numpy array), numtype (optional) is a flag indicating numerical type that can take values complex or real, numformat (optional) specifies the numerical format that can take the values exp for the format 1.0e1 and decimal for the format 10.0, and sep (optional) is an arbitrary single-character field separator (usually a tab, space, comma, semicolon, etc.). A common use for the qutip.fileio.file_data_store function is to store the expectation values of a set of operators for a sequence of times, e.g., as returned by the qutip.mesolve function, which is what the following example does: In [16]: a = destroy(10); H = a.dag() * a ; c_ops = [sqrt(0.5) * a, sqrt(0.25) * a.dag()] In [17]: psi0 = rand_ket(10) In [18]: times = np.linspace(0, 100, 100) In [19]: medata = mesolve(H, psi0, times, c_ops, [a.dag() * a, a + a.dag(), -1j * (a - a.dag())]) In [20]: shape(medata.expect) Out[20]: (3, 100) In [21]: shape(times) Out[21]: (100,) In [22]: output_data = np.vstack((times, medata.expect)) 109 ## In [23]: file_data_store('expect.dat', output_data.T) # Note the .T for transpose! In [24]: ls *.dat expect.dat # Generated by QuTiP: 100x4 complex matrix in decimal format [',' separated values]. 0.0000000000+0.0000000000j,3.0955765962+0.0000000000j,2.3114466232+0.0000000000j,-0.2208505556+0.0 1.0101010101+0.0000000000j,2.5836572767+0.0000000000j,0.8657267558+0.0000000000j,-1.7468558726+0.0 2.0202020202+0.0000000000j,2.2083149044+0.0000000000j,-0.8847004889+0.0000000000j,-1.4419458039+0. 3.0303030303+0.0000000000j,1.9242964668+0.0000000000j,-1.4729939509+0.0000000000j,-0.0149042695+0. 4.0404040404+0.0000000000j,1.7075693373+0.0000000000j,-0.6940705865+0.0000000000j,1.0812526557+0.0 5.0505050505+0.0000000000j,1.5416230338+0.0000000000j,0.4773836586+0.0000000000j,1.0153635400+0.00 6.0606060606+0.0000000000j,1.4143168556+0.0000000000j,0.9734025713+0.0000000000j,0.1185429362+0.00 7.0707070707+0.0000000000j,1.3165352694+0.0000000000j,0.5404687852+0.0000000000j,-0.6657847865+0.0 8.0808080808+0.0000000000j,1.2413698337+0.0000000000j,-0.2418480793+0.0000000000j,-0.7102105490+0. In this case we didnt really need to store both the real and imaginary parts, so instead we could use the numtype=real option: In [26]: file_data_store('expect.dat', output_data.T, numtype="real") # Generated by QuTiP: 100x4 real matrix in decimal format [',' separated values]. 0.0000000000,3.0955765962,2.3114466232,-0.2208505556 1.0101010101,2.5836572767,0.8657267558,-1.7468558726 2.0202020202,2.2083149044,-0.8847004889,-1.4419458039 3.0303030303,1.9242964668,-1.4729939509,-0.0149042695 and if we prefer scientific notation we can request that using the numformat=exp option In [28]: file_data_store('expect.dat', output_data.T, numtype="real", numformat="exp") In [29]: !head -n 5 expect.dat # Generated by QuTiP: 100x4 real matrix in exp format [',' separated values]. 
0.0000000000e+00,3.0955765962e+00,2.3114466232e+00,-2.2085055556e-01 1.0101010101e+00,2.5836572767e+00,8.6572675578e-01,-1.7468558726e+00 2.0202020202e+00,2.2083149044e+00,-8.8470048890e-01,-1.4419458039e+00 3.0303030303e+00,1.9242964668e+00,-1.4729939509e+00,-1.4904269545e-02 Loading data previously stored using qutip.fileio.file_data_store (or some other software) is a even easier. Regardless of which deliminator was used, if data was stored as complex or real numbers, if it is in decimal or exponential form, the data can be loaded using the qutip.fileio.file_data_read, which only takes the filename as mandatory argument. In [31]: shape(input_data) Out[31]: (100, 4) In [32]: from pylab import * In [33]: plot(input_data[:,0], input_data[:,1]); # plot the data Out[33]: [<matplotlib.lines.Line2D at 0x10d8f65d0>] 110 (If a particularly obscure choice of deliminator was used it might be necessary to use the optional second argument, for example sep=_ if _ is the deliminator). ## 3.13 Generating Random Quantum States & Operators QuTiP includes a collection of random state generators for simulations, theorem evaluation, and code testing: Function Description rand_ket Random ket-vector rand_dm Random density matrix rand_herm Random Hermitian matrix rand_unitary Random Unitary matrix See the API documentation: Random Operators and States for details. In all cases, these functions can be called with a single parameter that indicates a matrix (rand_dm, rand_herm, rand_unitary), or a 1 vector (rand_ket), should be generated. For example: In [1]: rand_ket(5) Out[1]: Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[-0.37899439-0.03246954j] [-0.09389192-0.30281261j] [-0.41147565-0.20947105j] [-0.41769426-0.02916778j] [-0.54640563+0.26024817j]] or In [2]: rand_herm(5) Out[2]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[-0.29514824+0.j 0.00000000+0.j -0.27781445-0.15337652j -0.35652395-0.05592461j 0.00000000+0.j ] [ 0.00000000+0.j -0.55204452+0.j -0.22293747-0.12925792j -0.09264731+0.20738712j -0.71881796+0.01202871j] [-0.27781445+0.15337652j -0.22293747+0.12925792j 0.00000000+0.j -0.84636559+0.30414702j -0.47088943-0.09313568j] [-0.35652395+0.05592461j -0.09264731-0.20738712j -0.84636559-0.30414702j 111 -0.02792858+0.j -0.39742673-0.09375464j] [ 0.00000000+0.j -0.71881796-0.01202871j -0.47088943+0.09313568j -0.39742673+0.09375464j 0.00000000+0.j ]] In this previous example, we see that the generated Hermitian operator contains a fraction of elements that are identically equal to zero. The number of nonzero elements is called the density and can be controlled by calling any of the random state/operator generators with a second argument between 0 and 1. By default, the density for the operators is 0.75 where as ket vectors are completely dense (1). For example: In [3]: rand_dm(5, 0.5) Out[3]: Quantum object: dims = [[5], [5]], shape = [5, 5], type = oper, isherm = True Qobj data = [[ 0.04892987+0.j 0.00000000+0.j 0.00265679-0.0245355j 0.09885662-0.01638816j 0.00000000+0.j ] [ 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j ] [ 0.00265679+0.0245355j 0.00000000+0.j 0.24585391+0.j 0.01358542+0.04868103j 0.21507082+0.04053822j] [ 0.09885662+0.01638816j 0.00000000+0.j 0.01358542-0.04868103j 0.43862274+0.j 0.01799108+0.05080967j] [ 0.00000000+0.j 0.00000000+0.j 0.21507082-0.04053822j 0.01799108-0.05080967j 0.26659348+0.j ]] ## has roughly half nonzero elements, or equivalently a density of 0.5. 
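Since Qobj.data is a SciPy sparse matrix, the effect of the density argument can be checked directly by counting nonzero elements. A small sketch (the counts fluctuate from run to run because the matrices are random):

from qutip import rand_herm, rand_dm

H = rand_herm(5, 0.5)     # Hermitian operator, roughly half the elements nonzero
rho = rand_dm(5, 0.5)     # density matrix, roughly half the elements nonzero

print(H.data.nnz, "of 25 elements nonzero")
print(rho.data.nnz, "of 25 elements nonzero")
print(rho.tr())           # a valid density matrix still has unit trace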
Warning: In the case of a density matrix, setting the density too low will result in not enough diagonal elements to satisfy () = 1. ## Composite random objects In many cases, one is interested in generating random quantum objects that correspond to composite systems generated using the qutip.tensor.tensor function. Specifying the tensor structure of a quantum object is done using the dims keyword argument in the same fashion as one would do for a qutip.Qobj object: In [4]: rand_dm(4, 0.5, dims=[[2,2], [2,2]]) Out[4]: Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isherm = True Qobj data = [[ 0.30122934 0. 0. 0. ] [ 0. 0. 0. 0. ] [ 0. 0. 0.34938533 0. ] [ 0. 0. 0. 0.34938533]] ## 3.14 Modifying Internal QuTiP Settings User Accessible Parameters In this section we show how to modify a few of the internal parameters used by QuTiP. The settings that can be modified are given in the following table: Setting Description Options auto_herm Automatically calculate the hermic- True / False ity of quantum objects. auto_tidyup Automatically tidyup quantum ob- True / False jects. auto_tidyup_atol Tolerance used by tidyup any float value > 0 atol General tolerance any float value > 0 num_cpus Number of CPUs used for multi- int between 1 and # cpus processing. debug Show debug printouts. True / False 112 ## Example: Changing Settings The two most important settings are auto_tidyup and auto_tidyup_atol as they control whether the small elements of a quantum object should be removed, and what number should be considered as the cut-off tolerance. Modifying these, or any other parameters, is quite simple: >>> qutip.settings.auto_tidyup = False These settings will be used for the current QuTiP session only and will need to be modified again when restarting QuTiP. If running QuTiP from a script file, then place the qutip.setings.xxxx commands immediately after from qutip import * at the top of the script file. If you want to reset the parameters back to their default values then call the reset command: >>> qutip.settings.reset() Persistent Settings When QuTiP is imported, it looks for the file .qutiprc in the users home directory. If this file is found, it will be loaded and overwrite the QuTiP default settings, which allows for persistent changes in the QuTiP settings to be made. A sample .qutiprc file is show below. The syntax is a simple key-value format, where the keys and possible values are described in the table above: # QuTiP Graphics qutip_graphics="YES" # use auto tidyup auto_tidyup=True # detect hermiticity auto_herm=True # use auto tidyup absolute tolerance auto_tidyup_atol=1e-12 # number of cpus num_cpus=4 # debug debug=False 113 CHAPTER FOUR API DOCUMENTATION This chapter contains automatically generated API documentation, including a complete list of QuTiPs public classes and functions. 4.1 Classes Qobj class Qobj(inpt=None, dims=[[], []], shape=[], type=None, isherm=None, fast=False, superrep=None) A class for representing quantum objects, such as quantum operators and states. The Qobj class is the QuTiP representation of quantum operators and state vectors. This class also implements math operations +,-,* between Qobj instances (and / by a C-number), as well as a collection of common operator/state operations. The Qobj constructor optionally takes a dimension list and/or shape list as arguments. Parameters inpt : array_like Data for vector/matrix representation of the quantum object. dims : list Dimensions of object used for tensor products. 
shape : list Shape of underlying data structure (matrix shape). fast : bool Flag for fast qobj creation when running ode solvers. This parameter is used internally only. Attributes data dims shape type superrep isherm iscp istp iscptp isket isbra isoper issuper isoperket isoperbra ## (array_like) Sparse matrix characterizing the quantum object. (list) List of dimensions keeping track of the tensor structure. (list) Shape of the underlying data array. (str) Type of quantum object: bra, ket, oper, operator-ket, operator-bra, or super. (str) Representation used if type is super. One of super (Liouville form) or choi (Choi matrix with tr = dimension). (bool) Indicates if quantum object represents Hermitian operator. (bool) Indicates if the quantum object represents a map, and if that map is completely positive (CP). (bool) Indicates if the quantum object represents a map, and if that map is trace preserving (TP). (bool) Indicates if the quantum object represents a map that is completely positive and trace preserving (CPTP). (bool) Indicates if the quantum object represents a ket. (bool) Indicates if the quantum object represents a bra. (bool) Indicates if the quantum object represents an operator. (bool) Indicates if the quantum object represents a superoperator. (bool) Indicates if the quantum object represents an operator in column vector form. (bool) Indicates if the quantum object represents an operator in row vector form. 115 Methods conj() dag() eigenenergies(sparse=False, sort=low, eigvals=0, tol=0, maxiter=100000) eigenstates(sparse=False, sort=low, eigvals=0, tol=0, maxiter=100000) expm() full() groundstate(sparse=False,tol=0,maxiter=100000) matrix_element(bra, ket) norm(norm=tr, sparse=False, tol=0, maxiter=100000) permute(order) ptrace(sel) sqrtm() tidyup(atol=1e-12) tr() trans() transform(inpt, inverse=False) unit(norm=tr, sparse=False, tol=0, maxiter=100000) ## Conjugate of quantum object. Returns eigenenergies (eigenvalues) of a quantum object. Returns eigenenergies and eigenstates of quantum object. Matrix exponential of quantum object. Returns dense array of quantum object data attribute. Returns eigenvalue and eigenket for the groundstate of a quantum object. Returns the matrix element of operator between bra and ket vectors. Returns norm of a ket or an operator. Returns composite qobj with indices reordered. Returns quantum object for selected dimensions after performing partial trace. Matrix square root of quantum object. Removes small elements from quantum object. Trace of quantum object. Transpose of quantum object. Performs a basis transformation defined by inpt matrix. Returns normalized quantum object. checkherm() Check if the quantum object is hermitian. Returns isherm: bool Returns the new value of isherm property. conj() Conjugate operator of quantum object. dag() diag() Diagonal elements of quantum object. Returns diags: array Returns array of real values if operators is Hermitian, otherwise complex values are returned. eigenenergies(sparse=False, sort=low, eigvals=0, tol=0, maxiter=100000) Eigenenergies of a quantum object. Eigenenergies (eigenvalues) are defined for operators or superoperators only. Parameters sparse : bool Use sparse Eigensolver sort : str Sort eigenvalues low to high, or high to low. eigvals : int Number of requested eigenvalues. Default is all eigenvalues. tol : float Tolerance used by sparse Eigensolver (0=machine precision). The sparse solver may not converge if the tolerance is set too low. 
maxiter : int 116 ## Maximum number of iterations performed by sparse solver (if used). Returns eigvals: array Array of eigenvalues for operator. Notes The sparse eigensolver is much slower than the dense version. Use sparse only if memory requirements demand it. eigenstates(sparse=False, sort=low, eigvals=0, tol=0, maxiter=100000) Eigenstates and eigenenergies. Eigenstates and eigenenergies are defined for operators and superoperators only. Parameters sparse : bool Use sparse Eigensolver sort : str Sort eigenvalues (and vectors) low to high, or high to low. eigvals : int Number of requested eigenvalues. Default is all eigenvalues. tol : float Tolerance used by sparse Eigensolver (0 = machine precision). The sparse solver may not converge if the tolerance is set too low. maxiter : int Maximum number of iterations performed by sparse solver (if used). Returns eigvals : array Array of eigenvalues for operator. eigvecs : array Array of quantum operators representing the oprator eigenkets. Order of eigenkets is determined by order of eigenvalues. Notes The sparse eigensolver is much slower than the dense version. Use sparse only if memory requirements demand it. eliminate_states(states_inds, normalize=False) Creates a new quantum object with states in state_inds eliminated. Parameters states_inds : list of integer The states that should be removed. normalize : True / False Weather or not the new Qobj instance should be normalized (default is False). For Qobjs that represents density matrices or state vectors normalized should probably be set to True, but for Qobjs that represents operators in for example an Hamiltonian, normalize should be False. Returns q : qutip.Qobj A new instance of qutip.Qobj that contains only the states corresponding to indices that are not in state_inds. Note: Experimental. static evaluate(qobj_list, t, args) Evaluate a time-dependent quantum object in list format. For example, qobj_list = [H0, [H1, func_t]] is evaluated to Qobj(t) = H0 + H1 * func_t(t, args) 117 and qobj_list = [H0, [H1, sin(w * t)]] is evaluated to Qobj(t) = H0 + H1 * sin(args[w] * t) Parameters qobj_list : list A nested list of Qobj instances and corresponding time-dependent coefficients. t : float The time for which to evaluate the time-dependent Qobj instance. args : dictionary A dictionary with parameter values required to evaluate the time-dependent Qobj intance. Returns output : Qobj A Qobj instance that represents the value of qobj_list at time t. expm(method=None) Matrix exponential of quantum operator. Input operator must be square. Parameters method : str {dense, sparse, scipy-dense, scipy-sparse} Use set method to use to calculate the matrix exponentiation. The available choices includes dense and sparse for using QuTiPs implementation of expm using dense and sparse matrices, respectively, and scipy-dense and scipy-sparse for using the scipy.linalg.expm (dense) and scipy.sparse.linalg.expm (sparse). If no method is explicitly given a heuristic will be used to try and automatically select the most appropriate solver. Returns oper : qobj Exponentiated quantum operator. Raises TypeError Quantum operator is not square. extract_states(states_inds, normalize=False) Qobj with states in state_inds only. Parameters states_inds : list of integer The states that should be kept. normalize : True / False Weather or not the new Qobj instance should be normalized (default is False). 
For Qobjs that represents density matrices or state vectors normalized should probably be set to True, but for Qobjs that represents operators in for example an Hamiltonian, normalize should be False. Returns q : qutip.Qobj A new instance of qutip.Qobj that contains only the states corresponding to the indices in state_inds. Note: Experimental. full(squeeze=False) Dense array from quantum object. Returns data : array Array of complex data from quantum objects data attribute. groundstate(sparse=False, tol=0, maxiter=100000) Ground state Eigenvalue and Eigenvector. Defined for quantum operators or superoperators only. Parameters sparse : bool Use sparse Eigensolver 118 tol : float Tolerance used by sparse Eigensolver (0 = machine precision). The sparse solver may not converge if the tolerance is set too low. maxiter : int Maximum number of iterations performed by sparse solver (if used). Returns eigval : float Eigenvalue for the ground state of quantum operator. eigvec : qobj Eigenket for the ground state of quantum operator. Notes The sparse eigensolver is much slower than the dense version. Use sparse only if memory requirements demand it. matrix_element(bra, ket) Calculates a matrix element. Gives the matrix element for the quantum object sandwiched between a bra and ket vector. Parameters bra : qobj Quantum object of type bra. ket : qobj Quantum object of type ket. Returns elem : complex Complex valued matrix element. Raises TypeError Can only calculate matrix elements between a bra and ket quantum object. norm(norm=None, sparse=False, tol=0, maxiter=100000) Norm of a quantum object. Default norm is L2-norm for kets and trace-norm for operators. Other ket and operator norms may be specified using the norm and argument. Parameters norm : str Which norm to use for ket/bra vectors: L2 l2, max norm max, or for operators: trace tr, Frobius fro, one one, or max max. sparse : bool Use sparse eigenvalue solver for trace norm. Other norms are not affected by this parameter. tol : float Tolerance for sparse solver (if used) for trace norm. The sparse solver may not converge if the tolerance is set too low. maxiter : int Maximum number of iterations performed by sparse solver (if used) for trace norm. Returns norm : float The requested norm of the operator or state quantum object. Notes The sparse eigensolver is much slower than the dense version. Use sparse only if memory requirements demand it. overlap(state) Overlap between two state vectors. Gives the overlap (scalar product) for the quantum object and state state vector. Parameters state : qobj Quantum object for a state vector of type ket or bra. 119 ## Returns overlap : complex Complex valued overlap. Raises TypeError Can only calculate overlap between a bra and ket quantum objects. permute(order) Permutes a composite quantum object. Parameters order : list/array List specifying new tensor order. Returns P : qobj Permuted quantum object. ptrace(sel) Partial trace of the quantum object. Parameters sel : int/list An int or list of components to keep after partial trace. Returns oper: qobj Quantum object representing partial trace with selected components remaining. Notes This function is identical to the qutip.qobj.ptrace function that has been deprecated. sqrtm(sparse=False, tol=0, maxiter=100000) Sqrt of a quantum operator. Operator must be square. Parameters sparse : bool Use sparse eigenvalue/vector solver. tol : float Tolerance used by sparse solver (0 = machine precision). maxiter : int Maximum number of iterations used by sparse solver. 
Returns oper: qobj Matrix square root of operator. Raises TypeError Quantum object is not square. Notes The sparse eigensolver is much slower than the dense version. Use sparse only if memory requirements demand it. tidyup(atol=None) Removes small elements from the quantum object. Parameters atol : float Absolute tolerance used by tidyup. Default is set via qutip global settings parameters. Returns oper: qobj Quantum object with small elements removed. tr() Trace of a quantum object. Returns trace: float Returns real if operator is Hermitian, returns complex otherwise. trans() Transposed operator. 120 ## Returns oper : qobj Transpose of input operator. transform(inpt, inverse=False) Basis transform defined by input array. Input array can be a matrix defining the transformation, or a list of kets that defines the new basis. Parameters inpt : array_like A matrix or list of kets defining the transformation. inverse : bool Whether to return inverse transformation. Returns oper : qobj Operator in new basis. Notes ## This function is still in development. unit(norm=None, sparse=False, tol=0, maxiter=100000) Operator or state normalized to unity. Uses norm from Qobj.norm(). Parameters norm : str Requested norm for states / operators. sparse : bool Use sparse eigensolver for trace norm. Does not affect other norms. tol : float Tolerance used by sparse eigensolver. maxiter: int Number of maximum iterations performed by sparse eigensolver. Returns oper : qobj Normalized quantum object. eseries class eseries(q=array([], dtype=object), s=array([], dtype=float64)) Class representation of an exponential-series expansion of time-dependent quantum objects. Attributes ampl rates dims shape ## (ndarray) Array of amplitudes for exponential series. (ndarray) Array of rates for exponential series. (list) Dimensions of exponential series components (list) Shape corresponding to exponential series components Methods value(tlist) spec(wlist) tidyup() ## Evaluate an exponential series at the times listed in tlist Evaluate the spectrum of an exponential series at frequencies in wlist. Returns a tidier version of the exponential series spec(wlist) Evaluate the spectrum of an exponential series at frequencies in wlist. Parameters wlist : array_like Array/list of frequenies. Returns val_list : ndarray Values of exponential series at frequencies in wlist. 121 tidyup(*args) Returns a tidier version of exponential series. value(tlist) Evaluates an exponential series at the times listed in tlist. Parameters tlist : ndarray Times at which to evaluate exponential series. Returns val_list : ndarray Values of exponential at times in tlist. Bloch sphere class Bloch(fig=None, axes=None, view=None, figsize=None, background=False) Class for plotting data on the Bloch sphere. Valid data can be either points, vectors, or qobj objects. Attributes axes (instance {None}) User supplied Matplotlib axes for Bloch sphere animation. fig (instance {None}) User supplied Matplotlib Figure instance for plotting Bloch sphere. font_color (str {black}) Color of font used for Bloch sphere labels. font_size (int {20}) Size of font used for Bloch sphere labels. frame_alpha (float {0.1}) Sets transparency of Bloch sphere frame. frame_color (str {gray}) Color of sphere wireframe. frame_width (int {1}) Width of wireframe. point_color (list {[b,r,g,#CC6600]}) List of colors for Bloch sphere point markers to cycle through. i.e. By default, points 0 and 4 will both be blue (b). point_marker(list {[o,s,d,^]}) List of point marker shapes to cycle through. 
point_size (list {[25,32,35,45]}) List of point marker sizes. Note, not all point markers look the same size when plotted! sphere_alpha(float {0.2}) Transparency of Bloch sphere itself. sphere_color(str {#FFDDDD}) Color of Bloch sphere. figsize (list {[7,7]}) Figure size of Bloch sphere plot. Best to have both numbers the same; otherwise you will have a Bloch sphere that looks like a football. vec(list {[g,#CC6600,b,r]}) List of vector colors to cycle through. tor_color vec(int {5}) Width of displayed vectors. tor_width vec(str {-|>, simple, fancy, }) Vector arrowhead style (from matplotlibs arrow style). tor_style vec(int {20}) Width of vectors arrowhead. tor_mutation view (list {[-60,30]}) Azimuthal and Elevation viewing angles. xlabel (list {[$x$,]}) List of strings corresponding to +x and -x axes labels, respectively. xlpos (list {[1.1,-1.1]}) Positions of +x and -x labels respectively. ylabel (list {[$y$,]}) List of strings corresponding to +y and -y axes labels, respectively. ylpos (list {[1.2,-1.2]}) Positions of +y and -y labels respectively. zlabel (list {[r$left|0right>$,r$left|1right>$]}) List of strings corresponding to +z and -z axes labels, respectively. zlpos (list {[1.2,-1.2]}) Positions of +z and -z labels respectively. Methods Continued on next page 122 ## Table 4.1 continued from previous page clear make_sphere plot_annotations plot_axes plot_axes_labels plot_back plot_front plot_points plot_vectors render save set_label_convention show Add a text or LaTeX annotation to Bloch sphere, parametrized by a qubit state or a vector. Parameters state_or_vector : Qobj/array/list/tuple Position for the annotaion. Qobj of a qubit or a vector of 3 elements. text : str/unicode Annotation text. You can use LaTeX, but remember to use raw string e.g. r$langle x rangle$ or escape backslashes e.g. $\langle x \rangle$. **kwargs : Options as for mplot3d.axes3d.text, including: fontsize, color, horizontalalignment, verticalalignment. Add a list of data points to bloch sphere. Parameters points : array/list Collection of data points. meth : str {s, m, l} Type of points to plot, use m for multicolored, l for points connected with a line. Add a state vector Qobj to Bloch sphere. Parameters state : qobj Input state vector. kind : str {vector,point} Type of object to plot. Add a list of vectors to Bloch sphere. Parameters vectors : array/list Array with vectors of unit length or smaller. clear() Resets Bloch sphere data sets to empty. make_sphere() Plots Bloch sphere and data sets. render(fig=None, axes=None) Render the Bloch sphere and its data sets in on given figure and axes. save(name=None, format=png, dirc=None) Saves Bloch sphere to file of type format in directory dirc. Parameters name : str 123 ## Name of saved image. Must include path and format as well. i.e. /Users/Paul/Desktop/bloch.png This overrides the format and dirc arguments. format : str Format of output image. dirc : str Directory for output images. Defaults to current working directory. Returns File containing plot of Bloch sphere. set_label_convention(convention) Set x, y and z labels according to one of conventions. Parameters convention : string One of the following: - original - xyz - sx sy sz - 01 - polarization jones - polarization jones letters show() Display Bloch sphere and corresponding data sets. 
vector_mutation = None Sets the width of the vectors arrowhead vector_style = None Style of Bloch vectors, default = -|> (or simple) vector_width = None Width of Bloch vectors, default = 5 class Bloch3d(fig=None) Class for plotting data on a 3D Bloch sphere using mayavi. Valid data can be either points, vectors, or qobj objects corresponding to state vectors or density matrices. for a two-state system (or subsystem). Notes The use of mayavi for 3D rendering of the Bloch sphere comes with a few limitations: I) You can not embed a Bloch3d figure into a matplotlib window. II) The use of LaTex is not supported by the mayavi rendering engine. Therefore all labels must be defined using standard text. Of course you can post-process the generated figures later to add LaTeX using other software if needed. 124 Attributes fig (instance {None}) User supplied Matplotlib Figure instance for plotting Bloch sphere. font_color (str {black}) Color of font used for Bloch sphere labels. font_scale (float {0.08}) Scale for font used for Bloch sphere labels. frame (bool {True}) Draw frame for Bloch sphere frame_alpha (float {0.05}) Sets transparency of Bloch sphere frame. frame_color(str {gray}) Color of sphere wireframe. frame_num(int {8}) Number of frame elements to draw. (floats {0.005}) Width of wireframe. point_color(list {[r, g, b, y]}) List of colors for Bloch sphere point markers to cycle through. i.e. By default, points 0 and 4 will both be blue (r). point_mode(string {sphere,cone,cube,cylinder,point}) Point marker shapes. point_size (float {0.075}) Size of points on Bloch sphere. sphere_alpha (float {0.1}) Transparency of Bloch sphere itself. sphere_color (str {#808080}) Color of Bloch sphere. size (list {[500,500]}) Size of Bloch sphere plot in pixels. Best to have both numbers the same otherwise you will have a Bloch sphere that looks like a football. vec(list {[r, g, b, y]}) List of vector colors to cycle through. tor_color vec(int {3}) Width of displayed vectors. tor_width view (list {[45,65]}) Azimuthal and Elevation viewing angles. xlabel (list {[|x>, ]}) List of strings corresponding to +x and -x axes labels, respectively. xlpos (list {[1.07,-1.07]}) Positions of +x and -x labels respectively. ylabel (list {[|y>, ]}) List of strings corresponding to +y and -y axes labels, respectively. ylpos (list {[1.07,-1.07]}) Positions of +y and -y labels respectively. zlabel (list {[|0>, |1>]}) List of strings corresponding to +z and -z axes labels, respectively. zlpos (list {[1.07,-1.07]}) Positions of +z and -z labels respectively. Methods clear make_sphere plot_points plot_vectors save show Add a list of data points to bloch sphere. Parameters points : array/list Collection of data points. meth : str {s,m} Type of points to plot, use m for multicolored. Add a state vector Qobj to Bloch sphere. Parameters state : qobj Input state vector. kind : str {vector,point} Type of object to plot. 125 Add a list of vectors to Bloch sphere. Parameters vectors : array/list Array with vectors of unit length or smaller. clear() Resets the Bloch sphere data sets to empty. make_sphere() Plots Bloch sphere and data sets. plot_points() Plots points on the Bloch sphere. plot_vectors() Plots vectors on the Bloch sphere. save(name=None, format=png, dirc=None) Saves Bloch sphere to file of type format in directory dirc. Parameters name : str Name of saved image. Must include path and format as well. i.e. /Users/Paul/Desktop/bloch.png This overrides the format and dirc arguments. format : str Format of output image. 
Default is png. dirc : str Directory for output images. Defaults to current working directory. Returns File containing plot of Bloch sphere. show() Display the Bloch sphere and corresponding data sets. ## Solver Options and Results class Options(atol=1e-08, rtol=1e-06, method=adams, order=12, nsteps=1000, first_step=0, max_step=0, min_step=0, average_expect=True, average_states=False, tidy=True, num_cpus=0, norm_tol=0.001, norm_steps=5, rhs_reuse=False, rhs_filename=None, ntraj=500, gui=False, rhs_with_state=False, store_final_state=False, Class of options for evolution solvers such as qutip.mesolve and qutip.mcsolve. Options can be specified either as arguments to the constructor: opts = Options(order=10, ...) opts = Options() opts.order = 10 ## Returns options class to be used as options in evolution solvers. 126 Attributes atol (float {1e-8}) Absolute tolerance. rtol (float {1e-6}) Relative tolerance. method order (int {12}) Order of integrator (<=12 adams, <=5 bdf) nsteps (int {2500}) Max. number of internal steps/call. first_step (float {0}) Size of initial step (0 = automatic). min_step (float {0}) Minimum step size (0 = automatic). max_step (float {0}) Maximum step size (0 = automatic) tidy (bool {True,False}) Tidyup Hamiltonian and initial state by removing small terms. num_cpus (int) Number of cpus used by mcsolver (default = # of cpus). norm_tol (float) Tolerance used when finding wavefunction norm in mcsolve. norm_steps (int) Max. number of steps used to find wavefunction norm to within norm_tol in mcsolve. aver(bool {False}) Average states values over trajectories in stochastic solvers. age_states aver(bool {True}) Average expectation values over trajectories for stochastic solvers. age_expect mc_corr_eps(float {1e-10}) Arbitrarily small value for eliminating any divide-by-zero errors in correlation calculations when using mcsolve. ntraj (int {500}) Number of trajectories in stochastic solvers. rhs_reuse (bool {False,True}) Reuse Hamiltonian data. rhs_with_state (bool {False,True}) Whether or not to include the state in the Hamiltonian function callback signature. rhs_filename(str) Name for compiled Cython file. seeds (ndarray) Array containing random number seeds for mcsolver. store_final_state (bool {False, True}) Whether or not to store the final state of the evolution in the result class. store_states (bool {False, True}) Whether or not to store the state vectors or density matrices in the result class, even if expectation values operators are given. If no expectation are provided, then states are stored by default and this option has no effect. class Result Class for storing simulation results from any of the dynamics solvers. Attributes solver (str) Which solver was used [e.g., mesolve, mcsolve, brmesolve, ...] times (list/array) Times at which simulation data was collected. expect (list/array) Expectation values (if requested) for simulation. states (array) State of the simulation (density matrix or ket) evaluated at times. num_expect(int) Number of expectation value operators in simulation. num_collapse (int) Number of collapse operators in simualation. ntraj (int/list) Number of trajectories (for stochastic solvers). A list indicates that averaging of expectation values was done over a subset of total number of trajectories. col_times (list) Times at which state collpase occurred. Only for Monte Carlo solver. col_which (list) Which collapse operator was responsible for each collapse in col_times. Only for Monte Carlo solver. 
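A short usage sketch (the parameter values below are arbitrary illustrations, not recommendations): the options instance is handed to a solver through its options keyword argument.

import numpy as np
from qutip import destroy, basis, mesolve, Options

a = destroy(10)
H = a.dag() * a
psi0 = basis(10, 5)
times = np.linspace(0, 10, 100)

# tighten the tolerance and allow more internal integrator steps
opts = Options(atol=1e-10, nsteps=5000)
result = mesolve(H, psi0, times, [np.sqrt(0.1) * a], [a.dag() * a], options=opts)
print(result.expect[0][-1])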
class StochasticSolverOptions(H=None, state0=None, times=None, c_ops=[], sc_ops=[], e_ops=[], m_ops=None, args=None, ntraj=1, nsubsteps=1, d1=None, d2=None, d2_len=1, dW_factors=None, rhs=None, generate_A_ops=None, generate_noise=None, homogeneous=True, solver=None, method=None, distribution=normal, store_measurement=False, noise=None, normalize=True, options=None, progress_bar=None, map_func=None, map_kwargs=None) Class of options for stochastic solvers such as qutip.stochastic.ssesolve, qutip.stochastic.smesolve, etc. Options can be specified either as arguments to the constructor: 127 ## or by changing the class attributes after creation: sso = StochasticSolverOptions() sso.nsubsteps = 1000 ## The stochastic solvers qutip.stochastic.ssesolve, qutip.stochastic.smesolve, qutip.stochastic.ssepdpsolve and qutip.stochastic.smepdpsolve all take the same keyword arguments as the constructor of these class, and internally they use these arguments to construct an instance of this class, so it is rarely needed to explicitly create an instance of this class. 128 Attributes H state0 times c_ops sc_ops e_ops m_ops args ntraj nsubsteps d1 d2 d2_len dW_factors rhs generate_A_ops generate_noise homogeneous solver ## (qutip.Qobj) System Hamiltonian. (qutip.Qobj) Initial state vector (ket) or density matrix. (list / array) List of times for . Must be uniformly spaced. (list of qutip.Qobj) List of deterministic collapse operators. (list of qutip.Qobj) List of stochastic collapse operators. Each stochastic collapse operator will give a deterministic and stochastic contribution to the equation of motion according to how the d1 and d2 functions are defined. (list of qutip.Qobj) Single operator or list of operators for which to evaluate expectation values. (list of qutip.Qobj) List of operators representing the measurement operators. The expected format is a nested list with one measurement operator for each stochastic increament, for each stochastic collapse operator. (dict / list) List of dictionary of additional problem-specific parameters. (int) Number of trajectors. (int) Number of sub steps between each time-spep given in times. (function) Function for calculating the operator-valued coefficient to the deterministic increment dt. (function) Function for calculating the operator-valued coefficient to the stochastic increment(s) dW_n, where n is in [0, d2_len[. (int (default 1)) The number of stochastic increments in the process. (array) Array of length d2_len, containing scaling factors for each measurement operator in m_ops. (function) Function for calculating the deterministic and stochastic contributions to the right-hand side of the stochastic differential equation. This only needs to be specified when implementing a custom SDE solver. (function) Function that generates a list of pre-computed operators or super- operators. These precomputed operators are used in some d1 and d2 functions. (function) Function for generate an array of pre-computed noise signal. ## (bool (True)) Wheter or not the stochastic process is homogenous. Inhomogenous processes are only supported for poisson distributions. (string) Name of the solver method to use for solving the stochastic equations. Valid values are: euler-maruyama, fast-euler-maruyama, milstein, fast-milstein, platen. method (string (homodyne, heterodyne, photocurrent)) The name of the type of measurement process that give rise to the stochastic equation to solve. 
Specifying a method with this keyword argument is a short-hand notation for using pre-defined d1 and d2 functions for the corresponding stochastic processes. distribution (string (normal, poission)) The name of the distribution used for the stochastic increments. store_measurements (bool (default False)) Whether or not to store the measurement results in the qutip.solver.SolverResult instance returned by the solver. noise (array) Vector specifying the noise. normalize (bool (default True)) Whether or not to normalize the wave function during the evolution. options (qutip.solver.Options) Generic solver options. map_func: A map function or managing the calls to single-trajactory solvers. function map_kwargs: Optional keyword arguments to the map_func function function. dictionary progress_bar (qutip.ui.BaseProgressBar) Optional progress bar class instance. Distribution functions class Distribution(data=None, xvecs=[], xlabels=[]) A class for representation spatial distribution functions. 129 The Distribution class can be used to prepresent spatial distribution functions of arbitray dimension (although only 1D and 2D distributions are used so far). It is indented as a base class for specific distribution function, and provide implementation of basic functions that are shared among all Distribution functions, such as visualization, calculating marginal distributions, etc. Parameters data : array_like Data for the distribution. The dimensions must match the lengths of the coordinate arrays in xvecs. xvecs : list List of arrays that spans the space for each coordinate. xlabels : list List of labels for each coordinate. Methods marginal project visualize visualize_1d visualize_2d_colormap visualize_2d_surface marginal(dim=0) Calculate the marginal distribution function along the dimension dim. Return a new Distribution instance describing this reduced- dimensionality distribution. Parameters dim : int The dimension (coordinate index) along which to obtain the marginal distribution. Returns d : Distributions A new instances of Distribution that describes the marginal distribution. project(dim=0) Calculate the projection (max value) distribution function along the dimension dim. Return a new Distribution instance describing this reduced-dimensionality distribution. Parameters dim : int The dimension (coordinate index) along which to obtain the projected distribution. Returns d : Distributions A new instances of Distribution that describes the projection. visualize(fig=None, ax=None, figsize=(8, 6), colorbar=True, cmap=None, style=colormap, show_xlabel=True, show_ylabel=True) Visualize the data of the distribution in 1D or 2D, depending on the dimensionality of the underlaying distribution. Parameters: fig [matplotlib Figure instance] If given, use this figure instance for the visualization, ax [matplotlib Axes instance] If given, render the visualization using this axis instance. figsize [tuple] Size of the new Figure instance, if one needs to be created. colorbar: Bool Whether or not the colorbar (in 2D visualization) should be used. cmap: matplotlib colormap instance If given, use this colormap for 2D visualizations. style [string] Type of visualization: colormap (default) or surface. Returns fig, ax : tuple A tuple of matplotlib figure and axes instances. 
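A minimal sketch of the base Distribution workflow described above. The Gaussian sample data, the grid and the axis labels are made-up illustrations; in practice the data would usually come from one of the subclasses documented next (for example a Wigner function).

import numpy as np
from qutip.distributions import Distribution

xvec = np.linspace(-5, 5, 200)
yvec = np.linspace(-5, 5, 200)
X, Y = np.meshgrid(xvec, yvec)
data = np.exp(-(X ** 2 + Y ** 2))        # stand-in for e.g. a Wigner function

dist = Distribution(data=data, xvecs=[xvec, yvec], xlabels=['x', 'y'])

marg = dist.marginal(dim=0)              # reduced 1D Distribution along the first coordinate
fig, ax = dist.visualize(style='colormap')   # 2D colormap plot of the full data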
class WignerDistribution(rho=None, extent=[[-5, 5], [-5, 5]], steps=250) 130 Methods marginal project update visualize visualize_1d visualize_2d_colormap visualize_2d_surface class QDistribution(rho=None, extent=[[-5, 5], [-5, 5]], steps=250) Methods marginal project update visualize visualize_1d visualize_2d_colormap visualize_2d_surface class TwoModeQuadratureCorrelation(state=None, theta1=0.0, theta2=0.0, extent=[[-5, 5], [-5, 5]], steps=250) Methods marginal project update update_psi update_rho visualize visualize_1d visualize_2d_colormap visualize_2d_surface update(state) calculate probability distribution for quadrature measurement outcomes given a two-mode wavefunction or density matrix update_psi(psi) calculate probability distribution for quadrature measurement outcomes given a two-mode wavefunction update_rho(rho) calculate probability distribution for quadrature measurement outcomes given a two-mode density matrix class HarmonicOscillatorWaveFunction(psi=None, omega=1.0, extent=[-5, 5], steps=250) Methods 131 marginal project update visualize visualize_1d visualize_2d_colormap visualize_2d_surface update(psi) Calculate the wavefunction for the given state of an harmonic oscillator class HarmonicOscillatorProbabilityFunction(rho=None, omega=1.0, extent=[-5, 5], steps=250) Methods marginal project update visualize visualize_1d visualize_2d_colormap visualize_2d_surface update(rho) Calculate the probability function for the given state of an harmonic oscillator (as density matrix) ## Quantum information processing class Gate(name, targets=None, controls=None, arg_value=None, arg_label=None) Representation of a quantum gate, with its required parametrs, and target and control qubits. class QubitCircuit(N, reverse_states=True) Representation of a quantum program/algorithm, maintaining a sequence of gates. Attributes png svg Methods latex_code propagators qasm remove_gate resolve_gates reverse_circuit 132 ## add_1q_gate(name, start=0, end=None, qubits=None, arg_value=None, arg_label=None) Adds a single qubit gate with specified parameters on a variable number of qubits in the circuit. By default, it applies the given gate to all the qubits in the register. Parameters name: String Gate name. start: Integer Starting location of qubits. end: Integer Last qubit for the gate. qubits: List Specific qubits for applying gates. arg_value: Float Argument value(phi). arg_label: String Label for gate representation. Adds a block of a qubit circuit to the main circuit. Globalphase gates are not added. Parameters qc: QubitCircuit The circuit block to be added to the main circuit. start: Integer The qubit on which the first gate is applied. Adds a gate with specified parameters to the circuit. Parameters name: String Gate name. targets: List Gate targets. controls: List Gate controls. arg_value: Float Argument value(phi). arg_label: String Label for gate representation. Method to resolve two qubit gates with non-adjacent control/s or target/s in terms of gates with adjacent interactions. Returns qc: QubitCircuit Returns QubitCircuit of resolved gates for the qubit circuit in the desired basis. propagators() Propagator matrix calculator for N qubits returning the individual steps as unitary matrices operating from left to right. Returns U_list: list Returns list of unitary matrices for the qubit circuit. remove_gate(index=None, name=None, remove=first) Removes a gate with from a specific index or the first, last or all instances of a particular gate. 
Parameters index: Integer Location of gate to be removed. name: String Gate name to be removed. remove: String 133 ## If first or all gate are to be removed. resolve_gates(basis=[CNOT, RX, RY, RZ]) Unitary matrix calculator for N qubits returning the individual steps as unitary matrices operating from left to right in the specified basis. Parameters basis: list. Basis of the resolved circuit. Returns qc: QubitCircuit Returns QubitCircuit of resolved gates for the qubit circuit in the desired basis. reverse_circuit() Reverses an entire circuit of unitary gates. Returns qc: QubitCircuit Returns QubitCircuit of resolved gates for the qubit circuit in the desired basis. class CircuitProcessor(N, correct_global_phase) Base class for representation of the physical implementation of a quantum program/algorithm on a specified qubit system. Methods eliminate_auxillary_modes get_ops_and_u get_ops_labels optimize_circuit plot_pulses pulse_matrix run run_state Function to take a quantum circuit/algorithm and convert it into the optimal form/basis for the desired physical system. Parameters qc: QubitCircuit Takes the quantum circuit to be implemented. setup: String Takes the nature of the spin chain; linear or circular. Returns qc: QubitCircuit The resolved circuit representation. get_ops_and_u() Returns the Hamiltonian operators and corresponding values by stacking them together. get_ops_labels() Returns the Hamiltonian operators and corresponding labels by stacking them together. Translates an abstract quantum circuit to its corresponding Hamiltonian for a specific model. Parameters qc: QubitCircuit Takes the quantum circuit to be implemented. optimize_circuit(qc) Function to take a quantum circuit/algorithm and convert it into the optimal form/basis for the desired physical system. Parameters qc: QubitCircuit Takes the quantum circuit to be implemented. 134 ## Returns qc: QubitCircuit The optimal circuit representation. plot_pulses() Maps the physical interaction between the circuit components for the desired physical system. Returns fig, ax: Figure Maps the physical interaction between the circuit components. pulse_matrix() Generates the pulse matrix for the desired physical system. Returns t, u, labels: Returns the total time and label for every operation. run(qc=None) Generates the propagator matrix by running the Hamiltonian for the appropriate time duration for the desired physical system. Parameters qc: QubitCircuit Takes the quantum circuit to be implemented. Returns U_list: list The propagator matrix obtained from the physical implementation. run_state(qc=None, states=None) Generates the propagator matrix by running the Hamiltonian for the appropriate time duration for the desired physical system with the given initial state of the qubit register. Parameters qc: QubitCircuit Takes the quantum circuit to be implemented. states: Qobj Initial state of the qubits in the register. Returns U_list: list The propagator matrix obtained from the physical implementation. class SpinChain(N, correct_global_phase=True, sx=None, sz=None, sxsy=None) Representation of the physical implementation of a quantum program/algorithm on a spin chain qubit system. Methods eliminate_auxillary_modes get_ops_and_u get_ops_labels optimize_circuit plot_pulses pulse_matrix run run_state Method to resolve 2 qubit gates with non-adjacent control/s or target/s in terms of gates with adjacent interactions for linear/circular spin chain system. 
Parameters qc: QubitCircuit The circular spin chain circuit to be resolved setup: Boolean Linear of Circular spin chain setup Returns qc: QubitCircuit 135 Returns QubitCircuit of resolved gates for the qubit circuit in the desired basis. class LinearSpinChain(N, correct_global_phase=True, sx=None, sz=None, sxsy=None) Representation of the physical implementation of a quantum program/algorithm on a spin chain qubit system arranged in a linear formation. It is a sub-class of SpinChain. Methods eliminate_auxillary_modes get_ops_and_u get_ops_labels optimize_circuit plot_pulses pulse_matrix run run_state class CircularSpinChain(N, correct_global_phase=True, sx=None, sz=None, sxsy=None) Representation of the physical implementation of a quantum program/algorithm on a spin chain qubit system arranged in a circular formation. It is a sub-class of SpinChain. Methods eliminate_auxillary_modes get_ops_and_u get_ops_labels optimize_circuit plot_pulses pulse_matrix run run_state class DispersivecQED(N, correct_global_phase=True, Nres=None, deltamax=None, epsmax=None, w0=None, wq=None, eps=None, delta=None, g=None) Representation of the physical implementation of a quantum program/algorithm on a dispersive cavity-QED system. Methods dispersive_gate_correction eliminate_auxillary_modes get_ops_and_u get_ops_labels optimize_circuit plot_pulses pulse_matrix run Continued on next page 136 ## Table 4.15 continued from previous page run_state dispersive_gate_correction(qc1, rwa=True) Method to resolve ISWAP and SQRTISWAP gates in a cQED system by adding single qubit gates to get the correct output matrix. Parameters qc: Qobj The circular spin chain circuit to be resolved rwa: Boolean Specify if RWA is used or not. Returns qc: QubitCircuit Returns QubitCircuit of resolved gates for the qubit circuit in the desired basis. Optimal control class GRAPEResult(u=None, H_t=None, U_f=None) Class for representing the result of a GRAPE simulation. Attributes u (array) GRAPE control pulse matrix. H_t (time-dependent Hamiltonian) The time-dependent Hamiltonian that realize the GRAPE pulse sequence. U_f (Qobj) The final unitary transformation that is realized by the evolution of the system with the GRAPE generated pulse sequences. class Dynamics(optimconfig) This is a base class only. See subclass descriptions and choose an appropriate one for the application. Note that initialize_controls must be called before any of the methods can be used. 137 138 Attributes log_level (integer) level of messaging output from the logger. Options are attributes of qutip.logging, in decreasing levels of messaging, are: DEBUG_INTENSE, DEBUG_VERBOSE, DEBUG, INFO, WARN, ERROR, CRITICAL Anything WARN or above is effectively quiet execution, assuming everything runs as expected. The default NOTSET implies that the level will be taken from the QuTiP settings file, which by default is WARN Note value should be set using set_log_level stats (Stats) Attributes of which give performance stats for the optimisation set to None to reduce overhead of calculating stats. Note it is (usually) shared with the Optimizer object tslot_computer (TimeslotComputer (subclass instance)) Used to manage when the timeslot dynamics generators, propagators, gradients etc are updated prop_computer (PropagatorComputer (subclass instance)) Used to compute the propagators and their fid_computer (FidelityComputer (subclass instance)) Used to computer the fidelity error and the num_tslots (integer) Number of timeslots, aka timeslices num_ctrls (integer) Number of controls. 
Note this is set when get_num_ctrls is called based on the length of ctrl_dyn_gen evo_time (float) Total time for the evolution tau (array[num_tslots] of float) Duration of each timeslot Note that if this is set before initialize_controls is called then num_tslots and evo_time are calculated from tau, otherwise tau is generated from num_tslots and evo_time, that is equal size time slices time (array[num_tslots+1] of float) Cumulative time for the evolution, that is the time at the start of each time slice drift_dyn_gen (Qobj) Drift or system dynamics generator Matrix defining the underlying dynamics of the system ctrl_dyn_gen (List of Qobj) Control dynamics generator: ctrl_dyn_gen () List of matrices defining the control dynamics initial (Qobj) Starting state / gate The matrix giving the initial state / gate, i.e. at time 0 Typically the identity target (Qobj) Target state / gate: The matrix giving the desired state / gate for the evolution ctrl_amps (array[num_tslots, num_ctrls] of float) Control amplitudes The amplitude (scale factor) for each control in each timeslot ini(float) Scale factor applied to be applied the control amplitudes when they are tial_ctrl_scaling initialised This is used by the PulseGens rather than in any fucntions in this class self.initial_ctrl_offset Linear offset applied to be applied the control amplitudes when they are initialised = 0.0 This is used by the PulseGens rather than in any fucntions in this class dyn_gen (List of Qobj) Dynamics generators the combined drift and control dynamics generators for each timeslot prop (list of Qobj) Propagators - used to calculate time evolution from one timeslot to the next Array of matrices that give the gradient with respect to the control amplitudes in a timeslot Note this attribute is only created when the selected PropagatorComputer is evo_init2t (List of Qobj) Forward evolution (or propagation) the time evolution operator from the initial state / gate to the specified timeslot as generated by the dyn_gen evo_t2end (List of Qobj) Onward evolution (or propagation) the time evolution operator from the specified timeslot to end of the evolution time as generated by the dyn_gen evo_t2targ (List of Qobj) Backward List of Qobj propagation the overlap of the onward propagation with the inverse of the target. Note this is only used (so far) by the unitary dynamics fidelity evo_current (Boolean) Used to flag that the dynamics used to calculate the evolution operators is current. 
It is set to False when the amplitudes change decomp_curr (List of boolean) Indicates whether the diagonalisation for the timeslot is fresh, it is set to false when the dyn_gen for the timeslot is changed Only used when the PropagatorComputer uses diagonalisation dyn_gen_eigenvectors (List of array[drift_dyn_gen.shape]) Eigenvectors of the dynamics generators Used for 139 calculating the propagators and their gradients Only used when the PropagatorComputer uses diagonalisation prop_eigen (List of array[drift_dyn_gen.shape]) Propagator in diagonalised basis of the combined dynamics generator Used for calculating the propagators and their gradients Only used Methods check_ctrls_initialized clear combine_dyn_gen compute_evolution ensure_decomp_curr flag_system_changed get_amp_times get_ctrl_dyn_gen get_drift_dim get_dyn_gen get_num_ctrls get_owd_evo_target init_time_slots initialize_controls reset save_amps set_log_level spectral_decomp update_ctrl_amps combine_dyn_gen(k) Computes the dynamics generator for a given timeslot The is the combined Hamiltion for unitary systems compute_evolution() Recalculate the time evolution operators Dynamics generators (e.g. Hamiltonian) and prop (propagators) are calculated as necessary Actual work is completed by the recompute_evolution method of the timeslot computer ensure_decomp_curr(k) Checks to see if the diagonalisation has been completed since the last update of the dynamics generators (after the amplitude update) If not then the diagonlisation is completed flag_system_changed() Flag eveolution, fidelity and gradients as needing recalculation get_ctrl_dyn_gen(j) Get the dynamics generator for the control Not implemented in the base class. Choose a subclass get_drift_dim() Returns the size of the matrix that defines the drift dynamics that is assuming the drift is NxN, then this returns N get_dyn_gen(k) Get the combined dynamics generator for the timeslot Not implemented in the base class. Choose a subclass get_num_ctrls() calculate the of controls from the length of the control list sets the num_ctrls property, which can be used alternatively subsequently get_owd_evo_target() Get the inverse of the target. 
Used for calculating the backward evolution init_time_slots() Generate the timeslot duration array tau based on the evo_time and num_tslots attributes, unless the tau attribute is already set in which case this step in ignored Generate the cumulative time array time based on the tau values initialize_controls(amps, init_tslots=True) Set the initial control amplitudes and time slices Note this must be called after the configuration is complete before any dynamics can be calculated 140 ## save_amps(file_name=None, times=None, amps=None, verbose=False) Save a file with the current control amplitudes in each timeslot The first column in the file will be the start time of the slot Parameters file_name : string Name of the file If None given the def_amps_fname attribuite will be used times : List type (or string) List / array of the start times for each slot If None given this will be retrieved through get_amp_times() If exclude then times will not be saved in the file, just the amplitudes amps : Array[num_tslots, num_ctrls] Amplitudes to be saved If None given the ctrl_amps attribute will be used verbose : Boolean If True then an info message will be logged set_log_level(lvl) Set the log_level attribute and set the level of the logger that is call logger.setLevel(lvl) spectral_decomp(k) Calculate the diagonalization of the dynamics generator generating lists of eigenvectors, propagators in the diagonalised basis, and the factormatrix used in calculating the propagator gradient Not implemented in this base class, because the method is specific to the matrix type update_ctrl_amps(new_amps) Determine if any amplitudes have changed. If so, then mark the timeslots as needing recalculation The actual work is completed by the compare_amps method of the timeslot computer class DynamicsUnitary(optimconfig) This is the subclass to use for systems with dynamics described by unitary matrices. E.g. closed systems with Hermitian Hamiltonians Note a matrix diagonalisation is used to compute the exponent The eigen decomposition is also used to calculate the propagator gradient. The method is taken from DYNAMO (see Attributes drift_ham(Qobj) This is the drift Hamiltonian for unitary dynamics It is mapped to drift_dyn_gen during initialize_controls ctrl_ham(List of Qobj) These are the control Hamiltonians for unitary dynamics It is mapped to ctrl_dyn_gen during initialize_controls H (List of Qobj) The combined drift and control Hamiltonians for each timeslot These are the dynamics generators for unitary dynamics. It is mapped to dyn_gen during initialize_controls Methods check_ctrls_initialized clear combine_dyn_gen compute_evolution ensure_decomp_curr flag_system_changed get_amp_times get_ctrl_dyn_gen get_drift_dim get_dyn_gen get_num_ctrls get_owd_evo_target init_time_slots initialize_controls Continued on next page 141 ## Table 4.17 continued from previous page reset save_amps set_log_level spectral_decomp update_ctrl_amps get_ctrl_dyn_gen(j) Get the dynamics generator for the control including the -i factor get_dyn_gen(k) Get the combined dynamics generator for the timeslot including the -i factor spectral_decomp(k) Calculates the diagonalization of the dynamics generator generating lists of eigenvectors, propagators in the diagonalised basis, and the factormatrix used in calculating the propagator gradient class DynamicsSymplectic(optimconfig) Symplectic systems This is the subclass to use for systems where the dynamics is described by symplectic matrices, e.g. 
coupled oscillators, quantum optics Attributes ## omega (array[drift_dyn_gen.shape]) matrix used in the calculation of propagators (time evolution) with symplectic systems. Methods check_ctrls_initialized clear combine_dyn_gen compute_evolution ensure_decomp_curr flag_system_changed get_amp_times get_ctrl_dyn_gen get_drift_dim get_dyn_gen get_num_ctrls get_omega get_owd_evo_target init_time_slots initialize_controls reset save_amps set_log_level spectral_decomp update_ctrl_amps get_ctrl_dyn_gen(j) Get the dynamics generator for the control multiplied by omega get_dyn_gen(k) Get the combined dynamics generator for the timeslot multiplied by omega class PulseGen(dyn=None) Pulse generator Base class for all Pulse generators The object can optionally be instantiated with a Dynamics object, in which case the timeslots and amplitude scaling and offset are copied from that. Otherwise the 142 class can be used independently by setting: tau (array of timeslot durations) or num_tslots and pulse_time for equally spaced timeslots Attributes num_tslots (integer) Number of timeslots, aka timeslices (copied from Dynamics if given) pulse_time (float) total duration of the pulse (copied from Dynamics.evo_time if given) scal(float) linear scaling applied to the pulse (copied from Dynamics.initial_ctrl_scaling if given) ing offset (float) linear offset applied to the pulse (copied from Dynamics.initial_ctrl_offset if given) tau (array[num_tslots] of float) Duration of each timeslot (copied from Dynamics if given) lbound (float) Lower boundary for the pulse amplitudes Note that the scaling and offset attributes can be used to fully bound the pulse for all generators except some of the random ones This bound (if set) may result in additional shifting / scaling Default is -Inf ubound (float) Upper boundary for the pulse amplitudes Note that the scaling and offset attributes can be used to fully bound the pulse for all generators except some of the random ones This bound (if set) may result in additional shifting / scaling Default is Inf peri(boolean) True if the pulse generator produces periodic pulses odic ran(boolean) True if the pulse generator produces random pulses dom Methods gen_pulse init_pulse reset gen_pulse() returns the pulse as an array of vales for each timeslot Must be implemented by subclass init_pulse() Initialise the pulse parameters reset() reset attributes to default values class PulseGenRandom(dyn=None) Generates random pulses as simply random values for each timeslot Methods gen_pulse init_pulse reset gen_pulse() Generate a pulse of random values between 1 and -1 Values are scaled using the scaling property and shifted using the offset property Returns the pulse as an array of vales for each timeslot class PulseGenZero(dyn=None) Generates a flat pulse Methods 143 gen_pulse init_pulse reset gen_pulse() Generate a pulse with the same value in every timeslot. The value will be zero, unless the offset is not zero, in which case it will be the offset class PulseGenLinear(dyn=None) Generates linear pulses Attributes start_val end_val (float) Gradient of the line. Note this is calculated from the start_val and end_val if these are given (float) Start point of the line. That is the starting amplitude (float) End point of the line. 
That is the amplitude at the start of the last timeslot Methods gen_pulse init_pulse reset Generate a linear pulse using either the gradient and start value or using the end point to calulate the gradient Note that the scaling and offset parameters are still applied, so unless these values are the default 1.0 and 0.0, then the actual gradient etc will be different Returns the pulse as an array of vales for each timeslot Calulate the gradient if pulse is defined by start and end point values reset() reset attributes to default values class PulseGenLinear(dyn=None) Generates linear pulses Attributes start_val end_val (float) Gradient of the line. Note this is calculated from the start_val and end_val if these are given (float) Start point of the line. That is the starting amplitude (float) End point of the line. That is the amplitude at the start of the last timeslot Methods gen_pulse init_pulse reset Generate a linear pulse using either the gradient and start value or using the end point to calulate the gradient Note that the scaling and offset parameters are still applied, so unless these values are the 144 default 1.0 and 0.0, then the actual gradient etc will be different Returns the pulse as an array of vales for each timeslot Calulate the gradient if pulse is defined by start and end point values reset() reset attributes to default values class PulseGenPeriodic(dyn=None) Intermediate class for all periodic pulse generators All of the periodic pulses range from -1 to 1 All have a start phase that can be set between 0 and 2pi Attributes num_waves (float) Number of complete waves (cycles) that occur in the pulse. wavelen and freq calculated from this if it is given wavelen (float) Wavelength of the pulse (assuming the speed is 1) freq is calculated from this if it is given freq (float) Frequency of the pulse start_phase (float) Phase of the pulse signal when t=0 Methods gen_pulse init_pulse reset init_pulse(num_waves=None, wavelen=None, freq=None, start_phase=None) Calculate the wavelength, frequency, number of waves etc from the each other and the other parameters If num_waves is given then the other parameters are worked from this Otherwise if the wavelength is given then it is the driver Otherwise the frequency is used to calculate wavelength and num_waves reset() reset attributes to default values class PulseGenSine(dyn=None) Generates sine wave pulses Methods gen_pulse init_pulse reset gen_pulse(num_waves=None, wavelen=None, freq=None, start_phase=None) Generate a sine wave pulse If no params are provided then the class object attributes are used. If they are provided, then these will reinitialise the object attribs. returns the pulse as an array of vales for each timeslot class PulseGenSquare(dyn=None) Generates square wave pulses Methods gen_pulse Continued on next page 145 ## Table 4.26 continued from previous page init_pulse reset gen_pulse(num_waves=None, wavelen=None, freq=None, start_phase=None) Generate a square wave pulse If no parameters are pavided then the class object attributes are used. If they are provided, then these will reinitialise the object attribs class PulseGenSaw(dyn=None) Generates saw tooth wave pulses Methods gen_pulse init_pulse reset gen_pulse(num_waves=None, wavelen=None, freq=None, start_phase=None) Generate a saw tooth wave pulse If no parameters are pavided then the class object attributes are used. 
If they are provided, then these will reinitialise the object attribs class PulseGenTriangle(dyn=None) Generates triangular wave pulses Methods gen_pulse init_pulse reset gen_pulse(num_waves=None, wavelen=None, freq=None, start_phase=None) Generate a sine wave pulse If no parameters are pavided then the class object attributes are used. If they are provided, then these will reinitialise the object attribs 4.2 Functions Manipulation and Creation of States and Operators Quantum States basis(N, n=0, offset=0) Generates the vector representation of a Fock state. Parameters N : int Number of Fock states in Hilbert space. n : int Integer corresponding to desired number state, defaults to 0 if omitted. offset : int (default 0) The lowest number state that is included in the finite number state representation of the state. Returns state : qobj Qobj representing the requested number state |n>. 146 Notes ## A subtle incompatibility with the quantum optics toolbox: In QuTiP: basis(N, 0) = ground state ## but in the qotoolbox: basis(N, 1) = ground state Examples >>> basis(5,2) Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 0.+0.j] [ 0.+0.j] [ 1.+0.j] [ 0.+0.j] [ 0.+0.j]] ## coherent(N, alpha, offset=0, method=operator) Generates a coherent state with eigenvalue alpha. Constructed using displacement operator on vacuum state. Parameters N : int Number of Fock states in Hilbert space. alpha : float/complex Eigenvalue of coherent state. offset : int (default 0) The lowest number state that is included in the finite number state representation of the state. Using a non-zero offset will make the default method analytic. method : string {operator, analytic} Method for generating coherent state. Returns state : qobj Qobj quantum object for coherent state Notes Select method operator (default) or analytic. With the operator method, the coherent state is generated by displacing the vacuum state using the displacement operator defined in the truncated Hilbert space of size N. This method guarantees that the resulting state is normalized. With analytic method the coherent state is generated using the analytical formula for the coherent state coefficients in the Fock basis. This method does not guarantee that the state is normalized if truncated to a small number of Fock states, but would in that case give more accurate coefficients. Examples >>> coherent(5,0.25j) Quantum object: dims = [[5], [1]], shape = [5, 1], type = ket Qobj data = [[ 9.69233235e-01+0.j ] [ 0.00000000e+00+0.24230831j] [ -4.28344935e-02+0.j ] [ 0.00000000e+00-0.00618204j] [ 7.80904967e-04+0.j ]] 147 ## coherent_dm(N, alpha, offset=0, method=operator) Density matrix representation of a coherent state. Constructed via outer product of qutip.states.coherent Parameters N : int Number of Fock states in Hilbert space. alpha : float/complex Eigenvalue for coherent state. offset : int (default 0) The lowest number state that is included in the finite number state representation of the state. method : string {operator, analytic} Method for generating coherent density matrix. Returns dm : qobj Density matrix representation of coherent state. Notes Select method operator (default) or analytic. With the operator method, the coherent density matrix is generated by displacing the vacuum state using the displacement operator defined in the truncated Hilbert space of size N. This method guarantees that the resulting density matrix is normalized. 
With analytic method the coherent density matrix is generated using the analytical formula for the coherent state coefficients in the Fock basis. This method does not guarantee that the state is normalized if truncated to a small number of Fock states, but would in that case give more accurate coefficients. Examples >>> coherent_dm(3,0.25j) Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 0.93941695+0.j 0.00000000-0.23480733j -0.04216943+0.j ] [ 0.00000000+0.23480733j 0.05869011+0.j 0.00000000-0.01054025j] [-0.04216943+0.j 0.00000000+0.01054025j 0.00189294+0.j ]] ## fock(N, n=0, offset=0) Bosonic Fock (number) state. Same as qutip.states.basis. Parameters N : int Number of states in the Hilbert space. n : int int for desired number state, defaults to 0 if omitted. Returns Requested number state |. Examples >>> fock(4,3) Quantum object: dims = [[4], [1]], shape = [4, 1], type = ket Qobj data = [[ 0.+0.j] [ 0.+0.j] [ 0.+0.j] [ 1.+0.j]] 148 ## fock_dm(N, n=0, offset=0) Density matrix representation of a Fock state Constructed via outer product of qutip.states.fock. Parameters N : int Number of Fock states in Hilbert space. n : int int for desired number state, defaults to 0 if omitted. Returns dm : qobj Density matrix representation of Fock state. Examples >>> fock_dm(3,1) Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 0.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 1.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j]] ket2dm(Q) Takes input ket or bra vector and returns density matrix formed by outer product. Parameters Q : qobj Ket or bra type quantum object. Returns dm : qobj Density matrix formed by outer product of Q. Examples >>> x=basis(3,2) >>> ket2dm(x) Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 0.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 1.+0.j]] qutrit_basis() Basis states for a three level system (qutrit) Returns qstates : array Array of qutrit basis vectors thermal_dm(N, n, method=operator) Density matrix for a thermal state of n particles Parameters N : int Number of basis states in Hilbert space. n : float Expectation value for number of particles in thermal state. method : string {operator, analytic} string that sets the method used to generate the thermal state probabilities Returns dm : qobj Thermal state density matrix. 149 Notes The operator method (default) generates the thermal state using the truncated number operator num(N). This is the method that should be used in computations. The analytic method uses the analytic coefficients derived in an infinite Hilbert space. The analytic form is not necessarily normalized, if truncated too aggressively. Examples >>> thermal_dm(5, 1) Quantum object: dims = [[5], [5]], shape = [5, 5], Qobj data = [[ 0.51612903 0. 0. 0. [ 0. 0.25806452 0. 0. [ 0. 0. 0.12903226 0. [ 0. 0. 0. 0.06451613 [ 0. 0. 0. 0. >>> thermal_dm(5, 1, Quantum object: dims Qobj data = [[ 0.5 0. [ 0. 0.25 [ 0. 0. [ 0. 0. [ 0. 0. ## type = oper, isHerm = True 0. ] 0. ] 0. ] 0. ] 0.03225806]] 'analytic') = [[5], [5]], shape = [5, 5], type = oper, isHerm = True 0. 0. 0.125 0. 0. 0. 0. 0. 0.0625 0. 0. ] 0. ] 0. ] 0. ] 0.03125]] phase_basis(N, m, phi0=0) Basis vector for the mth phase of the Pegg-Barnett phase operator. Parameters N : int Number of basis vectors in Hilbert space. m : int Integer corresponding to the mth discrete phase phi_m=phi0+2*pi*m/N phi0 : float (default=0) Reference phase angle. 
Returns state : qobj Ket vector for mth Pegg-Barnett phase operator basis state. Notes The Pegg-Barnett basis states form a complete set over the truncated Hilbert space. state_number_enumerate(dims, excitations=None, state=None, idx=0) An iterator that enumerate all the state number arrays (quantum numbers on the form [n1, n2, n3, ...]) for a system with dimensions given by dims. Example: >>> for state in state_number_enumerate([2,2]): >>> print(state) [ 0. 0.] [ 0. 1.] [ 1. 0.] [ 1. 1.] ## Parameters dims : list or array The quantum state dimensions array, as it would appear in a Qobj. state : list 150 ## Current state in the iteration. Used internally. excitations : integer (None) Restrict state space to states with excitation numbers below or equal to this value. idx : integer Current index in the iteration. Used internally. Returns state_number : list Successive state number arrays that can be used in loops and other iterations, using standard state enumeration by definition. state_number_index(dims, state) Return the index of a quantum state corresponding to state, given a system with dimensions given by dims. Example: >>> state_number_index([2, 2, 2], [1, 1, 0]) 6.0 ## Parameters dims : list or array The quantum state dimensions array, as it would appear in a Qobj. state : list State number array. Returns idx : list The index of the state given by state in standard enumeration ordering. state_index_number(dims, index) Return a quantum number representation given a state index, for a system of composite structure defined by dims. Example: >>> state_index_number([2, 2, 2], 6) [1, 1, 0] ## Parameters dims : list or array The quantum state dimensions array, as it would appear in a Qobj. index : integer The index of the state in standard enumeration ordering. Returns state : list The state number array corresponding to index index in standard enumeration ordering. state_number_qobj(dims, state) Return a Qobj representation of a quantum state specified by the state array state. Example: >>> state_number_qobj([2, 2, 2], [1, 0, 1]) Quantum object: dims = [[2, 2, 2], [1, 1, 1]], shape = [8, 1], type = ket Qobj data = [[ 0.] [ 0.] [ 0.] [ 0.] [ 0.] [ 1.] [ 0.] [ 0.]] ## Parameters dims : list or array The quantum state dimensions array, as it would appear in a Qobj. 151 state : list State number array. Returns state : qutip.Qobj.qobj The state as a qutip.Qobj.qobj instance. enr_state_dictionaries(dims, excitations) Return the number of states, and lookup-dictionaries for translating a state tuple to a state index, and vice versa, for a system with a given number of components and maximum number of excitations. Parameters dims: list A list with the number of states in each sub-system. excitations : integer The maximum numbers of dimension Returns nstates, state2idx, idx2state: integer, dict, dict The number of states nstates, a dictionary for looking up state indices from a state tuple, and a dictionary for looking up state state tuples from state indices. enr_thermal_dm(dims, excitations, n) Generate the density operator for a thermal state in the excitation-number- restricted state space defined by the dims and exciations arguments. See the documentation for enr_fock for a more detailed description of these arguments. The temperature of each mode in dims is specified by the average number of excitatons n. Parameters dims : list A list of the dimensions of each subsystem of a composite quantum system. 
excitations : integer The maximum number of excitations that are to be included in the state space. n : integer The average number of exciations in the thermal state. n can be a float (which then applies to each mode), or a list/array of the same length as dims, in which each element corresponds specifies the temperature of the corresponding mode. Returns dm : Qobj Thermal state density matrix. enr_fock(dims, excitations, state) Generate the Fock state representation in a excitation-number restricted state space. The dims argument is a list of integers that define the number of quantums states of each component of a composite quantum system, and the excitations specifies the maximum number of excitations for the basis states that are to be included in the state space. The state argument is a tuple of integers that specifies the state (in the number basis representation) for which to generate the Fock state representation. Parameters dims : list A list of the dimensions of each subsystem of a composite quantum system. excitations : integer The maximum number of excitations that are to be included in the state space. state : list of integers The state in the number basis representation. Returns ket : Qobj A Qobj instance that represent a Fock state in the exication-number- restricted state space defined by dims and exciations. Quantum Operators This module contains functions for generating Qobj representation of a variety of commonly occuring quantum operators. create(N, offset=0) Creation (raising) operator. Parameters N : int 152 ## Dimension of Hilbert space. Returns oper : qobj Qobj for raising operator. offset : int (default 0) The lowest number state that is included in the finite number state representation of the operator. Examples >>> create(4) Quantum object: dims = [[4], [4]], Qobj data = [[ 0.00000000+0.j 0.00000000+0.j [ 1.00000000+0.j 0.00000000+0.j [ 0.00000000+0.j 1.41421356+0.j [ 0.00000000+0.j 0.00000000+0.j ## shape = [4, 4], type = oper, isHerm = False 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j 1.73205081+0.j 0.00000000+0.j] 0.00000000+0.j] 0.00000000+0.j] 0.00000000+0.j]] destroy(N, offset=0) Destruction (lowering) operator. Parameters N : int Dimension of Hilbert space. offset : int (default 0) The lowest number state that is included in the finite number state representation of the operator. Returns oper : qobj Qobj for lowering operator. Examples >>> destroy(4) Quantum object: dims = [[4], [4]], Qobj data = [[ 0.00000000+0.j 1.00000000+0.j [ 0.00000000+0.j 0.00000000+0.j [ 0.00000000+0.j 0.00000000+0.j [ 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j 1.41421356+0.j 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j] 0.00000000+0.j] 1.73205081+0.j] 0.00000000+0.j]] ## displace(N, alpha, offset=0) Single-mode displacement operator. Parameters N : int Dimension of Hilbert space. alpha : float/complex Displacement amplitude. offset : int (default 0) The lowest number state that is included in the finite number state representation of the operator. Returns oper : qobj Displacement operator. 
Examples 153 >>> displace(4,0.25) Quantum object: dims = [[4], [4]], shape = [4, 4], type = oper, isHerm = False Qobj data = [[ 0.96923323+0.j -0.24230859+0.j 0.04282883+0.j -0.00626025+0.j] [ 0.24230859+0.j 0.90866411+0.j -0.33183303+0.j 0.07418172+0.j] [ 0.04282883+0.j 0.33183303+0.j 0.84809499+0.j -0.41083747+0.j] [ 0.00626025+0.j 0.07418172+0.j 0.41083747+0.j 0.90866411+0.j]] jmat(j, *args) Higher-order spin operators: Parameters j : float Spin of operator args : str Which operator to return x,y,z,+,-. [x,y,z] Returns jmat : qobj/list qobj for requested spin operator(s). Notes ## If no args input, then returns array of [x,y,z] operators. Examples >>> jmat(1) [ Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 0. 0.70710678 0. ] [ 0.70710678 0. 0.70710678] [ 0. 0.70710678 0. ]] Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 0.+0.j 0.+0.70710678j 0.+0.j ] [ 0.-0.70710678j 0.+0.j 0.+0.70710678j] [ 0.+0.j 0.-0.70710678j 0.+0.j ]] Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 1. 0. 0.] [ 0. 0. 0.] [ 0. 0. -1.]]] num(N, offset=0) Quantum object for number operator. Parameters N : int The dimension of the Hilbert space. offset : int (default 0) The lowest number state that is included in the finite number state representation of the operator. Returns oper: qobj Qobj for number operator. Examples 154 >>> num(4) Quantum object: dims = [[4], [4]], shape = [4, 4], type = oper, isHerm = True Qobj data = [[0 0 0 0] [0 1 0 0] [0 0 2 0] [0 0 0 3]] qeye(N) Identity operator Parameters N : int or list of ints Dimension of Hilbert space. If provided as a list of ints, then the dimension is the product over this list, but the dims property of the new Qobj are set to this list. Returns oper : qobj Identity operator Qobj. Examples >>> qeye(3) Quantum object: dims = [[3], [3]], shape = [3, 3], type = oper, isHerm = True Qobj data = [[ 1. 0. 0.] [ 0. 1. 0.] [ 0. 0. 1.]] identity(N) Identity operator. Alternative name to qeye. Parameters N : int or list of ints Dimension of Hilbert space. If provided as a list of ints, then the dimension is the product over this list, but the dims property of the new Qobj are set to this list. Returns oper : qobj Identity operator Qobj. qutrit_ops() Operators for a three level system (qutrit). Returns opers: array array of qutrit operators. sigmam() Annihilation operator for Pauli spins. Examples >>> sigmam() Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = False Qobj data = [[ 0. 0.] [ 1. 0.]] sigmap() Creation operator for Pauli spins. Examples 155 >>> sigmam() Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = False Qobj data = [[ 0. 1.] [ 0. 0.]] sigmax() Pauli spin 1/2 sigma-x operator Examples >>> sigmax() Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = False Qobj data = [[ 0. 1.] [ 1. 0.]] sigmay() Pauli spin 1/2 sigma-y operator. Examples >>> sigmay() Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = True Qobj data = [[ 0.+0.j 0.-1.j] [ 0.+1.j 0.+0.j]] sigmaz() Pauli spin 1/2 sigma-z operator. Examples >>> sigmaz() Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = True Qobj data = [[ 1. 0.] [ 0. -1.]] squeeze(N, z, offset=0) Single-mode Squeezing operator. Parameters N : int Dimension of hilbert space. z : float/complex Squeezing parameter. 
offset : int (default 0) The lowest number state that is included in the finite number state representation of the operator. Returns oper : qutip.qobj.Qobj Squeezing operator. Examples 156 ## >>> squeeze(4, 0.25) Quantum object: dims = [[4], [4]], Qobj data = [[ 0.98441565+0.j 0.00000000+0.j [ 0.00000000+0.j 0.95349007+0.j [-0.17585742+0.j 0.00000000+0.j [ 0.00000000+0.j -0.30142443+0.j ## shape = [4, 4], type = oper, isHerm = False 0.17585742+0.j 0.00000000+0.j 0.98441565+0.j 0.00000000+0.j 0.00000000+0.j] 0.30142443+0.j] 0.00000000+0.j] 0.95349007+0.j]] squeezing(a1, a2, z) Generalized squeezing operator. () = exp ( ( )) 1 * 1 2 1 2 2 Parameters a1 : qutip.qobj.Qobj Operator 1. a2 : qutip.qobj.Qobj Operator 2. z : float/complex Squeezing parameter. Returns oper : qutip.qobj.Qobj Squeezing operator. phase(N, phi0=0) Single-mode Pegg-Barnett phase operator. Parameters N : int Number of basis states in Hilbert space. phi0 : float Reference phase. Returns oper : qobj Phase operator with respect to reference phase. Notes ## The Pegg-Barnett phase operator is Hermitian on a truncated Hilbert space. enr_destroy(dims, excitations) Generate annilation operators for modes in a excitation-number-restricted state space. For example, consider a system consisting of 4 modes, each with 5 states. The total hilbert space size is 5**4 = 625. If we are only interested in states that contain up to 2 excitations, we only need to include states such as (0, 0, 0, 0) (0, 0, 0, 1) (0, 0, 0, 2) (0, 0, 1, 0) (0, 0, 1, 1) (0, 0, 2, 0) ... This function creates annihilation operators for the 4 modes that act within this state space: a1, a2, a3, a4 = enr_destroy([5, 5, 5, 5], excitations=2) From this point onwards, the annihiltion operators a1, ..., a4 can be used to setup a Hamiltonian, collapse operators and expectation-value operators, etc., following the usual pattern. Parameters dims : list A list of the dimensions of each subsystem of a composite quantum system. excitations : integer The maximum number of excitations that are to be included in the state space. Returns a_ops : list of qobj A list of annihilation operators for each mode in the composite quantum system described by dims. 157 enr_identity(dims, excitations) Generate the identity operator for the excitation-number restricted state space defined by the dims and exciations arguments. See the docstring for enr_fock for a more detailed description of these arguments. Parameters dims : list A list of the dimensions of each subsystem of a composite quantum system. excitations : integer The maximum number of excitations that are to be included in the state space. state : list of integers The state in the number basis representation. Returns op : Qobj A Qobj instance that represent the identity operator in the exication-numberrestricted state space defined by dims and exciations. Random Operators and States This module is a collection of random state and operator generators. The sparsity of the ouput Qobjs is controlled by varing the density parameter. rand_dm(N, density=0.75, pure=False, dims=None) Creates a random NxN density matrix. Parameters N : int Shape of output density matrix. density : float Density between [0,1] of output density matrix. dims : list Dimensions of quantum object. Used for specifying tensor structure. Default is dims=[[N],[N]]. Returns oper : qobj NxN density matrix quantum operator. Notes For small density matrices., choosing a low density will result in an error as no diagonal elements will be generated such that () = 1. 
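A minimal sketch of the random-object generators in this module (rand_dm above and the related generators described just below); the dimension and density values are arbitrary choices for illustration.

import numpy as np
from qutip import rand_dm, rand_herm, rand_ket, rand_unitary, qeye

rho = rand_dm(4, density=0.75)   # random 4x4 density matrix (unit trace, positive)
H   = rand_herm(4, density=0.5)  # random sparse Hermitian operator
psi = rand_ket(4)                # random normalized ket vector
U   = rand_unitary(4)            # random unitary (matrix exponential of a random Hermitian)

print(abs(rho.tr() - 1.0) < 1e-12)                        # density matrix is normalized
print(H.isherm)                                           # True
print(np.allclose((U.dag() * U).full(), qeye(4).full()))  # unitarity check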
rand_herm(N, density=0.75, dims=None) Creates a random NxN sparse Hermitian quantum object. Uses = + + where is a randomly generated quantum operator with a given density. Parameters N : int Shape of output quantum operator. density : float Density between [0,1] of output Hermitian operator. dims : list Dimensions of quantum object. Used for specifying tensor structure. Default is dims=[[N],[N]]. Returns oper : qobj NxN Hermitian quantum operator. rand_ket(N, density=1, dims=None) Creates a random Nx1 sparse ket vector. Parameters N : int Number of rows for output quantum operator. density : float Density between [0,1] of output ket state. 158 dims : list Dimensions of quantum object. Used for specifying tensor structure. Default is dims=[[N],[1]]. Returns oper : qobj Nx1 ket state quantum operator. rand_unitary(N, density=0.75, dims=None) Creates a random NxN sparse unitary quantum object. Uses exp() where H is a randomly generated Hermitian operator. Parameters N : int Shape of output quantum operator. density : float Density between [0,1] of output Unitary operator. dims : list Dimensions of quantum object. Used for specifying tensor structure. Default is dims=[[N],[N]]. Returns oper : qobj NxN Unitary quantum operator. Three-Level Atoms This module provides functions that are useful for simulating the three level atom with QuTiP. A three level atom (qutrit) has three states, which are linked by dipole transitions so that 1 <-> 2 <-> 3. Depending on there relative energies they are in the ladder, lambda or vee configuration. The structure of the relevant operators is the same for any of the three configurations: Lambda: -------|three> | | | -------|two> | | | -------|one> Vee: |two> ------/ \ / \ / \ / \ / \ / / ------|one> \ -------|three> |one> ------\ \ \ \ |three> ------/ / / / / / \ / ------|two> References Notes ## Contributed by Markus Baden, Oct. 07, 2011 three_level_basis() Basis states for a three level atom. Returns states : array array of three level atom basis vectors. three_level_ops() Operators for a three level system (qutrit) Returns ops : array array of three level operators. 159 ## Superoperators and Liouvillians operator_to_vector(op) Create a vector representation of a quantum operator given the matrix representation. vector_to_operator(op) Create a matrix representation given a quantum operator in vector form. liouvillian(H, c_ops=[], data_only=False, chi=None) Assembles the Liouvillian superoperator from a Hamiltonian and a list of collapse operators. Like liouvillian, but with an experimental implementation which avoids creating extra Qobj instances, which can be Parameters H : qobj System Hamiltonian. c_ops : array_like A list or array of collapse operators. Returns L : qobj Liouvillian superoperator. spost(A) Superoperator formed from post-multiplication by operator A Parameters A : qobj Quantum operator for post multiplication. Returns super : qobj Superoperator formed from input qauntum object. spre(A) Superoperator formed from pre-multiplication by operator A. Parameters A : qobj Quantum operator for pre-multiplication. Returns super :qobj Superoperator formed from input quantum object. sprepost(A, B) Superoperator formed from pre-multiplication by operator A and post- multiplication of operator B. Parameters A : Qobj Quantum operator for pre-multiplication. B : Qobj Quantum operator for post-multiplication. Returns super : Qobj Superoperator formed from input quantum objects. 
Lindblad dissipator (generalized) for a single pair of collapse operators (a, b), or for a single collapse operator (a) when b is not specified: 1 1 [, ] = 2 2 Parameters a : qobj Left part of collapse operator. b : qobj (optional) Right part of collapse operator. If not specified, b defaults to a. Returns D : qobj 160 Superoperator Representations This module implements transformations between superoperator representations, including supermatrix, Kraus, Choi and Chi (process) matrix formalisms. to_choi(q_oper) Converts a Qobj representing a quantum map to the Choi representation, such that the trace of the returned operator is equal to the dimension of the system. Parameters q_oper : Qobj Superoperator to be converted to Choi representation. Returns choi : Qobj A quantum object representing the same map as q_oper, such that choi.superrep == "choi". Raises TypeError: if the given quantum object is not a map, or cannot be converted to Choi representation. to_super(q_oper) Converts a Qobj representing a quantum map to the supermatrix (Liouville) representation. Parameters q_oper : Qobj Superoperator to be converted to supermatrix representation. Returns superop : Qobj A quantum object representing the same map as q_oper, such that superop.superrep == "super". Raises TypeError: if the given quantum object is not a map, or cannot be converted to supermatrix representation. to_kraus(q_oper) Converts a Qobj representing a quantum map to a list of quantum objects, each representing an operator in the Kraus decomposition of the given map. Parameters q_oper : Qobj Superoperator to be converted to Kraus representation. Returns kraus_ops : list of Qobj A list of quantum objects, each representing a Kraus operator in the decomposition of q_oper. Raises TypeError: if the given quantum object is not a map, or cannot be decomposed into Kraus operators. ## Functions acting on states and operators Tensor Module for the creation of composite quantum objects via the tensor product. tensor(*args) Calculates the tensor product of input operators. Parameters args : array_like list or array of quantum objects for tensor product. Returns obj : qobj A composite quantum object. Examples 161 ## >>> tensor([sigmax(), sigmax()]) Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = True Qobj data = [[ 0.+0.j 0.+0.j 0.+0.j 1.+0.j] [ 0.+0.j 0.+0.j 1.+0.j 0.+0.j] [ 0.+0.j 1.+0.j 0.+0.j 0.+0.j] [ 1.+0.j 0.+0.j 0.+0.j 0.+0.j]] super_tensor(*args) Calculates the tensor product of input superoperators, by tensoring together the underlying Hilbert spaces on which each vectorized operator acts. Parameters args : array_like list or array of quantum objects with type="super". Returns obj : qobj A composite quantum object. composite(*args) Given two or more operators, kets or bras, returns the Qobj corresponding to a composite system over each argument. For ordinary operators and vectors, this is the tensor product, while for superoperators and vectorized operators, this is the column-reshuffled tensor product. If a mix of Qobjs supported on Hilbert and Liouville spaces are passed in, the former are promoted. Ordinary operators are assumed to be unitaries, and are promoted using to_super, while kets and bras are promoted by taking their projectors and using operator_to_vector(ket2dm(arg)). tensor_contract(qobj, *pairs) Contracts a qobj along one or more index pairs. Note that this uses dense representations and thus should not be used for very large Qobjs. 
Parameters pairs : tuple One or more tuples (i, j) indicating that the i and j dimensions of the original qobj should be contracted. Returns cqobj : Qobj The original Qobj with all named index pairs contracted away. Expectation Values expect(oper, state) Calculates the expectation value for operator(s) and state(s). Parameters oper : qobj/array-like A single or a list or operators for expectation value. state : qobj/array-like A single or a list of quantum states or density matrices. Returns expt : float/complex/array-like Expectation value. real if oper is Hermitian, complex otherwise. A (nested) array of expectaction values of state or operator are arrays. Examples ## >>> expect(num(4), basis(4, 3)) 3 variance(oper, state) Variance of an operator for the given state vector or density matrix. Parameters oper : qobj Operator for expectation value. state : qobj/list 162 ## A single or list of quantum states or density matrices.. Returns var : float Variance of operator oper for given state. Partial Transpose Return the partial transpose of a Qobj instance rho, where mask is an array/list with length that equals the number of components of rho (that is, the length of rho.dims[0]), and the values in mask indicates whether or not the corresponding subsystem is to be transposed. The elements in mask can be boolean or integers 0 or 1, where True/1 indicates that the corresponding subsystem should be tranposed. Parameters rho : qutip.qobj A density matrix. A mask that selects which subsystems should be transposed. method : str choice of method, dense or sparse. The default method is dense. The sparse implementation can be faster for large and sparse systems (hundreds of quantum states). Returns rho_pr: qutip.qobj A density matrix with the selected subsystems transposed. Entropy Functions concurrence(rho) Calculate the concurrence entanglement measure for a two-qubit state. Parameters state : qobj Ket, bra, or density matrix for a two-qubit state. Returns concur : float Concurrence References [R2] entropy_conditional(rho, selB, base=2.718281828459045, sparse=False) Calculates the conditional entropy (|) = (, ) () of a slected density matrix component. Parameters rho : qobj Density matrix of composite object selB : int/list Selected components for density matrix B base : {e,2} Base of logarithm. sparse : {False,True} Use sparse eigensolver. Returns ent_cond : float Value of conditional entropy entropy_linear(rho) Linear entropy of a density matrix. Parameters rho : qobj sensity matrix or ket/bra vector. Returns entropy : float Linear entropy of rho. 163 Examples >>> rho=0.5*fock_dm(2,0)+0.5*fock_dm(2,1) >>> entropy_linear(rho) 0.5 ## entropy_mutual(rho, selA, selB, base=2.718281828459045, sparse=False) Calculates the mutual information S(A:B) between selection components of a system density matrix. Parameters rho : qobj Density matrix for composite quantum systems selA : int/list int or list of first selected density matrix components. selB : int/list int or list of second selected density matrix components. base : {e,2} Base of logarithm. sparse : {False,True} Use sparse eigensolver. Returns ent_mut : float Mutual information between selected components. entropy_vn(rho, base=2.718281828459045, sparse=False) Von-Neumann entropy of density matrix Parameters rho : qobj Density matrix. base : {e,2} Base of logarithm. sparse : {False,True} Use sparse eigensolver. Returns entropy : float Von-Neumann entropy of rho. 
Examples >>> rho=0.5*fock_dm(2,0)+0.5*fock_dm(2,1) >>> entropy_vn(rho,2) 1.0 ## Density Matrix Metrics This module contains a collection of functions for calculating metrics (distance measures) between states and operators. fidelity(A, B) Calculates the fidelity (pseudo-metric) between two density matrices. See: Nielsen & Chuang, Quantum Computation and Quantum Information Parameters A : qobj Density matrix or state vector. B : qobj Density matrix or state vector with same dimensions as A. Returns fid : float Fidelity pseudo-metric between A and B. 164 Examples >>> x = fock_dm(5,3) >>> y = coherent_dm(5,1) >>> fidelity(x,y) 0.24104350624628332 ## tracedist(A, B, sparse=False, tol=0) Calculates the trace distance between two density matrices.. See: Nielsen & Chuang, Quantum Computation and Quantum Information Parameters A : qobj Density matrix or state vector. B : qobj Density matrix or state vector with same dimensions as A. tol : float Tolerance used by sparse eigensolver, if used. (0=Machine precision) sparse : {False, True} Use sparse eigensolver. Returns tracedist : float Trace distance between A and B. Examples >>> x=fock_dm(5,3) >>> y=coherent_dm(5,1) >>> tracedist(x,y) 0.9705143161472971 bures_dist(A, B) Returns the Bures distance between two density matrices A & B. The Bures distance ranges from 0, for states with unit fidelity, to sqrt(2). Parameters A : qobj Density matrix or state vector. B : qobj Density matrix or state vector with same dimensions as A. Returns dist : float Bures distance between density matrices. bures_angle(A, B) Returns the Bures Angle between two density matrices A & B. The Bures angle ranges from 0, for states with unit fidelity, to pi/2. Parameters A : qobj Density matrix or state vector. B : qobj Density matrix or state vector with same dimensions as A. Returns angle : float Bures angle between density matrices. hilbert_dist(A, B) Returns the Hilbert-Schmidt distance between two density matrices A & B. Parameters A : qobj Density matrix or state vector. 165 B : qobj Density matrix or state vector with same dimensions as A. Returns dist : float Hilbert-Schmidt distance between density matrices. Notes ## See V. Vedral and M. B. Plenio, Phys. Rev. A 57, 1619 (1998). average_gate_fidelity(oper) Given a Qobj representing the supermatrix form of a map, returns the average gate fidelity (pseudo-metric) of that map. Parameters A : Qobj Quantum object representing a superoperator. Returns fid : float Fidelity pseudo-metric between A and the identity superoperator. process_fidelity(U1, U2, normalize=True) Calculate the process fidelity given two process operators. Continous Variables This module contains a collection functions for calculating continuous variable quantities from fock-basis representation of the state of multi-mode fields. correlation_matrix(basis, rho=None) Given a basis set of operators {} , calculate the correlation matrix: = Parameters basis : list of qutip.qobj.Qobj List of operators that defines the basis for the correlation matrix. rho : qutip.qobj.Qobj Density matrix for which to calculate the correlation matrix. If rho is None, then a matrix of correlation matrix operators is returned instead of expectation values of those operators. Returns corr_mat: array A 2-dimensional array of correlation values or operators. 
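A short usage sketch for correlation_matrix (a single-mode example; the operator basis and state below are arbitrary illustrative choices, and the helper is assumed to be available from the top-level qutip namespace):

import numpy as np
from qutip import destroy, coherent_dm, correlation_matrix

N = 10                        # Fock-space truncation
a = destroy(N)                # annihilation operator for a single mode
rho = coherent_dm(N, 0.8)     # coherent state with amplitude 0.8

# Correlation matrix <A_m A_n> for the operator basis {a, a.dag()}.
basis = [a, a.dag()]
C = correlation_matrix(basis, rho)
print(np.round(C, 3))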
covariance_matrix(basis, rho, symmetrized=True)
Given a basis set of operators $\{a\}_n$, calculate the covariance matrix:

$V_{mn} = \frac{1}{2}\langle a_m a_n + a_n a_m \rangle - \langle a_m \rangle \langle a_n \rangle$

or, if the optional argument symmetrized=False is given,

$V_{mn} = \langle a_m a_n \rangle - \langle a_m \rangle \langle a_n \rangle$

Parameters
basis : list of qutip.qobj.Qobj List of operators that defines the basis for the covariance matrix.
rho : qutip.qobj.Qobj Density matrix for which to calculate the covariance matrix.
symmetrized : bool Flag indicating whether the symmetrized (default) or non-symmetrized correlation matrix is to be calculated.
Returns
corr_mat : array A 2-dimensional array of covariance values.

correlation_matrix_field(a1, a2, rho=None)
Calculate the correlation matrix for given field operators $a_1$ and $a_2$. If a density matrix is given the expectation values are calculated, otherwise a matrix with operators is returned.
Parameters
a1 : qutip.qobj.Qobj Field operator for mode 1.
a2 : qutip.qobj.Qobj Field operator for mode 2.
rho : qutip.qobj.Qobj Density matrix for which to calculate the covariance matrix.
Returns
cov_mat : array of complex numbers or qutip.qobj.Qobj A 2-dimensional array of covariance values, or, if rho is None, a matrix of operators.

correlation_matrix_quadrature(a1, a2, rho=None)
Calculate the quadrature correlation matrix with given field operators $a_1$ and $a_2$. If a density matrix is given the expectation values are calculated, otherwise a matrix with operators is returned.
Parameters
a1 : qutip.qobj.Qobj Field operator for mode 1.
a2 : qutip.qobj.Qobj Field operator for mode 2.
rho : qutip.qobj.Qobj Density matrix for which to calculate the covariance matrix.
Returns
corr_mat : array of complex numbers or qutip.qobj.Qobj A 2-dimensional array of covariance values for the field quadratures, or, if rho is None, a matrix of operators.

wigner_covariance_matrix(a1=None, a2=None, R=None, rho=None)
Calculate the Wigner covariance matrix $V_{ij} = \frac{1}{2}(R_{ij} + R_{ji})$, given the quadrature correlation matrix $R_{ij} = \langle R_i R_j \rangle$, where $R = (q_1, p_1, q_2, p_2)$ is the vector with quadrature operators for the two modes. Alternatively, if R = None, and if annihilation operators a1 and a2 for the two modes are supplied instead, the quadrature correlation matrix is constructed from the annihilation operators before the covariance matrix is calculated.
Parameters
a1 : qutip.qobj.Qobj Field operator for mode 1.
a2 : qutip.qobj.Qobj Field operator for mode 2.
R : array The quadrature correlation matrix.
rho : qutip.qobj.Qobj Density matrix for which to calculate the covariance matrix.
Returns
cov_mat : array A 2-dimensional array of covariance values.

logarithmic_negativity(V)
Calculate the logarithmic negativity given the symmetrized covariance matrix, see qutip.continuous_variables.covariance_matrix. Note that the two-mode field state that is described by V must be Gaussian for this function to be applicable.
Parameters
V : 2d array The covariance matrix.
Returns
N : float The logarithmic negativity for the two-mode Gaussian state that is described by the Wigner covariance matrix V.

Dynamics and Time-Evolution

Schrödinger Equation

This module provides solvers for the unitary Schrödinger equation.

sesolve(H, rho0, tlist, e_ops, args={}, options=None, progress_bar=<qutip.ui.progressbar.BaseProgressBar object at 0x105876c90>)
Schrödinger equation evolution of a state vector for a given Hamiltonian. Evolve the state vector or density matrix (rho0) using a given Hamiltonian (H), by integrating the set of ordinary differential equations that define the system. The output is either the state vector at arbitrary points in time (tlist), or the expectation values of the supplied operators (e_ops).
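A minimal sesolve sketch for a single qubit (the Hamiltonian, initial state and time grid below are illustrative choices, not prescribed by the reference):

import numpy as np
from qutip import basis, sigmax, sigmaz, sesolve

H = 2 * np.pi * 0.1 * sigmax()        # simple qubit Hamiltonian
psi0 = basis(2, 0)                    # initial state |0>
tlist = np.linspace(0.0, 10.0, 100)

# Unitary evolution, returning <sigma_z>(t).
result = sesolve(H, psi0, tlist, [sigmaz()])
print(result.expect[0][:5])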
If e_ops is a callback function, it is invoked for each time in tlist with time and the state as arguments, and the function does not use any return values. Parameters H : qutip.qobj system Hamiltonian, or a callback function for time-dependent Hamiltonians. rho0 : qutip.qobj initial density matrix or state vector (ket). tlist : list / array list of times for . e_ops : list of qutip.qobj / callback function single single operator or list of operators for which to evaluate expectation values. args : dictionary dictionary of parameters for time-dependent Hamiltonians and collapse operators. options : qutip.Qdeoptions with options for the ODE solver. Returns output: qutip.solver An instance of the class qutip.solver, which contains either an array of expectation values for the times specified by tlist, or an array or state vectors or density matrices corresponding to the times in tlist [if e_ops is an empty list], or nothing if a callback function was given inplace of operators for which to calculate the expectation values. Master Equation This module provides solvers for the Lindblad master equation and von Neumann equation. mesolve(H, rho0, tlist, c_ops, e_ops, args={}, options=None, progress_bar=None) Master equation evolution of a density matrix for a given Hamiltonian and set of collapse operators, or a Liouvillian. Evolve the state vector or density matrix (rho0) using a given Hamiltonian (H) and an [optional] set of collapse operators (c_ops), by integrating the set of ordinary differential equations that define the system. In the absence of collapse operators the system is evolved according to the unitary evolution of the Hamiltonian. The output is either the state vector at arbitrary points in time (tlist), or the expectation values of the supplied operators (e_ops). If e_ops is a callback function, it is invoked for each time in tlist with time and the state as arguments, and the function does not use any return values. If either H or the Qobj elements in c_ops are superoperators, they will be treated as direct contributions to the total system Liouvillian. This allows to solve master equations that are not on standard Lindblad form by passing a custom Liouvillian in place of either the H or c_ops elements. Time-dependent operators For time-dependent problems, H and c_ops can be callback functions that takes two arguments, time and args, and returns the Hamiltonian or Liouvillian for the system at that point in time (callback format). Alternatively, H and c_ops can be a specified in a nested-list format where each element in the list is a list of length 2, containing an operator (qutip.qobj) at the first element and where the second element is either a string (list string format), a callback function (list callback format) that evaluates to the time-dependent coefficient for the corresponding operator, or a NumPy array (list array format) which specifies the value of the coefficient to the corresponding operator for each value of t in tlist. 168 Examples H = [[H0, sin(w*t)], [H1, sin(2*w*t)]] H = [[H0, f0_t], [H1, f1_t]] where f0_t and f1_t are python functions with signature f_t(t, args). H = [[H0, np.sin(w*tlist)], [H1, np.sin(2*w*tlist)]] In the list string format and list callback format, the string expression and the callback function must evaluate to a real or complex number (coefficient for the corresponding operator). In all cases of time-dependent operators, args is a dictionary of parameters that is used when evaluating operators. 
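A minimal sketch of the list callback format described above, for a driven, lossy cavity (operators, rates and the drive function are illustrative; the equivalent list string format would replace the Python function with the string 'A * cos(w*t)'):

import numpy as np
from qutip import destroy, basis, mesolve

N = 10
a = destroy(N)
H0 = a.dag() * a                      # bare cavity Hamiltonian
H1 = a + a.dag()                      # drive operator

def drive_coeff(t, args):
    # Time-dependent coefficient in the list callback format.
    return args['A'] * np.cos(args['w'] * t)

H = [H0, [H1, drive_coeff]]
psi0 = basis(N, 0)
tlist = np.linspace(0.0, 10.0, 200)
c_ops = [np.sqrt(0.1) * a]            # photon loss at rate 0.1

result = mesolve(H, psi0, tlist, c_ops, [a.dag() * a],
                 args={'A': 0.25, 'w': 1.0})
print(result.expect[0][-1])           # final mean photon number

The same args dictionary supplies the values of A and w used by the coefficient function.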
It is passed to the callback functions as second argument. Additional options to mesolve can be set via the options argument, which should be an instance of qutip.solver.Options. Many ODE integration options can be set this way, and the store_states and store_final_state options can be used to store states even though expectation values are requested via the e_ops argument. Note: If an element in the list-specification of the Hamiltonian or the list of collapse operators are in superoperator form it will be added to the total Liouvillian of the problem with out further transformation. This allows for using mesolve for solving master equations that are not on standard Lindblad form. Note: On using callback function: mesolve transforms all qutip.qobj objects to sparse matrices before handing the problem to the integrator function. In order for your callback function to work correctly, pass all qutip.qobj objects that are used in constructing the Hamiltonian via args. mesolve will check for qutip.qobj in args and handle the conversion to sparse matrices. All other qutip.qobj objects that are not passed via args will be passed on to the integrator in scipy which will raise an NotImplemented exception. Parameters H : qutip.Qobj System Hamiltonian, or a callback function for time-dependent Hamiltonians, or alternatively a system Liouvillian. rho0 : qutip.Qobj initial density matrix or state vector (ket). tlist : list / array list of times for . c_ops : list of qutip.Qobj single collapse operator, or list of collapse operators, or a list of Liouvillian superoperators. e_ops : list of qutip.Qobj / callback function single single operator or list of operators for which to evaluate expectation values. args : dictionary dictionary of parameters for time-dependent Hamiltonians and collapse operators. options : qutip.Options with options for the solver. progress_bar: BaseProgressBar Optional instance of BaseProgressBar, or a subclass thereof, for showing the progress of the simulation. Returns result: qutip.Result An instance of the class qutip.Result, which contains either an array result.expect of expectation values for the times specified by tlist, or an array result.states of state vectors or density matrices corresponding to the times in tlist [if e_ops is an empty list], or nothing if a callback function was given in place of operators for which to calculate the expectation values. 169 ## Monte Carlo Evolution mcsolve(H, psi0, tlist, c_ops, e_ops, ntraj=None, args={}, options=None, progress_bar=True, map_func=None, map_kwargs=None) Monte Carlo evolution of a state vector | for a given Hamiltonian and sets of collapse operators, and possibly, operators for calculating expectation values. Options for the underlying ODE solver are given by the Options class. mcsolve supports time-dependent Hamiltonians and collapse operators using either Python functions of strings to represent time-dependent coefficients. Note that, the system Hamiltonian MUST have at least one constant term. As an example of a time-dependent problem, consider a Hamiltonian with two terms H0 and H1, where H1 is time-dependent with coefficient sin(w*t), and collapse operators C0 and C1, where C1 is timedependent with coeffcient exp(-a*t). Here, w and a are constant arguments with values W and A. Using the Python function time-dependent format requires two Python functions, one for each collapse coefficient. 
Therefore, this problem could be expressed as: def H1_coeff(t,args): return sin(args['w']*t) def C1_coeff(t,args): return exp(-args['a']*t) H = [H0, [H1, H1_coeff]] c_ops = [C0, [C1, C1_coeff]] args={'a': A, 'w': W} ## or in String (Cython) format we could write: H = [H0, [H1, 'sin(w*t)']] c_ops = [C0, [C1, 'exp(-a*t)']] args={'a': A, 'w': W} Constant terms are preferably placed first in the Hamiltonian and collapse operator lists. Parameters H : qutip.Qobj System Hamiltonian. psi0 : qutip.Qobj Initial state vector tlist : array_like Times at which results are recorded. ntraj : int Number of trajectories to run. c_ops : array_like single collapse operator or list or array of collapse operators. e_ops : array_like single operator or list or array of operators for calculating expectation values. args : dict Arguments for time-dependent Hamiltonian and collapse operator terms. options : Options Instance of ODE solver options. progress_bar: BaseProgressBar Optional instance of BaseProgressBar, or a subclass thereof, for showing the progress of the simulation. Set to None to disable the progress bar. 170 map_func: function A map function for managing the calls to the single-trajactory solver. map_kwargs: dictionary Optional keyword arguments to the map_func function. Returns results : qutip.solver.Result Object storing all results from the simulation. Note: It is possible to reuse the random number seeds from a previous run of the mcsolver by passing the output Result object seeds via the Options class, i.e. Options(seeds=prev_result.seeds). mcsolve_f90(H, psi0, tlist, c_ops, e_ops, ntraj=None, options=<qutip.solver.Options instance at 0x1048d6b48>, sparse_dms=True, serial=False, ptrace_sel=[], calc_entropy=False) Monte-Carlo wave function solver with fortran 90 backend. Usage is identical to qutip.mcsolve, for problems without explicit time-dependence, and with some optional input: Parameters H : qobj System Hamiltonian. psi0 : qobj Initial state vector tlist : array_like Times at which results are recorded. ntraj : int Number of trajectories to run. c_ops : array_like list or array of collapse operators. e_ops : array_like list or array of operators for calculating expectation values. options : Options Instance of solver options. sparse_dms : boolean If averaged density matrices are returned, they will be stored as sparse (Compressed Row Format) matrices during computation if sparse_dms = True (default), and dense matrices otherwise. Dense matrices might be preferable for smaller systems. serial : boolean If True (default is False) the solver will not make use of the multiprocessing module, and simply run in serial. ptrace_sel: list This optional argument specifies a list of components to keep when returning a partially traced density matrix. This can be convenient for large systems where memory becomes a problem, but you are only interested in parts of the density matrix. calc_entropy : boolean If ptrace_sel is specified, calc_entropy=True will have the solver return the averaged entropy over trajectories in results.entropy. This can be interpreted as a measure of entanglement. See Phys. Rev. Lett. 93, 120408 (2004), Phys. Rev. A 86, 022310 (2012). Returns results : Result Object storing all results from simulation. Exponential Series essolve(H, rho0, tlist, c_op_list, e_ops) Evolution of a state vector or density matrix (rho0) for a given Hamiltonian (H) and set of collapse operators (c_op_list), by expressing the ODE as an exponential series. 
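A minimal mcsolve sketch for photon decay in a cavity (system size, rates and trajectory count are illustrative choices):

import numpy as np
from qutip import destroy, basis, mcsolve

N = 10
a = destroy(N)
H = a.dag() * a                       # constant cavity Hamiltonian
psi0 = basis(N, 5)                    # start with five photons
tlist = np.linspace(0.0, 10.0, 100)
c_ops = [np.sqrt(0.2) * a]            # photon loss

# Average <n>(t) over 250 quantum trajectories.
result = mcsolve(H, psi0, tlist, c_ops, [a.dag() * a], ntraj=250)
print(result.expect[0][-1])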
The output is either the state vector at arbitrary points in time (tlist), or the expectation values of the supplied operators (e_ops). 171 Parameters H : qobj/function_type System Hamiltonian. rho0 : qutip.qobj Initial state density matrix. tlist : list/array list of times for . c_op_list : list of qutip.qobj list of qutip.qobj collapse operators. e_ops : list of qutip.qobj list of qutip.qobj operators for which to evaluate expectation values. Returns expt_array : array Expectation values of wavefunctions/density matrices for the times specified in tlist. Note: This solver does not support time-dependent Hamiltonians. ode2es(L, rho0) Creates an exponential series that describes the time evolution for the initial density matrix (or state vector) rho0, given the Liouvillian (or Hamiltonian) L. Parameters L : qobj Liouvillian of the system. rho0 : qobj Initial state vector or density matrix. Returns eseries : qutip.eseries eseries represention of the system dynamics. Bloch-Redfield Master Equation brmesolve(H, psi0, tlist, a_ops, e_ops=[], spectra_cb=[], c_ops=None, tions=<qutip.solver.Options instance at 0x105963320>) Solve the dynamics for a system using the Bloch-Redfield master equation. args={}, op- ## Note: This solver does not currently support time-dependent Hamiltonians. Parameters H : qutip.Qobj System Hamiltonian. rho0 / psi0: :class:qutip.Qobj Initial density matrix or state vector (ket). tlist : list / array List of times for . a_ops : list of qutip.qobj List of system operators that couple to bath degrees of freedom. e_ops : list of qutip.qobj / callback function List of operators for which to evaluate expectation values. c_ops : list of qutip.qobj List of system collapse operators. args : dictionary Placeholder for future implementation, kept for API consistency. options : qutip.solver.Options Options for the solver. Returns result: qutip.solver.Result 172 ## An instance of the class qutip.solver.Result, which contains either an array of expectation values, for operators given in e_ops, or a list of states for the times specified by tlist. bloch_redfield_tensor(H, a_ops, spectra_cb, c_ops=None, use_secular=True) Calculate the Bloch-Redfield tensor for a system given a set of operators and corresponding spectral functions that describes the systems coupling to its environment. Note: This tensor generation requires a time-independent Hamiltonian. Parameters H : qutip.qobj System Hamiltonian. a_ops : list of qutip.qobj List of system operators that couple to the environment. spectra_cb : list of callback functions List of callback functions that evaluate the noise power spectrum at a given frequency. c_ops : list of qutip.qobj List of system collapse operators. use_secular : bool Flag (True of False) that indicates if the secular approximation should be used. Returns R, kets: qutip.Qobj, list of qutip.Qobj R is the Bloch-Redfield tensor and kets is a list eigenstates of the Hamiltonian. bloch_redfield_solve(R, ekets, rho0, tlist, e_ops=[], options=None) Evolve the ODEs defined by Bloch-Redfield master equation. The Bloch-Redfield tensor can be calculated by the function bloch_redfield_tensor. Parameters R : qutip.qobj Bloch-Redfield tensor. ekets : array of qutip.qobj Array of kets that make up a basis tranformation for the eigenbasis. rho0 : qutip.qobj Initial density matrix. tlist : list / array List of times for . e_ops : list of qutip.qobj / callback function List of operators for which to evaluate expectation values. options : qutip.Qdeoptions Options for the ODE solver. 
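A minimal brmesolve sketch for a two-level system coupled to a bath through sigma_x with a flat positive-frequency spectrum (all numerical values are illustrative):

import numpy as np
from qutip import sigmax, sigmaz, basis, brmesolve

delta = 0.2 * 2 * np.pi               # tunneling term
eps0 = 1.0 * 2 * np.pi                # bias term
gamma = 0.05                          # coupling strength to the bath

H = -0.5 * delta * sigmax() - 0.5 * eps0 * sigmaz()
psi0 = basis(2, 0)
tlist = np.linspace(0.0, 15.0, 200)

# One coupling operator and a flat spectrum for positive frequencies.
a_ops = [sigmax()]
spectra_cb = [lambda w: gamma * (w >= 0)]

result = brmesolve(H, psi0, tlist, a_ops, e_ops=[sigmaz()], spectra_cb=spectra_cb)
print(result.expect[0][-1])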
Returns output: qutip.solver An instance of the class qutip.solver, which contains either an array of expectation values for the times specified by tlist. Floquet States and Floquet-Markov Master Equation fmmesolve(H, rho0, tlist, c_ops, e_ops=[], spectra_cb=[], T=None, args={}, options=<qutip.solver.Options instance at 0x105963290>, floquet_basis=True, kmax=5) Solve the dynamics for the system using the Floquet-Markov master equation. Note: This solver currently does not support multiple collapse operators. Parameters H : qutip.qobj system Hamiltonian. rho0 / psi0 : qutip.qobj 173 ## initial density matrix or state vector (ket). tlist : list / array list of times for . c_ops : list of qutip.qobj list of collapse operators. e_ops : list of qutip.qobj / callback function list of operators for which to evaluate expectation values. spectra_cb : list callback functions List of callback functions that compute the noise power spectrum as a function of frequency for the collapse operators in c_ops. T : float The period of the time-dependence of the hamiltonian. The default value None indicates that the tlist spans a single period of the driving. args : dictionary dictionary of parameters for time-dependent Hamiltonians and collapse operators. This dictionary should also contain an entry w_th, which is the temperature of the environment (if finite) in the energy/frequency units of the Hamiltonian. For example, if the Hamiltonian written in units of 2pi GHz, and the temperature is given in K, use the following conversion >>> >>> >>> >>> ## temperature = 25e-3 # unit K h = 6.626e-34 kB = 1.38e-23 args['w_th'] = temperature * (kB / h) * 2 * pi * 1e-9 options : qutip.solver options for the ODE solver. k_max : int The truncation of the number of sidebands (default 5). Returns output : qutip.solver An instance of the class qutip.solver, which contains either an array of expectation values for the times specified by tlist. floquet_modes(H, T, args=None, sort=False, U=None) Calculate the initial Floquet modes Phi_alpha(0) for a driven system with period T. Returns a list of qutip.qobj instances representing the Floquet modes and a list of corresponding quasienergies, sorted by increasing quasienergy in the interval [-pi/T, pi/T]. The optional parameter sort decides if the output is to be sorted in increasing quasienergies or not. Parameters H : qutip.qobj system Hamiltonian, time-dependent with period T args : dictionary dictionary with variables required to evaluate H T : float The period of the time-dependence of the hamiltonian. The default value None indicates that the tlist spans a single period of the driving. U : qutip.qobj The propagator for the time-dependent Hamiltonian with period T. If U is None (default), it will be calculated from the Hamiltonian H using qutip.propagator.propagator. Returns output : list of kets, list of quasi energies Two lists: the Floquet modes as kets and the quasi energies. floquet_modes_t(f_modes_0, f_energies, t, H, T, args=None) Calculate the Floquet modes at times tlist Phi_alpha(tlist) propagting the initial Floquet modes Phi_alpha(0) Parameters f_modes_0 : list of qutip.qobj (kets) 174 Floquet modes at f_energies : list Floquet energies. t : float The time at which to evaluate the floquet modes. H : qutip.qobj system Hamiltonian, time-dependent with period T args : dictionary dictionary with variables required to evaluate H T : float The period of the time-dependence of the hamiltonian. 
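A minimal floquet_modes sketch for a sinusoidally driven two-level system (drive amplitude, frequency and the list-function Hamiltonian format are illustrative choices):

import numpy as np
from qutip import sigmax, sigmaz, floquet_modes

delta = 0.2 * 2 * np.pi
eps0 = 1.0 * 2 * np.pi
A = 2.5 * 2 * np.pi                    # drive amplitude
omega = 1.0 * 2 * np.pi                # drive frequency
T = 2 * np.pi / omega                  # driving period

H0 = -0.5 * delta * sigmax() - 0.5 * eps0 * sigmaz()
H1 = 0.5 * A * sigmaz()
H = [H0, [H1, lambda t, args: np.sin(args['w'] * t)]]
args = {'w': omega}

# Floquet modes and quasienergies at t = 0.
f_modes, f_energies = floquet_modes(H, T, args)
print(f_energies)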
Returns output : list of kets The Floquet modes as kets at time floquet_modes_table(f_modes_0, f_energies, tlist, H, T, args=None) Pre-calculate the Floquet modes for a range of times spanning the floquet period. Can later be used as a table to look up the floquet modes for any time. Parameters f_modes_0 : list of qutip.qobj (kets) Floquet modes at f_energies : list Floquet energies. tlist : array The list of times at which to evaluate the floquet modes. H : qutip.qobj system Hamiltonian, time-dependent with period T T : float The period of the time-dependence of the hamiltonian. args : dictionary dictionary with variables required to evaluate H Returns output : nested list A nested list of Floquet modes as kets for each time in tlist floquet_modes_t_lookup(f_modes_table_t, t, T) Lookup the floquet mode at time t in the pre-calculated table of floquet modes in the first period of the time-dependence. Parameters f_modes_table_t : nested list of qutip.qobj (kets) A lookup-table of Floquet modes at times precalculated by qutip.floquet.floquet_modes_table. t : float The time for which to evaluate the Floquet modes. T : float The period of the time-dependence of the hamiltonian. Returns output : nested list A list of Floquet modes as kets for the time that most closely matching the time t in the supplied table of Floquet modes. floquet_states_t(f_modes_0, f_energies, t, H, T, args=None) Evaluate the floquet states at time t given the initial Floquet modes. Parameters f_modes_t : list of qutip.qobj (kets) A list of initial Floquet modes (for time = 0). f_energies : array The Floquet energies. 175 t : float The time for which to evaluate the Floquet states. H : qutip.qobj System Hamiltonian, time-dependent with period T. T : float The period of the time-dependence of the hamiltonian. args : dictionary Dictionary with variables required to evaluate H. Returns output : list A list of Floquet states for the time . floquet_wavefunction_t(f_modes_0, f_energies, f_coeff, t, H, T, args=None) Evaluate the wavefunction for a time t using the Floquet state decompositon, given the initial Floquet modes. Parameters f_modes_t : list of qutip.qobj (kets) A list of initial Floquet modes (for time = 0). f_energies : array The Floquet energies. f_coeff : array The coefficients for Floquet decomposition of the initial wavefunction. t : float The time for which to evaluate the Floquet states. H : qutip.qobj System Hamiltonian, time-dependent with period T. T : float The period of the time-dependence of the hamiltonian. args : dictionary Dictionary with variables required to evaluate H. Returns output : qutip.qobj The wavefunction for the time . floquet_state_decomposition(f_states, f_energies, psi) Decompose the wavefunction psi (typically an initial state) in terms of the Floquet states, = (0). Parameters f_states : list of qutip.qobj (kets) A list of Floquet modes. f_energies : array The Floquet energies. psi : qutip.qobj The wavefunction to decompose in the Floquet state basis. Returns output : array The coefficients in the Floquet state decomposition. fsesolve(H, psi0, tlist, e_ops=[], T=None, args={}, Tsteps=100) Solve the Schrodinger equation using the Floquet formalism. Parameters H : qutip.qobj.Qobj System Hamiltonian, time-dependent with period T. psi0 : qutip.qobj Initial state vector (ket). tlist : list / array list of times for . e_ops : list of qutip.qobj / callback function list of operators for which to evaluate expectation values. 
If this list is empty, the state vectors for each time in tlist will be returned instead of expectation values. 176 T : float The period of the time-dependence of the hamiltonian. args : dictionary Dictionary with variables required to evaluate H. Tsteps : integer The number of time steps in one driving period for which to precalculate the Floquet modes. Tsteps should be an even number. Returns output : qutip.solver.Result An instance of the class qutip.solver.Result, which contains either an array of expectation values or an array of state vectors, for the times specified by tlist. Stochastic Schrdinger Equation and Master Equation This module contains functions for solving stochastic schrodinger and master equations. The API should not be considered stable, and is subject to change when we work more on optimizing this module for performance and features. smesolve(H, rho0, times, c_ops, sc_ops, e_ops, **kwargs) Solve stochastic master equation. Dispatch to specific solvers depending on the value of the solver keyword argument. Parameters H : qutip.Qobj System Hamiltonian. rho0 : qutip.Qobj Initial density matrix or state vector (ket). times : list / array List of times for . Must be uniformly spaced. c_ops : list of qutip.Qobj Deterministic collapse operator which will contribute with a standard Lindblad type of dissipation. sc_ops : list of qutip.Qobj List of stochastic collapse operators. Each stochastic collapse operator will give a deterministic and stochastic contribution to the eqaution of motion according to how the d1 and d2 functions are defined. e_ops : list of qutip.Qobj / callback function single single operator or list of operators for which to evaluate expectation values. kwargs : dictionary Optional keyword arguments. See qutip.stochastic.StochasticSolverOptions. Returns output: qutip.solver.SolverResult An instance of the class qutip.solver.SolverResult. ssesolve(H, psi0, times, sc_ops, e_ops, **kwargs) Solve the stochastic Schrdinger equation. Dispatch to specific solvers depending on the value of the solver keyword argument. Parameters H : qutip.Qobj System Hamiltonian. psi0 : qutip.Qobj Initial state vector (ket). times : list / array List of times for . Must be uniformly spaced. sc_ops : list of qutip.Qobj List of stochastic collapse operators. Each stochastic collapse operator will give a deterministic and stochastic contribution to the equation of motion according to how the d1 and d2 functions are defined. 177 ## e_ops : list of qutip.Qobj Single operator or list of operators for which to evaluate expectation values. kwargs : dictionary Optional keyword arguments. See qutip.stochastic.StochasticSolverOptions. Returns output: qutip.solver.SolverResult An instance of the class qutip.solver.SolverResult. smepdpsolve(H, rho0, times, c_ops, e_ops, **kwargs) A stochastic (piecewse deterministic process) PDP solver for density matrix evolution. Parameters H : qutip.Qobj System Hamiltonian. rho0 : qutip.Qobj Initial density matrix. times : list / array List of times for . Must be uniformly spaced. c_ops : list of qutip.Qobj Deterministic collapse operator which will contribute with a standard Lindblad type of dissipation. sc_ops : list of qutip.Qobj List of stochastic collapse operators. Each stochastic collapse operator will give a deterministic and stochastic contribution to the eqaution of motion according to how the d1 and d2 functions are defined. 
e_ops : list of qutip.Qobj / callback function single single operator or list of operators for which to evaluate expectation values. kwargs : dictionary Optional keyword arguments. See qutip.stochastic.StochasticSolverOptions. Returns output: qutip.solver.SolverResult An instance of the class qutip.solver.SolverResult. ssepdpsolve(H, psi0, times, c_ops, e_ops, **kwargs) A stochastic (piecewse deterministic process) PDP solver for wavefunction evolution. For most purposes, use qutip.mcsolve instead for quantum trajectory simulations. Parameters H : qutip.Qobj System Hamiltonian. psi0 : qutip.Qobj Initial state vector (ket). times : list / array List of times for . Must be uniformly spaced. c_ops : list of qutip.Qobj Deterministic collapse operator which will contribute with a standard Lindblad type of dissipation. e_ops : list of qutip.Qobj / callback function single single operator or list of operators for which to evaluate expectation values. kwargs : dictionary Optional keyword arguments. See qutip.stochastic.StochasticSolverOptions. Returns output: qutip.solver.SolverResult An instance of the class qutip.solver.SolverResult. 178 Correlation Functions correlation(H, state0, tlist, taulist, c_ops, a_op, b_op, solver=me, reverse=False, args=None, options=<qutip.solver.Options instance at 0x105963a70>) Calculate the two-operator two-time correlation function: ( + )() along two time axes using the quantum regression theorem and the evolution solver indicated by the solver parameter. Parameters H : qutip.qobj.Qobj system Hamiltonian. state0 [qutip.qobj.Qobj] Initial state density matrix (0 ) or state vector (0 ). If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. tlist [list / array] list of times for . tlist must be positive and contain the element 0. When taking steady-steady correlations only one tlist value is necessary, i.e. :math:t ightarrow infty; here tlist is automatically set, ignoring user input. taulist [list / array] list of times for . taulist must be positive and contain the element 0. c_ops [list of qutip.qobj.Qobj] list of collapse operators. a_op [qutip.qobj.Qobj] operator A. b_op [qutip.qobj.Qobj] operator B. reverse [bool] If True, calculate ()( + ) instead of ( + )(). solver [str] choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options [qutip.solver.Options] solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns corr_mat: array An 2-dimensional array (matrix) of correlation values for the times specified by tlist (first index) and taulist (second index). If tlist is None, then a 1-dimensional array of correlation values is returned instead. correlation_ss(H, taulist, c_ops, a_op, b_op, solver=me, reverse=False, args=None, options=<qutip.solver.Options instance at 0x105963a28>) Calculate the two-operator two-time correlation function: lim ( + )() along one time axis (given steady-state initial conditions) using the quantum regression theorem and the evolution solver indicated by the solver parameter. Parameters H : qutip.qobj.Qobj system Hamiltonian. taulist : list / array list of times for . taulist must be positive and contain the element 0. c_ops : list of qutip.qobj.Qobj list of collapse operators. a_op : qutip.qobj.Qobj operator A. 
b_op : qutip.qobj.Qobj operator B. reverse : bool If True, calculate lim ()( + ) instead of lim ( + )(). 179 solver : str choice of solver (me for master-equation and es for exponential series) options : qutip.solver.Options solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns corr_vec: array An array of correlation values for the times specified by tlist. References ## See, Gardiner, Quantum Noise, Section 5.2. correlation_2op_1t(H, state0, taulist, c_ops, a_op, b_op, solver=me, reverse=False, args=None, options=<qutip.solver.Options instance at 0x1059637e8>) Calculate the two-operator two-time correlation function: :math: left<A(t+tau)B(t)right> along one time axis using the quantum regression theorem and the evolution solver indicated by the solver parameter. Parameters H : qutip.qobj.Qobj system Hamiltonian. state0 : qutip.qobj.Qobj Initial state density matrix (0 ) or state vector (0 ). If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. taulist : list / array list of times for . taulist must be positive and contain the element 0. c_ops : list of qutip.qobj.Qobj list of collapse operators. a_op : qutip.qobj.Qobj operator A. b_op : qutip.qobj.Qobj operator B. reverse : bool If True, calculate ()( + ) instead of ( + )(). solver : str choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options : qutip.solver.Options solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns corr_vec: array An array of correlation values for the times specified by tlist. References ## See, Gardiner, Quantum Noise, Section 5.2. correlation_2op_2t(H, state0, tlist, taulist, c_ops, a_op, b_op, solver=me, reverse=False, args=None, options=<qutip.solver.Options instance at 0x1059638c0>) Calculate the two-operator two-time correlation function: ( + )() along two time axes using the quantum regression theorem and the evolution solver indicated by the solver parameter. Parameters H : qutip.qobj.Qobj system Hamiltonian. 180 ## state0 [qutip.qobj.Qobj] Initial state density matrix 0 or state vector 0 . If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. tlist [list / array] list of times for . tlist must be positive and contain the element 0. When taking steady-steady correlations only one tlist value is necessary, i.e. :math:t ightarrow infty; here tlist is automatically set, ignoring user input. taulist [list / array] list of times for . taulist must be positive and contain the element 0. c_ops [list of qutip.qobj.Qobj] list of collapse operators. a_op [qutip.qobj.Qobj] operator A. b_op [qutip.qobj.Qobj] operator B. reverse [bool] If True, calculate ()( + ) instead of ( + )(). solver [str] choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options [qutip.solver.Options] solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. 
mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns corr_mat: array An 2-dimensional array (matrix) of correlation values for the times specified by tlist (first index) and taulist (second index). If tlist is None, then a 1-dimensional array of correlation values is returned instead. correlation_3op_1t(H, state0, taulist, c_ops, a_op, b_op, c_op, solver=me, args=None, options=<qutip.solver.Options instance at 0x105963908>) Calculate the three-operator two-time correlation function: ()( + )() along one time axis using the quantum regression theorem and the evolution solver indicated by the solver parameter. Note: it is not possibly to calculate a physically meaningful correlation of this form where :math: tau<0. Parameters H : qutip.qobj.Qobj system Hamiltonian. rho0 : qutip.qobj.Qobj Initial state density matrix (0 ) or state vector (0 ). If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. taulist : list / array list of times for . taulist must be positive and contain the element 0. c_ops : list of qutip.qobj.Qobj list of collapse operators. a_op : qutip.qobj.Qobj operator A. b_op : qutip.qobj.Qobj operator B. c_op : qutip.qobj.Qobj operator C. solver : str choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options : qutip.solver.Options solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns corr_vec: array An array of correlation values for the times specified by taulist 181 References ## See, Gardiner, Quantum Noise, Section 5.2. correlation_3op_2t(H, state0, tlist, taulist, c_ops, a_op, b_op, c_op, solver=me, args=None, options=<qutip.solver.Options instance at 0x105963950>) Calculate the three-operator two-time correlation function: ()( + )() along two time axes using the quantum regression theorem and the evolution solver indicated by the solver parameter. Note: it is not possibly to calculate a physically meaningful correlation of this form where :math: tau<0. Parameters H : qutip.qobj.Qobj system Hamiltonian, or a callback function for time-dependent Hamiltonians. rho0 [qutip.qobj.Qobj] Initial state density matrix 0 or state vector 0 . If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. tlist [list / array] list of times for . tlist must be positive and contain the element 0. When taking steady-steady correlations only one tlist value is necessary, i.e. :math:t ightarrow infty; here tlist is automatically set, ignoring user input. taulist [list / array] list of times for . taulist must be positive and contain the element 0. c_ops [list of qutip.qobj.Qobj] list of collapse operators. (does not accept time dependence) a_op [qutip.qobj.Qobj] operator A. b_op [qutip.qobj.Qobj] operator B. c_op [qutip.qobj.Qobj] operator C. solver [str] choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options [qutip.solver.Options] solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. 
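A minimal correlation_2op_1t sketch for a coherently driven, lossy cavity, using steady-state initial conditions by passing state0=None (operators and rates are illustrative):

import numpy as np
from qutip import destroy, correlation_2op_1t

N = 15
a = destroy(N)
H = a.dag() * a + 0.5 * (a + a.dag())   # coherently driven cavity
c_ops = [np.sqrt(0.25) * a]             # photon loss
taulist = np.linspace(0.0, 10.0, 200)

# <a.dag(t+tau) a(t)> with steady-state initial conditions (state0=None).
corr = correlation_2op_1t(H, None, taulist, c_ops, a.dag(), a)
print(corr[:5])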
Returns corr_mat: array An 2-dimensional array (matrix) of correlation values for the times specified by tlist (first index) and taulist (second index). If tlist is None, then a 1-dimensional array of correlation values is returned instead. correlation_4op_1t(H, state0, taulist, c_ops, a_op, b_op, c_op, d_op, solver=me, args=None, options=<qutip.solver.Options instance at 0x105963ab8>) Calculate the four-operator two-time correlation function: ()( + )( + )() along one time axis using the quantum regression theorem and the evolution solver indicated by the solver parameter. Note: it is not possibly to calculate a physically meaningful correlation of this form where < 0. Parameters H : qutip.qobj.Qobj system Hamiltonian. rho0 : qutip.qobj.Qobj Initial state density matrix (0 ) or state vector (0 ). If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. taulist : list / array list of times for . taulist must be positive and contain the element 0. c_ops : list of qutip.qobj.Qobj list of collapse operators. 182 a_op : qutip.qobj.Qobj operator A. b_op : qutip.qobj.Qobj operator B. c_op : qutip.qobj.Qobj operator C. d_op : qutip.qobj.Qobj operator D. solver : str choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options : qutip.solver.Options solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns corr_vec: array An array of correlation values for the times specified by taulist References ## See, Gardiner, Quantum Noise, Section 5.2. correlation_4op_2t(H, state0, tlist, taulist, c_ops, a_op, b_op, c_op, d_op, solver=me, args=None, options=<qutip.solver.Options instance at 0x105963b00>) Calculate the four-operator two-time correlation function: ()( + )( + )() along two time axes using the quantum regression theorem and the evolution solver indicated by the solver parameter. Note: it is not possibly to calculate a physically meaningful correlation of this form where < 0. Parameters H : qutip.qobj.Qobj system Hamiltonian, or a callback function for time-dependent Hamiltonians. rho0 [qutip.qobj.Qobj] Initial state density matrix 0 or state vector 0 . If state0 is None, then the steady state will be used as the initial state. The steady-state is only implemented for the me and es solvers. tlist [list / array] list of times for . tlist must be positive and contain the element 0. When taking steady-steady correlations only one tlist value is necessary, i.e. :math:t ightarrow infty; here tlist is automatically set, ignoring user input. taulist [list / array] list of times for . taulist must be positive and contain the element 0. c_ops [list of qutip.qobj.Qobj] list of collapse operators. (does not accept time dependence) a_op [qutip.qobj.Qobj] operator A. b_op [qutip.qobj.Qobj] operator B. c_op [qutip.qobj.Qobj] operator C. d_op [qutip.qobj.Qobj] operator D. solver [str] choice of solver (me for master-equation, mc for Monte Carlo, and es for exponential series) options [qutip.solver.Options] solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. 
Returns corr_mat: array 183 ## An 2-dimensional array (matrix) of correlation values for the times specified by tlist (first index) and taulist (second index). If tlist is None, then a 1-dimensional array of correlation values is returned instead. spectrum(H, wlist, c_ops, a_op, b_op, solver=es, use_pinv=False) Calculate the spectrum of the correlation function lim ( + )(), i.e., the Fourier transform of the correlation function: () = lim ( + )() . using the solver indicated by the solver parameter. Note: this spectrum is only defined for stationary statistics (uses steady state rho0) Parameters H : qutip.qobj system Hamiltonian. wlist : list / array list of frequencies for . c_ops : list of qutip.qobj list of collapse operators. a_op : qutip.qobj operator A. b_op : qutip.qobj operator B. solver : str choice of solver (es for exponential series and pi for psuedo-inverse) use_pinv : bool For use with the pi solver: if True use numpys pinv method, otherwise use a generic solver Returns spectrum: array An array with spectrum () for the frequencies specified in wlist. spectrum_ss(H, wlist, c_ops, a_op, b_op) Calculate the spectrum of the correlation function lim ( + )(), i.e., the Fourier transform of the correlation function: () = lim ( + )() . using an eseries based solver Note: this spectrum is only defined for stationary statistics (uses steady state rho0). Parameters H : qutip.qobj system Hamiltonian. wlist : list / array list of frequencies for . c_ops : list of qutip.qobj list of collapse operators. a_op : qutip.qobj operator A. b_op : qutip.qobj operator B. use_pinv : bool If True use numpys pinv method, otherwise use a generic solver Returns spectrum: array An array with spectrum () for the frequencies specified in wlist. 184 ## spectrum_pi(H, wlist, c_ops, a_op, b_op, use_pinv=False) Calculate the spectrum of the correlation function lim ( + )(), i.e., the Fourier transform of the correlation function: () = lim ( + )() . using a psuedo-inverse method. Note: this spectrum is only defined for stationary statistics (uses steady state rho0) Parameters H : qutip.qobj system Hamiltonian. wlist : list / array list of frequencies for . c_ops : list of qutip.qobj list of collapse operators. a_op : qutip.qobj operator A. b_op : qutip.qobj operator B. use_pinv : bool If True use numpys pinv method, otherwise use a generic solver Returns spectrum: array An array with spectrum () for the frequencies specified in wlist. spectrum_correlation_fft(taulist, y) Calculate the power spectrum corresponding to a two-time correlation function using FFT. Parameters tlist : list / array list/array of times which the correlation function is given. y : list / array list/array of correlations corresponding to time delays . Returns w, S : tuple Returns an array of angular frequencies w and the corresponding one-sided power spectrum S(w). coherence_function_g1(H, taulist, c_ops, a_op, solver=me, args=None, tions=<qutip.solver.Options instance at 0x105963998>) Calculate the normalized first-order quantum coherence function: op- ( + )() ()() (1) ( ) = lim using the quantum regression theorem and the evolution solver indicated by the solver parameter. Note: g1 is only defined for stationary statistics (uses steady state). Parameters H : qutip.qobj.Qobj system Hamiltonian. taulist : list / array list of times for . taulist must be positive and contain the element 0. c_ops : list of qutip.qobj.Qobj list of collapse operators. a_op : qutip.qobj.Qobj The annihilation operator of the mode. 
solver : str choice of solver (me for master-equation and es for exponential series) options : qutip.solver.Options 185 solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns g1: array The normalized first-order coherence function. coherence_function_g2(H, taulist, c_ops, a_op, solver=me, args=None, tions=<qutip.solver.Options instance at 0x1059639e0>) Calculate the normalized second-order quantum coherence function: op- () ( + )( + )() ()()2 (2) ( ) = lim using the quantum regression theorem and the evolution solver indicated by the solver parameter. Note: g2 is only defined for stationary statistics (uses steady state rho0). Parameters H : qutip.qobj.Qobj system Hamiltonian. taulist : list / array list of times for . taulist must be positive and contain the element 0. c_ops : list of qutip.qobj.Qobj list of collapse operators. a_op : qutip.qobj.Qobj The annihilation operator of the mode. solver : str choice of solver (me for master-equation and es for exponential series) options : qutip.solver.Options solver options class. ntraj is taken as a two-element list because the mc correlator calls mcsolve() recursively; by default, ntraj=[20, 100]. mc_corr_eps prevents divide-by-zero errors in the mc correlator; by default, mc_corr_eps=1e-10. Returns g2: array The normalized second-order coherence function. Module contains functions for solving for the steady state density matrix of open quantum systems defined by a Liouvillian or Hamiltonian and a list of collapse operators. Calculates the steady state for quantum evolution subject to the supplied Hamiltonian or Liouvillian operator and (if given a Hamiltonian) a list of collapse operators. If the user passes a Hamiltonian then it, along with the list of collapse operators, will be converted into a Parameters A : qobj A Hamiltonian or Liouvillian operator. c_op_list : list A list of collapse operators. method : str {direct, eigen, iterative-gmres, iterative-lgmres, iterative-bicgstab, svd, power} Method for solving the underlying linear equation. Direct LU solver direct (default), sparse eigenvalue problem eigen, iterative GMRES method iterativegmres, iterative LGMRES method iterative-lgmres, iterative BICGSTAB method iterative-bicgstab, SVD svd (dense), or inverse-power method power. return_info : bool, optional, default = False Return a dictionary of solver-specific infomation about the solution and how it was obtained. 186 ## sparse : bool, optional, default = True Solve for the steady state using sparse algorithms. If set to False, the underlying Liouvillian operator will be converted into a dense matrix. Use only for smaller systems. use_rcm : bool, optional, default = False Use reverse Cuthill-Mckee reordering to minimize fill-in in the LU factorization of the Liouvillian. use_wbm : bool, optional, default = False Use Weighted Bipartite Matching reordering to make the Liouvillian diagonally dominant. This is useful for iterative preconditioners only, and is set to True by default when finding a preconditioner. weight : float, optional Sets the size of the elements used for adding the unity trace condition to the linear solvers. This is set to the average abs value of the Liouvillian elements if not specified by the user. use_umfpack : bool {False, True} Use umfpack solver instead of SuperLU. 
For SciPy 0.14+, this option requires installing scikits.umfpack. x0 : ndarray, optional ITERATIVE ONLY. Initial guess for solution vector. maxiter : int, optional, default=1000 ITERATIVE ONLY. Maximum number of iterations to perform. tol : float, optional, default=1e-9 ITERATIVE ONLY. Tolerance used for terminating solver. permc_spec : str, optional, default=COLAMD ITERATIVE ONLY. Column ordering used internally by superLU for the direct LU decomposition method. Options include COLAMD and NATURAL. If using RCM then this is set to NATURAL automatically unless explicitly specified. use_precond : bool optional, default = False ITERATIVE ONLY. Use an incomplete sparse LU decomposition as a preconditioner for the iterative GMRES and BICG solvers. Speeds up convergence time by orders of magnitude in many cases. M : {sparse matrix, dense matrix, LinearOperator}, optional ITERATIVE ONLY. Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning can dramatically improve the rate of convergence for iterative methods. If no preconditioner is given and use_precond = True, then one is generated automatically. fill_factor : float, optional, default = 100 ITERATIVE ONLY. Specifies the fill ratio upper bound (>=1) of the iLU preconditioner. Lower values save memory at the cost of longer execution times and a possible singular factorization. drop_tol : float, optional, default = 1e-4 ITERATIVE ONLY. Sets the threshold for the magnitude of preconditioner elements that should be dropped. Can be reduced for a courser factorization at the cost of an increased number of iterations, and a possible singular factorization. diag_pivot_thresh : float, optional, default = None ITERATIVE ONLY. Sets the threshold between [0,1] for which diagonal elements are considered acceptable pivot points when using a preconditioner. A value of zero forces the pivot to be the diagonal element. ILU_MILU : str, optional, default = smilu_2 ITERATIVE ONLY. Selects the incomplete LU decomposition method algoithm used in creating the preconditoner. Should only be used by advanced users. Returns dm : qobj 187 ## info : dict, optional Dictionary containing solver-specific information about the solution. Notes The SVD method works only for dense operators (i.e. small systems). build_preconditioner(A, c_op_list=[], **kwargs) Constructs a iLU preconditioner necessary for solving for the steady state density matrix using the iterative linear solvers in the steadystate function. Parameters A : qobj A Hamiltonian or Liouvillian operator. c_op_list : list A list of collapse operators. return_info : bool, optional, default = False Return a dictionary of solver-specific infomation about the solution and how it was obtained. use_rcm : bool, optional, default = False Use reverse Cuthill-Mckee reordering to minimize fill-in in the LU factorization of the Liouvillian. use_wbm : bool, optional, default = False Use Weighted Bipartite Matching reordering to make the Liouvillian diagonally dominant. This is useful for iterative preconditioners only, and is set to True by default when finding a preconditioner. weight : float, optional Sets the size of the elements used for adding the unity trace condition to the linear solvers. This is set to the average abs value of the Liouvillian elements if not specified by the user. permc_spec : str, optional, default=COLAMD Column ordering used internally by superLU for the direct LU decomposition method. Options include COLAMD and NATURAL. 
If using RCM then this is set to NATURAL automatically unless explicitly specified. fill_factor : float, optional, default = 100 Specifies the fill ratio upper bound (>=1) of the iLU preconditioner. Lower values save memory at the cost of longer execution times and a possible singular factorization. drop_tol : float, optional, default = 1e-4 Sets the threshold for the magnitude of preconditioner elements that should be dropped. Can be reduced for a courser factorization at the cost of an increased number of iterations, and a possible singular factorization. diag_pivot_thresh : float, optional, default = None Sets the threshold between [0,1] for which diagonal elements are considered acceptable pivot points when using a preconditioner. A value of zero forces the pivot to be the diagonal element. ILU_MILU : str, optional, default = smilu_2 Selects the incomplete LU decomposition method algoithm used in creating the preconditoner. Should only be used by advanced users. Returns lu : object Returns a SuperLU object representing iLU preconditioner. info : dict, optional Dictionary containing solver-specific information. 188 Propagators propagator(H, t, c_op_list, args=None, options=None, sparse=False, progress_bar=None) Calculate the propagator U(t) for the density matrix or wave function such that () = ()(0) or v () = ()v (0) where v is the vector representation of the density matrix. Parameters H : qobj or list Hamiltonian as a Qobj instance of a nested list of Qobjs and coefficients in the liststring or list-function format for time-dependent Hamiltonians (see description in qutip.mesolve). t : float or array-like Time or list of times for which to evaluate the propagator. c_op_list : list List of qobj collapse operators. args : list/array/dictionary Parameters to callback functions for time-dependent Hamiltonians and collapse operators. options : qutip.Options with options for the ODE solver. progress_bar: BaseProgressBar Optional instance of BaseProgressBar, or a subclass thereof, for showing the progress of the simulation. By default no progress bar is used, and if set to True a TextProgressBar will be used. Returns a : qobj Instance representing the propagator (). Find the steady state for successive applications of the propagator . Parameters U : qobj Operator representing the propagator. Returns a : qobj Instance representing the steady-state density matrix. Time-dependent problems rhs_generate(H, c_ops, args={}, options=<qutip.solver.Options instance at 0x10569e200>, name=None, cleanup=True) Generates the Cython functions needed for solving the dynamics of a given system using the mesolve function inside a parfor loop. Parameters H : qobj System Hamiltonian. c_ops : list list of collapse operators. args : dict Arguments for time-dependent Hamiltonian and collapse operator terms. options : Options Instance of ODE solver options. name: str Name of generated RHS cleanup: bool Whether the generated cython file should be automatically removed or not. 189 Notes Using this function with any solver other than the mesolve function will result in an error. rhs_clear() Resets the string-format time-dependent Hamiltonian parameters. Returns Nothing, just clears data from internal config module. Visualization Pseudoprobability Functions qfunc(state, xvec, yvec, g=1.4142135623730951) Q-function of a given state vector or density matrix at points xvec + i * yvec. Parameters state : qobj A state vector or density matrix. xvec : array_like x-coordinates at which to calculate the Wigner function. 
yvec : array_like y-coordinates at which to calculate the Wigner function. g : float Scaling factor for a = 0.5 * g * (x + iy), default g = sqrt(2). Returns Q : array Values representing the Q-function calculated over the specified range [xvec,yvec]. wigner(psi, xvec, yvec, method=iterative, g=1.4142135623730951, parfor=False) Wigner function for a state vector or density matrix at points xvec + i * yvec. Parameters state : qobj A state vector or density matrix. xvec : array_like x-coordinates at which to calculate the Wigner function. yvec : array_like y-coordinates at which to calculate the Wigner function. Does not apply to the fft method. g : float Scaling factor for a = 0.5 * g * (x + iy), default g = sqrt(2). method : string {iterative, laguerre, fft} Select method iterative, laguerre, or fft, where iterative uses an iterative method to evaluate the Wigner functions for density matrices | >< |, while laguerre uses the Laguerre polynomials in scipy for the same task. The fft method evaluates the Fourier transform of the density matrix. The iterative method is default, and in general recommended, but the laguerre method is more efficient for very sparse density matrices (e.g., superpositions of Fock states in a large Hilbert space). The fft method is the preferred method for dealing with density matrices that have a large number of excitations (>~50). parfor : bool {False, True} Flag for calculating the Laguerre polynomial based Wigner function method=laguerre in parallel using the parfor function. Returns W : array Values representing the Wigner function calculated over the specified range [xvec,yvec]. yvex : array FFT ONLY. Returns the y-coordinate values calculated via the Fourier transform. 190 Notes The fft method accepts only an xvec input for the x-coordinate. The y-coordinates are calculated internally. References Ulf Leonhardt, Measuring the Quantum State of Light, (Cambridge University Press, 1997) Graphs and Visualization Functions for visualizing results of quantum dynamics simulations, visualizations of quantum states and processes. hinton(rho, xlabels=None, ylabels=None, title=None, ax=None, cmap=None, label_top=True) Draws a Hinton diagram for visualizing a density matrix or superoperator. Parameters rho : qobj Input density matrix or superoperator. xlabels : list of strings or False list of x labels ylabels : list of strings or False list of y labels title : string title of the plot (optional) ax : a matplotlib axes instance The axes context in which the plot will be drawn. cmap : a matplotlib colormap instance Color map to use when plotting. label_top : bool If True, x-axis labels will be placed on top, otherwise they will appear below the plot. Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. Raises ValueError Input argument is not a quantum object. matrix_histogram(M, xlabels=None, ylabels=None, title=None, limits=None, colorbar=True, fig=None, ax=None) Draw a histogram for the matrix M, with the given x and y labels and title. Parameters M : Matrix of Qobj The matrix to visualize xlabels : list of strings list of x labels ylabels : list of strings list of y labels title : string title of the plot (optional) limits : list/array with two float numbers The z-axis limits [min, max] (optional) ax : a matplotlib axes instance The axes context in which the plot will be drawn. Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. 
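As a short usage sketch of the qfunc and wigner functions documented above (the cat state, grid, and Hilbert-space truncation are arbitrary illustrative choices):

import numpy as np
from qutip import coherent, ket2dm, qfunc, wigner

psi = (coherent(20, 2.0) + coherent(20, -2.0)).unit()   # even cat state in a truncated Fock space
rho = ket2dm(psi)
xvec = np.linspace(-5, 5, 200)

W = wigner(rho, xvec, xvec, method='iterative')   # Wigner function on the xvec-by-xvec grid
Q = qfunc(rho, xvec, xvec)                        # Q-function on the same grid

The resulting 2D arrays can be passed directly to matplotlib's contourf or pcolor for plotting.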
Raises ValueError 191 ## Input argument is not valid. matrix_histogram_complex(M, xlabels=None, ylabels=None, title=None, limits=None, phase_limits=None, colorbar=True, fig=None, ax=None, threshold=None) Draw a histogram for the amplitudes of matrix M, using the argument of each element for coloring the bars, with the given x and y labels and title. Parameters M : Matrix of Qobj The matrix to visualize xlabels : list of strings list of x labels ylabels : list of strings list of y labels title : string title of the plot (optional) limits : list/array with two float numbers The z-axis limits [min, max] (optional) phase_limits : list/array with two float numbers The phase-axis (colorbar) limits [min, max] (optional) ax : a matplotlib axes instance The axes context in which the plot will be drawn. threshold: float (None) Threshold for when bars of smaller height should be transparent. If not set, all bars are colored according to the color map. Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. Raises ValueError Input argument is not valid. plot_energy_levels(H_list, N=0, labels=None, show_ylabels=False, figsize=(8, 12), fig=None, ax=None) Plot the energy level diagrams for a list of Hamiltonians. Include up to N energy levels. For each element in H_list, the energy levels diagram for the cummulative Hamiltonian sum(H_list[0:n]) is plotted, where n is the index of an element in H_list. Parameters H_list : List of Qobj A list of Hamiltonians. labels [List of string] A list of labels for each Hamiltonian show_ylabels [Bool (default False)] Show y labels to the left of energy levels of the initial Hamiltonian. N [int] The number of energy levels to plot figsize [tuple (int,int)] The size of the figure (width, height). fig [a matplotlib Figure instance] The Figure canvas in which the plot will be drawn. ax [a matplotlib axes instance] The axes context in which the plot will be drawn. Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. Raises ValueError Input argument is not valid. plot_fock_distribution(rho, offset=0, fig=None, ax=None, figsize=(8, 6), title=None, unit_y_range=True) Plot the Fock distribution for a density matrix (or ket) that describes an oscillator mode. Parameters rho : qutip.qobj.Qobj 192 ## The density matrix (or ket) of the state to visualize. fig : a matplotlib Figure instance The Figure canvas in which the plot will be drawn. ax : a matplotlib axes instance The axes context in which the plot will be drawn. title : string An optional title for the figure. figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_wigner_fock_distribution(rho, fig=None, axes=None, figsize=(8, 4), cmap=None, alpha_max=7.5, colorbar=False, method=iterative, projection=2d) Plot the Fock distribution and the Wigner function for a density matrix (or ket) that describes an oscillator mode. Parameters rho : qutip.qobj.Qobj The density matrix (or ket) of the state to visualize. fig : a matplotlib Figure instance The Figure canvas in which the plot will be drawn. axes : a list of two matplotlib axes instances The axes context in which the plot will be drawn. figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). 
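A minimal sketch of plot_fock_distribution and plot_energy_levels described above; the thermal state and the two-term qubit Hamiltonian are arbitrary examples chosen for illustration:

import matplotlib.pyplot as plt
from qutip import thermal_dm, sigmax, sigmaz, plot_fock_distribution, plot_energy_levels

rho = thermal_dm(25, 2.0)                 # thermal state with mean photon number 2
fig1, ax1 = plot_fock_distribution(rho, title='Thermal state photon statistics')

H0 = 1.0 * sigmaz()                       # bare qubit splitting
H1 = 0.2 * sigmax()                       # additional coupling term
fig2, ax2 = plot_energy_levels([H0, H1])  # levels of H0 and of H0 + H1
plt.show()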
cmap : a matplotlib cmap instance The colormap. alpha_max : float The span of the x and y coordinates (both [-alpha_max, alpha_max]). colorbar : bool Whether (True) or not (False) a colorbar should be attached to the Wigner function graph. method : string {iterative, laguerre, fft} The method used for calculating the wigner function. See the documentation for qutip.wigner for details. projection: string {2d, 3d} Specify whether the Wigner function is to be plotted as a contour graph (2d) or surface plot (3d). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_wigner(rho, fig=None, ax=None, figsize=(8, 4), cmap=None, alpha_max=7.5, colorbar=False, method=iterative, projection=2d) Plot the the Wigner function for a density matrix (or ket) that describes an oscillator mode. Parameters rho : qutip.qobj.Qobj The density matrix (or ket) of the state to visualize. fig : a matplotlib Figure instance The Figure canvas in which the plot will be drawn. ax : a matplotlib axes instance The axes context in which the plot will be drawn. figsize : (width, height) 193 The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). cmap : a matplotlib cmap instance The colormap. alpha_max : float The span of the x and y coordinates (both [-alpha_max, alpha_max]). colorbar : bool Whether (True) or not (False) a colorbar should be attached to the Wigner function graph. method : string {iterative, laguerre, fft} The method used for calculating the wigner function. See the documentation for qutip.wigner for details. projection: string {2d, 3d} Specify whether the Wigner function is to be plotted as a contour graph (2d) or surface plot (3d). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. sphereplot(theta, phi, values, fig=None, ax=None, save=False) Plots a matrix of values on a sphere Parameters theta : float Angle with respect to z-axis phi : float Angle in x-y plane values : array Data set to be plotted fig : a matplotlib Figure instance The Figure canvas in which the plot will be drawn. ax : a matplotlib axes instance The axes context in which the plot will be drawn. save : bool {False , True} Whether to save the figure or not Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_schmidt(ket, splitting=None, labels_iteration=(3, 2), theme=light, fig=None, ax=None, figsize=(6, 6)) Plotting scheme related to Schmidt decomposition. Converts a state into a matrix (A_ij -> A_i^j), where rows are first particles and columns - last. Parameters ket : Qobj Pure state for plotting. splitting : int Plot for a number of first particles versus the rest. If not given, it is (number of particles + 1) // 2. theme : light (default) or dark Set coloring theme for mapping complex values into colors. See: complex_array_to_rgb. labels_iteration : int or pair of ints (default (3,2)) Number of particles to be shown as tick labels, for first (vertical) and last (horizontal) particles, respectively. fig : a matplotlib figure instance 194 ## The figure canvas on which the plot will be drawn. ax : a matplotlib axis instance The axis context in which the plot will be drawn. figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). 
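To illustrate plot_wigner and plot_wigner_fock_distribution from above (the superposition state and alpha_max are arbitrary choices):

import matplotlib.pyplot as plt
from qutip import basis, coherent, plot_wigner, plot_wigner_fock_distribution

psi = (coherent(15, 1.5) + basis(15, 0)).unit()   # superposition of a coherent state and vacuum
fig1, ax1 = plot_wigner(psi, alpha_max=4.0, projection='2d', colorbar=True)
fig2, axes2 = plot_wigner_fock_distribution(psi, alpha_max=4.0)
plt.show()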
Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_qubism(ket, theme=light, how=pairs, grid_iteration=1, legend_iteration=0, fig=None, ax=None, figsize=(6, 6)) Qubism plot for pure states of many qudits. Works best for spin chains, especially with even number of particles of the same dimension. Allows to see entanglement between first 2*k particles and the rest. More information: J. Rodriguez-Laguna, P. Migdal, M. Ibanez Berganza, M. Lewenstein, G. Sierra, Qubism: self-similar visualization of many-body wavefunctions, New J. Phys. 14 053028 (2012), arXiv:1112.3560, http://dx.doi.org/10.1088/1367-2630/14/5/053028 (open access) Parameters ket : Qobj Pure state for plotting. theme : light (default) or dark Set coloring theme for mapping complex values into colors. See: complex_array_to_rgb. how : pairs (default), pairs_skewed or before_after Type of Qubism plotting. Options: pairs - typical coordinates, pairs_skewed - for ferromagnetic/antriferromagnetic plots, before_after - related to Schmidt plot grid_iteration : int (default 1) Helper lines to be drawn on plot. Show tiles for 2*grid_iteration particles vs all others. legend_iteration : int (default 0) or grid_iteration or all Show labels for first 2*legend_iteration particles. Option grid_iteration sets the same number of particles as for grid_iteration. Option all makes label for all particles. Typically it should be 0, 1, 2 or perhaps 3. fig : a matplotlib figure instance The figure canvas on which the plot will be drawn. ax : a matplotlib axis instance The axis context in which the plot will be drawn. figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_expectation_values(results, ylabels=[], title=None, show_legend=False, fig=None, axes=None, figsize=(8, 4)) Visualize the results (expectation values) for an evolution solver. results is assumed to be an instance of Result, or a list of Result instances. Parameters results : (list of) qutip.solver.Result List of results objects returned by any of the QuTiP evolution solvers. ylabels : list of strings The y-axis labels. List should be of the same length as results. title : string 195 ## The title of the figure. show_legend : bool Whether or not to show the legend. fig : a matplotlib Figure instance The Figure canvas in which the plot will be drawn. axes : a matplotlib axes instance The axes context in which the plot will be drawn. figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_spin_distribution_2d(P, THETA, PHI, fig=None, ax=None, figsize=(8, 8)) Plot a spin distribution function (given as meshgrid data) with a 2D projection where the surface of the unit sphere is mapped on the unit disk. Parameters P : matrix Distribution values as a meshgrid matrix. THETA : matrix Meshgrid matrix for the theta coordinate. PHI : matrix Meshgrid matrix for the phi coordinate. fig : a matplotlib figure instance The figure canvas on which the plot will be drawn. ax : a matplotlib axis instance The axis context in which the plot will be drawn. 
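A short sketch of plot_expectation_values, feeding it the Result object returned by one of the evolution solvers; the Rabi-drive Hamiltonian and time grid are illustrative, and the solver call follows the standard mesolve signature:

import numpy as np
import matplotlib.pyplot as plt
from qutip import basis, sigmax, sigmaz, mesolve, plot_expectation_values

H = 2 * np.pi * 0.1 * sigmax()            # slow Rabi drive
psi0 = basis(2, 0)
tlist = np.linspace(0, 20, 200)
result = mesolve(H, psi0, tlist, [], [sigmaz(), sigmax()])   # no collapse operators, two e_ops

fig, axes = plot_expectation_values(result, show_legend=True)
plt.show()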
figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. plot_spin_distribution_3d(P, THETA, PHI, fig=None, ax=None, figsize=(8, 6)) Plots a matrix of values on a sphere Parameters P : matrix Distribution values as a meshgrid matrix. THETA : matrix Meshgrid matrix for the theta coordinate. PHI : matrix Meshgrid matrix for the phi coordinate. fig : a matplotlib figure instance The figure canvas on which the plot will be drawn. ax : a matplotlib axis instance The axis context in which the plot will be drawn. figsize : (width, height) The size of the matplotlib figure (in inches) if it is to be created (that is, if no fig and ax arguments are passed). Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. 196 ## orbital(theta, phi, *args) Calculates an angular wave function on a sphere. psi = orbital(theta,phi,ket1,ket2,...) calculates the angular wave function on a sphere at the mesh of points defined by theta and phi which is (, ) where are the coefficients specified by the list of kets. Each ket has 2l+1 ## components for some integer l. Parameters theta : list/array Polar angles phi : list/array Azimuthal angles args : list/array list of ket vectors. Returns array for angular wave function Quantum Process Tomography qpt(U, op_basis_list) Calculate the quantum process tomography chi matrix for a given (possibly nonunitary) transformation matrix U, which transforms a density matrix in vector form according to: vec(rho) = U * vec(rho0) or rho = vec2mat(U * mat2vec(rho0)) U can be calculated for an open quantum system using the QuTiP propagator function. Parameters U : Qobj Transformation operator. Can be calculated using QuTiP propagator function. op_basis_list : list A list of Qobjs representing the basis states. Returns chi : array QPT chi matrix qpt_plot(chi, lbls_list, title=None, fig=None, axes=None) Visualize the quantum process tomography chi matrix. Plot the real and imaginary parts separately. Parameters chi : array Input QPT chi matrix. lbls_list : list List of labels for QPT plot axes. title : string Plot title. fig : figure instance User defined figure instance used for generating QPT plot. axes : list of figure axis instance User defined figure axis instance (list of two axes) used for generating QPT plot. Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. qpt_plot_combined(chi, lbls_list, title=None, fig=None, ax=None, figsize=(8, 6), threshold=None) Visualize the quantum process tomography chi matrix. Plot bars with height and color corresponding to the absolute value and phase, respectively. Parameters chi : array Input QPT chi matrix. lbls_list : list List of labels for QPT plot axes. 197 title : string Plot title. fig : figure instance User defined figure instance used for generating QPT plot. ax : figure axis instance User defined figure axis instance used for generating QPT plot (alternative to the fig argument). threshold: float (None) Threshold for when bars of smaller height should be transparent. If not set, all bars are colored according to the color map. Returns fig, ax : tuple A tuple of the matplotlib figure and axes instances used to produce the figure. ## Quantum Information Processing Gates rx(phi, N=None, target=0) Single-qubit rotation for operator sigmax with angle phi. 
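To illustrate the qpt and qpt_plot_combined functions documented above, a sketch of process tomography for a single-qubit Hadamard written in terms of Pauli operators (the operator basis and labels are the usual single-qubit Pauli choice):

import numpy as np
from qutip import qeye, sigmax, sigmay, sigmaz, spre, spost, qpt, qpt_plot_combined

g = (sigmax() + sigmaz()) / np.sqrt(2)      # Hadamard gate
U = spre(g) * spost(g.dag())                # superoperator acting on the vectorized density matrix
op_basis = [[qeye(2), sigmax(), sigmay(), sigmaz()]]
op_labels = [['i', 'x', 'y', 'z']]

chi = qpt(U, op_basis)
fig, ax = qpt_plot_combined(chi, op_labels, title='QPT of a Hadamard gate')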
Returns result : qobj Quantum object for operator describing the rotation. ry(phi, N=None, target=0) Single-qubit rotation for operator sigmay with angle phi. Returns result : qobj Quantum object for operator describing the rotation. rz(phi, N=None, target=0) Single-qubit rotation for operator sigmaz with angle phi. Returns result : qobj Quantum object for operator describing the rotation. sqrtnot(N=None, target=0) Single-qubit square root NOT gate. Returns result : qobj Quantum object for operator describing the square root NOT gate. snot(N=None, target=0) Quantum object representing the SNOT (Hadamard) gate. Returns snot_gate : qobj Quantum object representation of SNOT gate. Examples >>> snot() Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = True Qobj data = [[ 0.70710678+0.j 0.70710678+0.j] [ 0.70710678+0.j -0.70710678+0.j]] ## phasegate(theta, N=None, target=0) Returns quantum object representing the phase shift gate. Parameters theta : float Phase rotation angle. Returns phase_gate : qobj Quantum object representation of phase shift gate. 198 Examples >>> phasegate(pi/4) Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = False Qobj data = [[ 1.00000000+0.j 0.00000000+0.j ] [ 0.00000000+0.j 0.70710678+0.70710678j]] ## cphase(theta, N=2, control=0, target=1) Returns quantum object representing the phase shift gate. Parameters theta : float Phase rotation angle. N : integer The number of qubits in the target space. control : integer The index of the control qubit. target : integer The index of the target qubit. Returns U : qobj Quantum object representation of controlled phase gate. cnot(N=None, control=0, target=1) Quantum object representing the CNOT gate. Returns cnot_gate : qobj Quantum object representation of CNOT gate Examples >>> cnot() Quantum object: dims = Qobj data = [[ 1.+0.j 0.+0.j [ 0.+0.j 1.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = True 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j] 0.+0.j] 1.+0.j] 0.+0.j]] ## csign(N=None, control=0, target=1) Quantum object representing the CSIGN gate. Returns csign_gate : qobj Quantum object representation of CSIGN gate Examples >>> csign() Quantum object: dims = Qobj data = [[ 1.+0.j 0.+0.j [ 0.+0.j 1.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = True 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j] 0.+0.j] 0.+0.j] -1.+0.j]] ## berkeley(N=None, targets=[0, 1]) Quantum object representing the Berkeley gate. Returns berkeley_gate : qobj Quantum object representation of Berkeley gate 199 Examples >>> berkeley() Quantum object: dims = Qobj data = [[ cos(pi/8).+0.j [ 0.+0.j [ 0.+0.j [ 0.+sin(pi/8).j [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = True 0.+0.j cos(3pi/8).+0.j 0.+sin(3pi/8).j 0.+0.j 0.+0.j 0.+sin(3pi/8).j cos(3pi/8).+0.j 0.+0.j 0.+sin(pi/8).j] 0.+0.j] 0.+0.j] cos(pi/8).+0.j]] ## swapalpha(alpha, N=None, targets=[0, 1]) Quantum object representing the SWAPalpha gate. Returns swapalpha_gate : qobj Quantum object representation of SWAPalpha gate Examples >>> swapalpha(alpha) Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = True Qobj data = [[ 1.+0.j 0.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 0.5*(1 + exp(j*pi*alpha) 0.5*(1 - exp(j*pi*alpha) 0.+0.j] [ 0.+0.j 0.5*(1 - exp(j*pi*alpha) 0.5*(1 + exp(j*pi*alpha) 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j 1.+0.j]] ## swap(N=None, targets=[0, 1]) Quantum object representing the SWAP gate. 
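A short sketch combining the gate functions above: a Hadamard (snot) on qubit 0 followed by a CNOT prepares a Bell state from |00>. The import path qutip.qip.gates is the QuTiP 3.x location of these functions and may differ in other versions:

from qutip import tensor, basis
from qutip.qip.gates import snot, cnot

psi00 = tensor(basis(2, 0), basis(2, 0))                  # |00>
U = cnot(N=2, control=0, target=1) * snot(N=2, target=0)
bell = U * psi00                                          # (|00> + |11>)/sqrt(2)
print(bell)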
Returns swap_gate : qobj Quantum object representation of SWAP gate Examples >>> swap() Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = True Qobj data = [[ 1.+0.j 0.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 1.+0.j 0.+0.j] [ 0.+0.j 1.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j 1.+0.j]] ## iswap(N=None, targets=[0, 1]) Quantum object representing the iSWAP gate. Returns iswap_gate : qobj Quantum object representation of iSWAP gate Examples >>> iswap() Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = False Qobj data = [[ 1.+0.j 0.+0.j 0.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+1.j 0.+0.j] [ 0.+0.j 0.+1.j 0.+0.j 0.+0.j] [ 0.+0.j 0.+0.j 0.+0.j 1.+0.j]] 200 ## sqrtswap(N=None, targets=[0, 1]) Quantum object representing the square root SWAP gate. Returns sqrtswap_gate : qobj Quantum object representation of square root SWAP gate sqrtiswap(N=None, targets=[0, 1]) Quantum object representing the square root iSWAP gate. Returns sqrtiswap_gate : qobj Quantum object representation of square root iSWAP gate Examples >>> sqrtiswap() Quantum object: dims = [[2, 2], [2, 2]], shape = [4, 4], type = oper, isHerm = False Qobj data = [[ 1.00000000+0.j 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j] [ 0.00000000+0.j 0.70710678+0.j 0.00000000-0.70710678j 0.00000000+0.j] [ 0.00000000+0.j 0.00000000-0.70710678j 0.70710678+0.j 0.00000000+0.j] [ 0.00000000+0.j 0.00000000+0.j 0.00000000+0.j 1.00000000+0.j]] ## fredkin(N=None, control=0, targets=[1, 2]) Quantum object representing the Fredkin gate. Returns fredkin_gate : qobj Quantum object representation of Fredkin gate. Examples >>> fredkin() Quantum object: dims = Qobj data = [[ 1.+0.j 0.+0.j [ 0.+0.j 1.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [[2, 2, 2], [2, 2, 2]], shape = [8, 8], type = oper, isHerm = True 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 1.+0.j]] ## toffoli(N=None, controls=[0, 1], target=2) Quantum object representing the Toffoli gate. Returns toff_gate : qobj Quantum object representation of Toffoli gate. Examples >>> toffoli() Quantum object: dims = Qobj data = [[ 1.+0.j 0.+0.j [ 0.+0.j 1.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [ 0.+0.j 0.+0.j [[2, 2, 2], [2, 2, 2]], shape = [8, 8], type = oper, isHerm = True 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 0.+0.j] 201 [ 0.+0.j [ 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 0.+0.j 1.+0.j 1.+0.j] 0.+0.j]] ## rotation(op, phi, N=None, target=0) Single-qubit rotation for operator op with angle phi. Returns result : qobj Quantum object for operator describing the rotation. controlled_gate(U, N=2, control=0, target=1, control_value=1) Create an N-qubit controlled gate from a single-qubit gate U with the given control and target qubits. Parameters U : Qobj Arbitrary single-qubit gate. N : integer The number of qubits in the target space. control : integer The index of the first control qubit. target : integer The index of the target qubit. 
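As a quick check of the three-qubit gates above, the Toffoli gate flips the target only when both controls are excited, and the Fredkin gate swaps its two targets when the control is excited (gate module path as in QuTiP 3.x):

from qutip import tensor, basis
from qutip.qip.gates import toffoli, fredkin

ket110 = tensor(basis(2, 1), basis(2, 1), basis(2, 0))
ket100 = tensor(basis(2, 1), basis(2, 0), basis(2, 0))

print(toffoli() * ket110)   # -> |111>: both controls set, target flipped
print(toffoli() * ket100)   # -> |100>: only one control set, state unchanged
print(fredkin() * ket110)   # -> |101>: control set, target qubits swapped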
control_value : integer (1) The state of the control qubit that activates the gate U. Returns result : qobj Quantum object representing the controlled-U gate. globalphase(theta, N=1) Returns quantum object representing the global phase shift gate. Parameters theta : float Phase rotation angle. Returns phase_gate : qobj Quantum object representation of global phase shift gate. Examples >>> phasegate(pi/4) Quantum object: dims = [[2], [2]], shape = [2, 2], type = oper, isHerm = False Qobj data = [[ 0.70710678+0.70710678j 0.00000000+0.j] [ 0.00000000+0.j 0.70710678+0.70710678j]] Quantum object representing the N-qubit Hadamard gate. Returns q : qobj Quantum object representation of the N-qubit Hadamard gate. gate_sequence_product(U_list, left_to_right=True) Calculate the overall unitary matrix for a given list of unitary operations Parameters U_list : list List of gates implementing the quantum circuit. left_to_right: Boolean Check if multiplication is to be done from left to right. Returns U_overall: qobj Overall unitary matrix of a given quantum circuit. 202 gate_expand_1toN(U, N, target) Create a Qobj representing a one-qubit gate that act on a system with N qubits. Parameters U : Qobj The one-qubit gate N : integer The number of qubits in the target space. target : integer The index of the target qubit. Returns gate : qobj Quantum object representation of N-qubit gate. gate_expand_2toN(U, N, control=None, target=None, targets=None) Create a Qobj representing a two-qubit gate that act on a system with N qubits. Parameters U : Qobj The two-qubit gate N : integer The number of qubits in the target space. control : integer The index of the control qubit. target : integer The index of the target qubit. targets : list List of target qubits. Returns gate : qobj Quantum object representation of N-qubit gate. gate_expand_3toN(U, N, controls=[0, 1], target=2) Create a Qobj representing a three-qubit gate that act on a system with N qubits. Parameters U : Qobj The three-qubit gate N : integer The number of qubits in the target space. controls : list The list of the control qubits. target : integer The index of the target qubit. Returns gate : qobj Quantum object representation of N-qubit gate. Qubits qubit_states(N=1, states=[0]) Function to define initial state of the qubits. Parameters N: Integer Number of qubits in the register. states: List Initial state of each qubit. Returns qstates: Qobj List of qubits. 203 Algorithms qft(N=1) Quantum Fourier Transform operator on N qubits. Parameters N : int Number of qubits. Returns QFT: qobj Quantum Fourier transform operator. qft_steps(N=1, swapping=True) Quantum Fourier Transform operator on N qubits returning the individual steps as unitary matrices operating from left to right. Parameters N: int Number of qubits. swap: boolean Flag indicating sequence of swap gates to be applied at the end or not. Returns U_step_list: list of qobj List of Hadamard and controlled rotation gates implementing QFT. qft_gate_sequence(N=1, swapping=True) Quantum Fourier Transform operator on N qubits returning the gate sequence. Parameters N: int Number of qubits. swap: boolean Flag indicating sequence of swap gates to be applied at the end or not. Returns qc: instance of QubitCircuit Gate sequence of Hadamard and controlled rotation gates implementing QFT. Optimal control This module contains functions that implement the GRAPE algorithm for calculating pulse sequences for quantum systems. 
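The Algorithms functions above can be cross-checked against one another: the single-shot QFT operator should match the product of the individual steps returned by qft_steps. This is a sketch assuming the QuTiP 3.x module paths (qutip.qip.algorithms.qft and qutip.qip.gates) and the default left-to-right ordering of gate_sequence_product:

from qutip.qip.algorithms.qft import qft, qft_steps
from qutip.qip.gates import gate_sequence_product

N = 3
U_direct = qft(N)                                                  # full N-qubit QFT operator
U_from_steps = gate_sequence_product(qft_steps(N, swapping=True))  # product of the individual gates
print((U_direct - U_from_steps).norm())                            # expected to be ~0 up to numerical precision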
plot_grape_control_fields(times, u, labels, uniform_axes=False) Plot a series of plots showing the GRAPE control fields given in the given control pulse matrix u. Parameters times : array Time coordinate array. u : array Control pulse matrix. labels : list List of labels for each control pulse sequence in the control pulse matrix. uniform_axes : bool Whether or not to plot all pulse sequences using the same y-axis scale. grape_unitary(U, H0, H_ops, R, times, eps=None, u_start=None, u_limits=None, interp_kind=linear, use_interp=False, alpha=None, beta=None, phase_sensitive=True, progress_bar=<qutip.ui.progressbar.BaseProgressBar object at 0x107b6fd90>) Calculate control pulses for the Hamiltonian operators in H_ops so that the unitary U is realized. Experimental: Work in progress. Parameters U : Qobj Target unitary evolution operator. H0 : Qobj Static Hamiltonian (that cannot be tuned by the control fields). 204 ## H_ops: list of Qobj A list of operators that can be tuned in the Hamiltonian via the control fields. R : int Number of GRAPE iterations. time : array / list Array of time coordinates for control pulse evalutation. u_start : array Optional array with initial control pulse values. Returns Instance of GRAPEResult, which contains the control pulses calculated with GRAPE, a time-dependent Hamiltonian that is defined by the control pulses, as well as the resulting propagator. grape_unitary_adaptive(U, H0, H_ops, R, times, eps=None, u_start=None, u_limits=None, interp_kind=linear, use_interp=False, alpha=None, beta=None, phase_sensitive=False, overlap_terminate=1.0, progress_bar=<qutip.ui.progressbar.BaseProgressBar object at 0x107be58d0>) Calculate control pulses for the Hamiltonian operators in H_ops so that the unitary U is realized. Experimental: Work in progress. Parameters U : Qobj Target unitary evolution operator. H0 : Qobj Static Hamiltonian (that cannot be tuned by the control fields). H_ops: list of Qobj A list of operators that can be tuned in the Hamiltonian via the control fields. R : int Number of GRAPE iterations. time : array / list Array of time coordinates for control pulse evalutation. u_start : array Optional array with initial control pulse values. Returns Instance of GRAPEResult, which contains the control pulses calculated with GRAPE, a time-dependent Hamiltonian that is defined by the control pulses, as well as the resulting propagator. Wrapper functions that will manage the creation of the objects, build the configuration, and execute the algorithm required to optimise a set of ctrl pulses for a given (quantum) system. The fidelity error is some measure of distance of the system evolution from the given target evolution in the time allowed for the evolution. The functions minimise this fidelity error wrt the piecewise control amplitudes in the timeslots optimize_pulse(drift, ctrls, initial, target, num_tslots=None, evo_time=None, tau=None, max_iter=500, max_wall_time=180, optim_alg=LBFGSB, max_metric_corr=10, accuracy_factor=10000000.0, dyn_type=GEN_MAT, prop_type=DEF, fid_type=DEF, phase_option=None, fid_err_scale_factor=None, amp_update_mode=ALL, init_pulse_type=RND, pulse_scaling=1.0, pulse_offset=0.0, log_level=0, out_file_ext=None, gen_stats=False) Optimise a control pulse to minimise the fidelity error. The dynamics of the system in any given timeslot are governed by the combined dynamics generator, i.e. 
the sum of the drift+ctrl_amp[j]*ctrls[j] The control pulse is an [n_ts, len(ctrls)] array of piecewise amplitudes Starting from an intital (typically random) pulse, a multivariable optimisation algorithm attempts to determines the optimal values for the control pulse to minimise the fidelity error The fidelity error is some measure of distance of the system evolution from the given target evolution in the time allowed for the evolution. Parameters drift : Qobj the underlying dynamics generator of the system 205 ## ctrls : List of Qobj a list of control dynamics generators. These are scaled by the amplitudes to alter the overall dynamics initial : Qobj starting point for the evolution. Typically the identity matrix target : Qobj target transformation, e.g. gate or state, for the time evolution num_tslots : integer or None number of timeslots. None implies that timeslots will be given in the tau array evo_time : float or None total time for the evolution None implies that timeslots will be given in the tau array tau : array[num_tslots] of floats or None durations for the timeslots. if this is given then num_tslots and evo_time are dervived from it None implies that timeslot durations will be equal and calculated as evo_time/num_tslots amp_lbound : float or list of floats lower boundaries for the control amplitudes Can be a scalar value applied to all controls or a list of bounds for each control amp_ubound : float or list of floats upper boundaries for the control amplitudes Can be a scalar value applied to all controls or a list of bounds for each control fid_err_targ : float Fidelity error target. Pulse optimisation will terminate when the fidelity error falls below this value Minimum gradient. When the sum of the squares of the gradients wrt to the control amplitudes falls below this value, the optimisation terminates, assuming local minima max_iter : integer Maximum number of iterations of the optimisation algorithm max_wall_time : float Maximum allowed elapsed time for the optimisation algorithm optim_alg : string Multi-variable optimisation algorithm options are BFGS, LBFGSB (see Optimizer classes for details) max_metric_corr : integer The maximum number of variable metric corrections used to define the limited memory matrix. That is the number of previous gradient values that are used to approximate the Hessian see the scipy.optimize.fmin_l_bfgs_b documentation for description of m argument (used only in L-BFGS-B) accuracy_factor : float Determines the accuracy of the result. Typical values for accuracy_factor are: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy scipy.optimize.fmin_l_bfgs_b factr argument. (used only in L-BFGS-B) dyn_type : string Dynamics type, i.e. the type of matrix used to describe the dynamics. Options are UNIT, GEN_MAT, SYMPL (see Dynamics classes for details) prop_type : string Propagator type i.e. the method used to calculate the propagtors and propagtor gradient for each timeslot options are DEF, APPROX, DIAG, FRECHET, AUG_MAT DEF will use the default for the specific dyn_type (see PropagatorComputer classes for details) fid_type : string 206 Fidelity error (and fidelity error gradient) computation method Options are DEF, UNIT, TRACEDIFF, TD_APPROX DEF will use the default for the specific dyn_type (See FideliyComputer classes for details) phase_option : string determines how global phase is treated in fidelity calculations (fid_type=UNIT only). 
Options: PSU - global phase ignored SU - global phase included fid_err_scale_factor : float (used in TRACEDIFF FidelityComputer and subclasses only) The fidelity error calculated is of some arbitary scale. This factor can be used to scale the fidelity error such that it may represent some physical measure If None is given then it is caculated as 1/2N, where N is the dimension of the drift. amp_update_mode : string determines whether propagators are calculated Options: DEF, ALL, DYNAMIC (needs work) DEF will use the default for the specific dyn_type (See TimeslotComputer classes for details) init_pulse_type : string type / shape of pulse(s) used to initialise the the control amplitudes. Options include: RND, LIN, ZERO, SINE, SQUARE, TRIANGLE, SAW (see PulseGen classes for details) pulse_scaling : float Linear scale factor for generated pulses By default initial pulses are generated with amplitudes in the range (-1.0, 1.0). These will be scaled by this parameter pulse_offset : float Line offset for the pulse. That is this value will be added to any initial pulses generated. log_level : integer level of messaging output from the logger. Options are attributes of qutip.logging, in decreasing levels of messaging, are: DEBUG_INTENSE, DEBUG_VERBOSE, DEBUG, INFO, WARN, ERROR, CRITICAL Anything WARN or above is effectively quiet execution, assuming everything runs as expected. The default NOTSET implies that the level will be taken from the QuTiP settings file, which by default is WARN out_file_ext : string or None files containing the initial and final control pulse amplitudes are saved to the current directory. The default name will be postfixed with this extension Setting this to None will suppress the output of files gen_stats : boolean if set to True then statistics for the optimisation run will be generated - accessible through attributes of the stats object Returns Returns instance of OptimResult, which has attributes giving the reason for termination, final fidelity error, final evolution final amplitudes, statistics etc optimize_pulse_unitary(H_d, H_c, U_0, U_targ, num_tslots=None, evo_time=None, tau=None, amp_lbound=-inf, amp_ubound=inf, fid_err_targ=1e10, max_iter=500, max_wall_time=180, optim_alg=LBFGSB, max_metric_corr=10, accuracy_factor=10000000.0, phase_option=PSU, amp_update_mode=ALL, init_pulse_type=RND, pulse_scaling=1.0, pulse_offset=0.0, log_level=0, out_file_ext=.txt, gen_stats=False) Optimise a control pulse to minimise the fidelity error, assuming that the dynamics of the system are generated by unitary operators. This function is simply a wrapper for optimize_pulse, where the appropriate options for unitary dynamics are chosen and the parameter names are in the format familiar to unitary dynamics The dynamics of the system in any given timeslot are governed by the combined Hamiltonian, i.e. 207 the sum of the H_d + ctrl_amp[j]*H_c[j] The control pulse is an [n_ts, len(ctrls)] array of piecewise amplitudes Starting from an intital (typically random) pulse, a multivariable optimisation algorithm attempts to determines the optimal values for the control pulse to minimise the fidelity error The maximum fidelity for a unitary system is 1, i.e. when the time evolution resulting from the pulse is equivalent to the target. And therefore the fidelity error is 1 - fidelity Parameters H_d : Qobj Drift (aka system) the underlying Hamiltonian of the system H_c : Qobj a list of control Hamiltonians. 
These are scaled by the amplitudes to alter the overall dynamics U_0 : Qobj starting point for the evolution. Typically the identity matrix U_targ : Qobj target transformation, e.g. gate or state, for the time evolution num_tslots : integer or None number of timeslots. None implies that timeslots will be given in the tau array evo_time : float or None total time for the evolution None implies that timeslots will be given in the tau array tau : array[num_tslots] of floats or None durations for the timeslots. if this is given then num_tslots and evo_time are dervived from it None implies that timeslot durations will be equal and calculated as evo_time/num_tslots amp_lbound : float or list of floats lower boundaries for the control amplitudes Can be a scalar value applied to all controls or a list of bounds for each control amp_ubound : float or list of floats upper boundaries for the control amplitudes Can be a scalar value applied to all controls or a list of bounds for each control fid_err_targ : float Fidelity error target. Pulse optimisation will terminate when the fidelity error falls below this value Minimum gradient. When the sum of the squares of the gradients wrt to the control amplitudes falls below this value, the optimisation terminates, assuming local minima max_iter : integer Maximum number of iterations of the optimisation algorithm max_wall_time : float Maximum allowed elapsed time for the optimisation algorithm optim_alg : string Multi-variable optimisation algorithm options are BFGS, LBFGSB (see Optimizer classes for details) max_metric_corr : integer The maximum number of variable metric corrections used to define the limited memory matrix. That is the number of previous gradient values that are used to approximate the Hessian see the scipy.optimize.fmin_l_bfgs_b documentation for description of m argument (used only in L-BFGS-B) accuracy_factor : float Determines the accuracy of the result. Typical values for accuracy_factor are: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy scipy.optimize.fmin_l_bfgs_b factr argument. (used only in L-BFGS-B) phase_option : string 208 ## determines how global phase is treated in fidelity calculations (fid_type=UNIT only). Options: PSU - global phase ignored SU - global phase included amp_update_mode : string determines whether propagators are calculated Options: DEF, ALL, DYNAMIC (needs work) DEF will use the default for the specific dyn_type (See TimeslotComputer classes for details) init_pulse_type : string type / shape of pulse(s) used to initialise the the control amplitudes. Options include: RND, LIN, ZERO, SINE, SQUARE, TRIANGLE, SAW (see PulseGen classes for details) pulse_scaling : float Linear scale factor for generated pulses By default initial pulses are generated with amplitudes in the range (-1.0, 1.0). These will be scaled by this parameter pulse_offset : float Line offset for the pulse. That is this value will be added to any initial pulses generated. log_level : integer level of messaging output from the logger. Options are attributes of qutip.logging, in decreasing levels of messaging, are: DEBUG_INTENSE, DEBUG_VERBOSE, DEBUG, INFO, WARN, ERROR, CRITICAL Anything WARN or above is effectively quiet execution, assuming everything runs as expected. The default NOTSET implies that the level will be taken from the QuTiP settings file, which by default is WARN out_file_ext : string or None files containing the initial and final control pulse amplitudes are saved to the current directory. 
The default name will be postfixed with this extension Setting this to None will suppress the output of files gen_stats : boolean if set to True then statistics for the optimisation run will be generated - accessible through attributes of the stats object Returns Returns instance of OptimResult, which has attributes giving the reason for termination, final fidelity error, final evolution final amplitudes, statistics etc create_pulse_optimizer(drift, ctrls, initial, target, num_tslots=None, evo_time=None, tau=None, amp_lbound=-inf, amp_ubound=inf, fid_err_targ=1e10, max_iter=500, max_wall_time=180, optim_alg=LBFGSB, max_metric_corr=10, accuracy_factor=10000000.0, dyn_type=GEN_MAT, prop_type=DEF, fid_type=DEF, phase_option=None, fid_err_scale_factor=None, amp_update_mode=ALL, init_pulse_type=RND, pulse_scaling=1.0, pulse_offset=0.0, log_level=0, gen_stats=False) Generate the objects of the appropriate subclasses required for the pulse optmisation based on the parameters given Note this method may be preferable to calling optimize_pulse if more detailed configuration is required before running the optmisation algorthim, or the algorithm will be run many times, for instances when trying to finding global the optimum or minimum time optimisation Parameters drift : Qobj the underlying dynamics generator of the system ctrls : List of Qobj a list of control dynamics generators. These are scaled by the amplitudes to alter the overall dynamics initial : Qobj 209 ## starting point for the evolution. Typically the identity matrix target : Qobj target transformation, e.g. gate or state, for the time evolution num_tslots : integer or None number of timeslots. None implies that timeslots will be given in the tau array evo_time : float or None total time for the evolution None implies that timeslots will be given in the tau array tau : array[num_tslots] of floats or None durations for the timeslots. if this is given then num_tslots and evo_time are dervived from it None implies that timeslot durations will be equal and calculated as evo_time/num_tslots amp_lbound : float or list of floats lower boundaries for the control amplitudes Can be a scalar value applied to all controls or a list of bounds for each control amp_ubound : float or list of floats upper boundaries for the control amplitudes Can be a scalar value applied to all controls or a list of bounds for each control fid_err_targ : float Fidelity error target. Pulse optimisation will terminate when the fidelity error falls below this value Minimum gradient. When the sum of the squares of the gradients wrt to the control amplitudes falls below this value, the optimisation terminates, assuming local minima max_iter : integer Maximum number of iterations of the optimisation algorithm max_wall_time : float Maximum allowed elapsed time for the optimisation algorithm optim_alg : string Multi-variable optimisation algorithm options are BFGS, LBFGSB (see Optimizer classes for details) max_metric_corr : integer The maximum number of variable metric corrections used to define the limited memory matrix. That is the number of previous gradient values that are used to approximate the Hessian see the scipy.optimize.fmin_l_bfgs_b documentation for description of m argument (used only in L-BFGS-B) accuracy_factor : float Determines the accuracy of the result. Typical values for accuracy_factor are: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy scipy.optimize.fmin_l_bfgs_b factr argument. 
(used only in L-BFGS-B) dyn_type : string Dynamics type, i.e. the type of matrix used to describe the dynamics. Options are UNIT, GEN_MAT, SYMPL (see Dynamics classes for details) prop_type : string Propagator type i.e. the method used to calculate the propagtors and propagtor gradient for each timeslot options are DEF, APPROX, DIAG, FRECHET, AUG_MAT DEF will use the default for the specific dyn_type (see PropagatorComputer classes for details) fid_type : string Fidelity error (and fidelity error gradient) computation method Options are DEF, UNIT, TRACEDIFF, TD_APPROX DEF will use the default for the specific dyn_type (See FideliyComputer classes for details) phase_option : string 210 ## determines how global phase is treated in fidelity calculations (fid_type=UNIT only). Options: PSU - global phase ignored SU - global phase included fid_err_scale_factor : float (used in TRACEDIFF FidelityComputer and subclasses only) The fidelity error calculated is of some arbitary scale. This factor can be used to scale the fidelity error such that it may represent some physical measure If None is given then it is caculated as 1/2N, where N is the dimension of the drift. amp_update_mode : string determines whether propagators are calculated Options: DEF, ALL, DYNAMIC (needs work) DEF will use the default for the specific dyn_type (See TimeslotComputer classes for details) init_pulse_type : string type / shape of pulse(s) used to initialise the the control amplitudes. Options include: RND, LIN, ZERO, SINE, SQUARE, TRIANGLE, SAW (see PulseGen classes for details) pulse_scaling : float Linear scale factor for generated pulses By default initial pulses are generated with amplitudes in the range (-1.0, 1.0). These will be scaled by this parameter pulse_offset : float Line offset for the pulse. That is this value will be added to any initial pulses generated. log_level : integer level of messaging output from the logger. Options are attributes of qutip.logging, in decreasing levels of messaging, are: DEBUG_INTENSE, DEBUG_VERBOSE, DEBUG, INFO, WARN, ERROR, CRITICAL Anything WARN or above is effectively quiet execution, assuming everything runs as expected. The default NOTSET implies that the level will be taken from the QuTiP settings file, which by default is WARN Note value should be set using set_log_level gen_stats : boolean if set to True then statistics for the optimisation run will be generated - accessible through attributes of the stats object Returns Instance of an Optimizer, through which the Config, Dynamics, PulseGen, and TerminationConditions objects can be accessed as attributes. The PropagatorComputer, FidelityComputer and TimeslotComputer objects can be accessed as attributes of the Dynamics object, e.g. optimizer.dynamics.fid_computer The optimisation can be run through the optimizer.run_optimization Pulse generator - Generate pulses for the timeslots Each class defines a gen_pulse function that produces a float array of size num_tslots. Each class produces a differ type of pulse. See the class and gen_pulse function descriptions for details create_pulse_gen(pulse_type=RND, dyn=None) Create and return a pulse generator object matching the given type. The pulse generators each produce a different type of pulse, see the gen_pulse function description for details. 
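Tying the pulse-optimisation functions above together, a minimal sketch that optimises a control pulse realising a Hadamard gate on a single qubit. The module path qutip.control.pulseoptim, the drift/control choice, the time grid, and the result attribute names follow the standard QuTiP 3.x control examples and should be treated as assumptions; the init_pulse_type value refers to the pulse-generator options listed below:

from qutip import sigmax, sigmaz, identity
from qutip.qip.gates import snot
import qutip.control.pulseoptim as cpo   # assumed module path for the wrapper functions

H_d = sigmaz()                    # drift Hamiltonian
H_c = [sigmax()]                  # a single control Hamiltonian
U_0 = identity(2)                 # start the evolution from the identity
U_targ = snot()                   # target: Hadamard gate

result = cpo.optimize_pulse_unitary(H_d, H_c, U_0, U_targ,
                                    num_tslots=10, evo_time=6.0,
                                    fid_err_targ=1e-10, max_iter=200,
                                    init_pulse_type='LIN', gen_stats=True)
print(result.termination_reason, result.fid_err)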
These are the random pulse options: RND - Independent random value in each timeslot RNDFOURIER - Fourier series with random coefficients RNDWAVES - Summation of random waves RNDWALK1 - Random change in amplitude each timeslot RNDWALK2 - Random change in amp gradient each timeslot These are the other non-periodic options: LIN - Linear, i.e. contant gradient over the time ZERO - special case of the LIN pulse, where the gradient is 0 These are the periodic options SINE - Sine wave SQUARE - Square wave SAW - Saw tooth wave TRIANGLE - Triangular wave 211 If a Dynamics object is passed in then this is used in instantiate the PulseGen, meaning that some timeslot and amplitude properties are copied over. Pulse generator - Generate pulses for the timeslots Each class defines a gen_pulse function that produces a float array of size num_tslots. Each class produces a differ type of pulse. See the class and gen_pulse function descriptions for details Utilitiy Functions Graph Theory Routines This module contains a collection of graph theory routines used mainly to reorder matrices for iterative steady state solvers. Breadth-First-Search (BFS) of a graph in CSR or CSC matrix format starting from a given node (row). Takes Qobjs and CSR or CSC matrices as inputs. This function requires a matrix with symmetric structure. Use A+trans(A) if original matrix is not symmetric or not sure. Parameters A : csc_matrix, csr_matrix Input graph in CSC or CSR matrix format start : int Staring node for BFS traversal. Returns order : array Order in which nodes are traversed from starting node. levels : array Level of the nodes in the order that they are traversed. graph_degree(A) Returns the degree for the nodes (rows) of a symmetric graph in sparse CSR or CSC format, or a qobj. Parameters A : qobj, csr_matrix, csc_matrix Input quantum object or csr_matrix. Returns degree : array Array of integers giving the degree for each node (row). reverse_cuthill_mckee(A, sym=False) Returns the permutation array that orders a sparse CSR or CSC matrix in Reverse-Cuthill McKee ordering. Since the input matrix must be symmetric, this routine works on the matrix A+Trans(A) if the sym flag is set to False (Default). It is assumed by default (sym=False) that the input matrix is not symmetric. This is because it is faster to do A+Trans(A) than it is to check for symmetry for a generic matrix. If you are guaranteed that the matrix is symmetric in structure (values of matrix element do not matter) then set sym=True Parameters A : csc_matrix, csr_matrix Input sparse CSC or CSR sparse matrix format. sym : bool {False, True} Flag to set whether input matrix is symmetric. Returns perm : array Array of permuted row and column indices. Notes This routine is used primarily for internal reordering of Lindblad superoperators for use in iterative solver routines. 212 References E. Cuthill and J. McKee, Reducing the Bandwidth of Sparse Symmetric Matrices, ACM 69 Proceedings of the 1969 24th national conference, (1969). maximum_bipartite_matching(A, perm_type=row) Returns an array of row or column permutations that removes nonzero elements from the diagonal of a nonsingular square CSC sparse matrix. Such a permutation is always possible provided that the matrix is nonsingular. This function looks at the structure of the matrix only. The input matrix will be converted to CSC matrix format if necessary. Parameters A : sparse matrix Input matrix perm_type : str {row, column} Type of permutation to generate. 
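A short sketch of the graph routines above, applied to a random sparse matrix standing in for a Liouvillian with unstructured sparsity (the module path qutip.graph is assumed; these functions may also be exposed in the top-level qutip namespace):

import scipy.sparse as sp
from qutip.graph import graph_degree, reverse_cuthill_mckee

A = sp.rand(100, 100, density=0.02, format='csc')
A = (A + A.T).tocsc()                        # symmetrize the structure explicitly

deg = graph_degree(A)                        # degree of each node (row)
perm = reverse_cuthill_mckee(A, sym=True)    # RCM permutation that reduces matrix bandwidth
print(deg[:5], perm[:5])

Permuting the rows and columns of A by perm yields the bandwidth-reduced matrix used internally by the iterative steady-state solvers.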
For maximum_bipartite_matching, Returns perm : array Array of row or column permutations. Notes This function relies on a maximum cardinality bipartite matching algorithm based on a breadth-first search (BFS) of the underlying graph [R3]. References [R3] Analysis of Maximum Transversal Algorithms, ACM Trans. Math. Softw. 38, no. 2, (2011).

weighted_bipartite_matching(A, perm_type=row) Returns an array of row permutations that attempts to maximize the product of the ABS values of the diagonal elements in a nonsingular square CSC sparse matrix. Such a permutation is always possible provided that the matrix is nonsingular. This function looks at both the structure and ABS values of the underlying matrix. Parameters A : csc_matrix Input matrix perm_type : str {row, column} Type of permutation to generate. Returns perm : array Array of row or column permutations. Notes This function uses a weighted maximum cardinality bipartite matching algorithm based on breadth-first search (BFS). The columns are weighted according to the element of max ABS value in the associated rows and are traversed in descending order by weight. When performing the BFS traversal, the row associated to a given column is the one with maximum weight. Unlike other techniques [R4], this algorithm does not guarantee that the product of the diagonal is maximized. However, this limitation is offset by the substantially faster runtime of this method. References [R4] Permuting large entries to the diagonal of sparse matrices, SIAM J. Matrix Anal. and Applics. 20, no. 4, 889 (1997).

Utility Functions

This module contains utility functions that are commonly needed in other qutip modules.

n_thermal(w, w_th) Return the number of photons in thermal equilibrium for a harmonic oscillator mode with frequency w, at the temperature described by w_th, where w_th = k_B * T / hbar. Parameters w : float or array Frequency of the oscillator. w_th : float The temperature in units of frequency (or the same units as w). Returns n_avg : float or array Return the average number of photons in thermal equilibrium for an oscillator with the given frequency and temperature.

linspace_with(start, stop, num=50, elems=[]) Return an array of numbers sampled over the specified interval with additional elements added. Returns a num-spaced array with elements from elems inserted if not already included in the set. The returned sample array is not evenly spaced if additional elements are added. Parameters start : int The starting value of the sequence. stop : int The stopping value of the sequence. num : int, optional Number of samples to generate. elems : list/ndarray, optional Requested elements to include in the array.

clebsch(j1, j2, j3, m1, m2, m3) Calculates the Clebsch-Gordan coefficient for coupling (j1,m1) and (j2,m2) to give (j3,m3). Parameters j1 : float Total angular momentum 1. j2 : float Total angular momentum 2. j3 : float Total angular momentum 3. m1 : float z-component of angular momentum 1. m2 : float z-component of angular momentum 2. m3 : float z-component of angular momentum 3. Returns cg_coeff : float Requested Clebsch-Gordan coefficient.

convert_unit(value, orig=meV, to=GHz) Convert an energy from unit orig to unit to. Parameters value : float / array The energy in the old unit. orig : string The name of the original unit (J, eV, meV, GHz, mK) to : string The name of the new unit (J, eV, meV, GHz, mK) Returns value_new_unit : float / array The energy in the new unit.

File I/O Functions

file_data_read(filename, sep=None) Retrieves an array of data from the requested file. Parameters filename : str Name of file containing requested data.
sep : str Seperator used to store data. Returns data : array_like Data from selected file. file_data_store(filename, data, numtype=complex, numformat=decimal, sep=, ) Stores a matrix of data to a file to be read by an external program. Parameters filename : str Name of data file to be stored, including extension. data: array_like Data to be written to file. numtype : str {complex, real} Type of numerical data. numformat : str {decimal,exp} Format for written data. sep : str Single-character field seperator. Usually a tab, space, comma, or semicolon. Loads data file from file named filename.qu in current directory. Parameters name : str Name of data file to be loaded. Returns qobject : instance / array_like qsave(data, name=qutip_data) Saves given data to file named filename.qu in current directory. Parameters data : instance/array_like Input Python object to be stored. filename : str Name of output data file. Parallelization This function provides functions for parallel execution of loops and function mappings, using the builtin Python module multiprocessing. parfor(func, *args, **kwargs) Executes a multi-variable function in parallel on the local machine. Parallel execution of a for-loop over function func for multiple input arguments and keyword arguments. Note: From QuTiP 3.1.0, we recommend to use qutip.parallel_map instead of this function. 215 ## Parameters func : function_type A function to run in parallel on the local machine. The function func accepts a series of arguments that are passed to the function as variables. In general, the function can have multiple input variables, and these arguments must be passed in the same order as they are defined in the function definition. In addition, the user can pass multiple keyword arguments to the function. The following keyword argument is reserved: num_cpus : int Number of CPUs to use. Default uses maximum number of CPUs. Performance degrades if num_cpus is larger than the physical CPU count of your machine. Returns result : list A list with length equal to number of input parameters containing the output from func. Parallel execution of a mapping of values to the function task. This is functionally equivalent to: The function that is to be called for each value in task_vec. values: array / list The list or array of values for which the task function is to be evaluated. progress_bar: ProgressBar Progress bar class instance for showing progress. Returns result : list **task_kwargs) for each value in values. Serial mapping function with the same call signature as parallel_map, for easy switching between serial and parallel execution. This is functionally equivalent to: This function work as a drop-in replacement of qutip.parallel_map. The function that is to be called for each value in task_vec. values: array / list The list or array of values for which the task function is to be evaluated. progress_bar: ProgressBar Progress bar class instance for showing progress. Returns result : list **task_kwargs) for each value in values. 216 ## IPython Notebook Tools This module contains utility functions for using QuTiP with IPython notebooks. args=None, client=None, view=None, show_scheduling=False, show_progressbar=False) Call the function tast for each value in task_vec using a cluster of IPython engines. The function The client and view are the IPython.parallel client and load-balanced view that will be used in the parfor execution. If these are None, new instances will be created. The function that is to be called for each value in task_vec. 
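As a usage sketch of parallel_map and its drop-in serial replacement serial_map documented above; the toy task function and the keyword names task_kwargs and num_cpus follow the QuTiP 3.x call signature and are assumptions to adapt as needed:

import numpy as np
from qutip import parallel_map, serial_map

def rabi_max(delta, omega=1.0):
    # toy task: maximum excitation probability of a detuned Rabi drive
    return omega**2 / (omega**2 + delta**2)

deltas = np.linspace(-5, 5, 11)
res_par = parallel_map(rabi_max, deltas, task_kwargs={'omega': 2.0}, num_cpus=2)
res_ser = serial_map(rabi_max, deltas, task_kwargs={'omega': 2.0})
print(np.allclose(res_par, res_ser))   # True: identical results, different execution strategy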
## IPython Notebook Tools

This module contains utility functions for using QuTiP with IPython notebooks.

parfor(task, task_vec, args=None, client=None, view=None, show_scheduling=False, show_progressbar=False) Call the function task for each value in task_vec using a cluster of IPython engines. The client and view are the IPython.parallel client and load-balanced view that will be used in the parfor execution. If these are None, new instances will be created. Parameters task : function The function that is to be called for each value in task_vec. task_vec : array / list The list or array of values for which the task function is to be evaluated. args : list / dictionary The optional additional argument to the task function. For example a dictionary with parameter values. client : IPython.parallel.Client The IPython.parallel Client instance that will be used in the parfor execution. view : an IPython.parallel.Client view The view that is to be used in scheduling the tasks on the IPython cluster. Preferably a load-balanced view, which is obtained from the IPython.parallel.Client instance client by calling view = client.load_balanced_view(). show_scheduling : bool {False, True}, default False Display a graph showing how the tasks (the evaluation of task for the values in task_vec) were scheduled on the IPython engine cluster. show_progressbar : bool {False, True}, default False Display an HTML-based progress bar during the execution of the parfor loop. Returns result : list The result list contains the value of task(value, args) for each value in task_vec, that is, it should be equivalent to [task(v, args) for v in task_vec].

parallel_map(task, values, ..., progress_bar=None, show_scheduling=False, **kwargs) Call the function task for each value in values using a cluster of IPython engines. The function task should have the signature task(value, *args, **kwargs). The client and view are the IPython.parallel client and load-balanced view that will be used in the parfor execution. If these are None, new instances will be created. Parameters task : function The function that is to be called for each value in values. values : array / list The list or array of values for which the task function is to be evaluated. client : IPython.parallel.Client The IPython.parallel Client instance that will be used in the parfor execution. view : an IPython.parallel.Client view The view that is to be used in scheduling the tasks on the IPython cluster. Preferably a load-balanced view, which is obtained from the IPython.parallel.Client instance client by calling view = client.load_balanced_view(). show_scheduling : bool {False, True}, default False Display a graph showing how the tasks (the evaluation of task for the values in values) were scheduled on the IPython engine cluster. show_progressbar : bool {False, True}, default False Display an HTML-based progress bar during the execution of the parfor loop. Returns result : list The result list contains the value of task(value, *task_args, **task_kwargs) for each value in values.

version_table(verbose=False) Print an HTML-formatted table with version numbers for QuTiP and its dependencies. Use it in an IPython notebook to show which versions of different packages were used to run the notebook. This should make it possible to reproduce the environment and the calculation later on. Returns version_table : string Return an HTML-formatted string containing version information for QuTiP dependencies.

Miscellaneous

about() About box for qutip. Gives version numbers for QuTiP, NumPy, SciPy, Cython, and MatPlotLib.

simdiag(ops, evals=True) Simultaneous diagonalization of commuting Hermitian matrices. Parameters ops : list/array List or array of qobjs representing commuting Hermitian operators. Returns eigs : tuple Tuple of arrays representing eigvecs and eigvals of quantum objects corresponding to simultaneous eigenvectors and eigenvalues for each operator.

CHAPTER FIVE CHANGE LOG

## 5.1 Version 3.1.0 (January 1, 2015): New Features MAJOR FEATURE: New module for quantum control (qutip.control). NAMESPACE CHANGE: QuTiP no longer exports symbols from NumPy and matplotlib, so those modules must now be explicitly imported when required. New module for counting statistics.
Stochastic solvers now run trajectories in parallel. New superoperator and tensor manipulation functions (super_tensor, composite, tensor_contract). New logging module for debugging (qutip.logging). New user-available API for parallelization (parallel_map). New enhanced (optional) text-based progressbar (qutip.ui.EnhancedTextProgressBar) Faster Python based monte carlo solver (mcsolve). Support for progress bars in propagator function. Time-dependent Cython code now calls complex cmath functions. Random numbers seeds can now be reused for successive calls to mcsolve. The Bloch-Redfield master equation solver now supports optional Lindblad type collapse operators. Improved handling of ODE integration errors in mesolve. Improved correlation function module (for example, improved support for time-dependent problems). Improved parallelization of mcsolve (can now be interrupted easily, support for IPython.parallel, etc.) Many performance improvements, and much internal code restructuring. Bug Fixes Cython build files for time-dependent string format now removed automatically. Fixed incorrect solution time from inverse-power method steady state solver. mcsolve now supports Options(store_states=True) Fixed bug in hadamard gate function. Fixed compatibility issues with NumPy 1.9.0. Progressbar in mcsolve can now be suppressed. Fixed bug in gate_expand_3toN. Fixed bug for time-dependent problem (list string format) with multiple terms in coefficient to an operator. 219 ## 5.2 Version 3.0.1 (Aug 5, 2014): Bug Fixes Fix bug in create(), which returned a Qobj with CSC data instead of CSR. Fix several bugs in mcsolve: Incorrect storing of collapse times and collapse operator records. Incorrect averaging of expectation values for different trajectories when using only 1 CPU. Fix bug in parsing of time-dependent Hamiltonian/collapse operator arguments that occurred when the args argument is not a dictionary. Fix bug in internal _version2int function that cause a failure when parsing the version number of the Cython package. ## 5.3 Version 3.0.0 (July 17, 2014): New Features New module qutip.stochastic with stochastic master equation and stochastic Schrdinger equation solvers. The steadystate solver no longer use umfpack by default. New pre-processing methods for reordering and balancing the linear equation system used in direct solution of the steady state. New module qutip.qip with utilities for quantum information processing, including pre-defined quantum gates along with functions for expanding arbitrary 1, 2, and 3 qubit gates to N qubit registers, circuit representations, library of quantum algorithms, and basic physical models for some common QIP architectures. New module qutip.distributions with unified API for working with distribution functions. New format for defining time-dependent Hamiltonians and collapse operators, using a pre-calculated numpy array that specifies the values of the Qobj-coefficients for each time step. New functions for working with different superoperator representations, including Kraus and Chi representation. New functions for visualizing quantum states using Qubism and Schimdt plots: plot_qubism and plot_schmidt. Dynamics solver now support taking argument e_ops (expectation value operators) in dictionary form. Public plotting functions from the qutip.visualization module are now prefixed with plot_ (e.g., plot_fock_distribution). The plot_wigner and plot_wigner_fock_distribution now supports 3D views in addition to contour views. 
New API and new functions for working with spin operators and states, including for example spin_Jx, spin_Jy, spin_Jz and spin_state, spin_coherent. The expect function now supports a list of operators, in addition to the previously supported list of states. Simplified creation of qubit states using ket function. The module qutip.cyQ has been renamed to qutip.cy and the sparse matrix-vector functions spmv and spmv1d has been combined into one function spmv. New functions for operating directly on the underlaying sparse CSR data have been added (e.g., spmv_csr). Performance improvements. New and improved Cython functions for calculating expectation values for state vectors, density matrices in matrix and vector form. The concurrence function now supports both pure and mixed states. Added function for calculating the entangling power of a two-qubit gate. New functions for generating Bell states, and singlet and triplet states. 220 QuTiP no longer contains the demos GUI. The examples are now available on the QuTiP web site. The qutip.gui module has been renamed to qutip.ui and does no longer contain graphical UI elements. New text-based and HTML-based progressbar classes. Support for harmonic oscillator operators/states in a Fock state basis that does not start from zero (e.g., in the range [M,N+1]). Support for eliminating and extracting states from Qobj instances (e.g., removing one state from a two-qubit system to obtain a three-level system). Support for time-dependent Hamiltonian and Liouvillian callback functions that depend on the instantaneous state, which for example can be used for solving master equations with mean field terms. Improvements Restructured and optimized implementation of Qobj, which now has significantly lower memory footprint due to avoiding excessive copying of internal matrix data. The classes OdeData, Odeoptions, Odeconfig are now called Result, Options, and Config, respectively, and are available in the module qutip.solver. The squeez function has been renamed to squeeze. Better support for sparse matrices when calculating propagators using the propagator function. Improved Bloch sphere. Restructured and improved the module qutip.sparse, which now only operates directly on sparse matrices (not on Qobj instances). Improved and simplified implement of the tensor function. Improved performance, major code cleanup (including namespace changes), and numerous bug fixes. Benchmark scripts improved and restructured. QuTiP is now using continuous integration tests (TravisCI). ## 5.4 Version 2.2.0 (March 01, 2013): New Features New Bloch3d class for plotting 3D Bloch spheres using Mayavi. Bloch sphere vectors now look like arrows. Partial transpose function. Continuos variable functions for calculating correlation and covariance matrices, the Wigner covariance matrix and the logarithmic negativity for for multimode fields in Fock basis. The master-equation solver (mesolve) now accepts pre-constructed Liouvillian terms, which makes it possible to solve master equations that are not on the standard Lindblad form. Optional Fortran Monte Carlo solver (mcsolve_f90) by Arne Grimsmo. A module of tools for using QuTiP in IPython notebooks. Increased performance of the steady state solver. New Wigner colormap for highlighting negative values. More graph styles to the visualization module. 221 Bug Fixes: Function based time-dependent Hamiltonians now keep the correct phase. mcsolve no longer prints to the command line if ntraj=1. 
## 5.5 Version 2.1.0 (October 05, 2012): New Features New method for generating Wigner functions based on Laguerre polynomials. coherent(), coherent_dm(), and thermal_dm() can now be expressed using analytic values. Unittests now use nose and can be run after installation. Functions for quantum process tomography. Window icons are now set for Ubuntu application launcher. The propagator function can now take a list of times as argument, and returns a list of corresponding propagators. Bug Fixes: mesolver now correctly uses the user defined rhs_filename in Odeoptions(). rhs_generate() now handles user defined filenames properly. Density matrix returned by propagator_steadystate is now Hermitian. eseries_value returns real list if all imag parts are zero. mcsolver now gives correct results for strong damping rates. Odeoptions now prints mc_avg correctly. Do not check for PyObj in mcsolve when gui=False. Eseries now correctly handles purely complex rates. thermal_dm() function now uses truncated operator method. Cython based time-dependence now Python 3 compatible. Removed call to NSAutoPool on mac systems. Progress bar now displays the correct number of CPUs used. Qobj.diag() returns reals if operator is Hermitian. Text for progress bar on Linux systems is no longer cutoff. ## 5.6 Version 2.0.0 (June 01, 2012): The second version of QuTiP has seen many improvements in the performance of the original code base, as well as the addition of several new routines supporting a wide range of functionality. Some of the highlights of this release include: 222 New Features QuTiP now includes solvers for both Floquet and Bloch-Redfield master equations. The Lindblad master equation and Monte Carlo solvers allow for time-dependent collapse operators. It is possible to automatically compile time-dependent problems into c-code using Cython (if installed). Python functions can be used to create arbitrary time-dependent Hamiltonians and collapse operators. Solvers now return Odedata objects containing all simulation results and parameters, simplifying the saving of simulation results. Important: This breaks compatibility with QuTiP version 1.x. mesolve and mcsolve can reuse Hamiltonian data when only the initial state, or time-dependent arguments, need to be changed. QuTiP includes functions for creating random quantum states and operators. The generation and manipulation of quantum objects is now more efficient. Quantum objects have basis transformation and matrix element calculations as built-in methods. The quantum object eigensolver can use sparse solvers. The partial-trace (ptrace) function is up to 20x faster. The Bloch sphere can now be used with the Matplotlib animation function, and embedded as a subplot in a figure. QuTiP has built-in functions for saving quantum objects and data arrays. The steady-state solver has been further optimized for sparse matrices, and can handle much larger system Hamiltonians. There are three new entropy functions for concurrence, mutual information, and conditional entropy. Correlation functions have been combined under a single function. The operator norm can now be set to trace, Frobius, one, or max norm. Global QuTiP settings can now be modified. QuTiP includes a collection of unit tests for verifying the installation. Demos window now lets you copy and paste code from each example. ## 5.7 Version 1.1.4 (May 28, 2012): Bug Fixes: Fixed bug pointed out by Brendan Abolins. Qobj.tr() returns zero-dim ndarray instead of float or complex. 
Updated factorial import for scipy version 0.10+ ## 5.8 Version 1.1.3 (November 21, 2011): New Functions: Allow custom naming of Bloch sphere. 223 Bug Fixes: Fixed text alignment issues in AboutBox. Added fix for SciPy V>0.10 where factorial was moved to scipy.misc module. Added tidyup function to tensor function output. Removed openmp flags from setup.py as new Mac Xcode compiler does not recognize them. Qobj diag method now returns real array if all imaginary parts are zero. Examples GUI now links to new documentation. Fixed zero-dimensional array output from metrics module. ## 5.9 Version 1.1.2 (October 27, 2011) Bug Fixes Fixed issue where Monte Carlo states were not output properly. ## 5.10 Version 1.1.1 (October 25, 2011) THIS POINT-RELEASE INCLUDES VASTLY IMPROVED TIME-INDEPENDENT MCSOLVE AND ODESOLVE PERFORMANCE New Functions Number of CPUs can now be changed. Bug Fixes Metrics no longer use dense matrices. Fixed Bloch sphere grid issue with matplotlib 1.1. Qobj trace operation uses only sparse matrices. Fixed issue where GUI windows do not raise to front. ## 5.11 Version 1.1.0 (October 04, 2011) THIS RELEASE NOW REQUIRES THE GCC COMPILER TO BE INSTALLED New Functions tidyup function to remove small elements from a Qobj. Added simdiag for simultaneous diagonalization of operators. Added eigenstates method returning eigenstates and eigenvalues to Qobj class. Added hinton function for visualizing density matrices. 224 Bug Fixes Switched Examples to new Signals method used in PySide 1.0.6+. Switched ProgressBar to new Signals method. Fixed memory issue in expm functions. Fixed memory bug in isherm. Made all Qobj data complex by default. Reduced ODE tolerance levels in Odeoptions. Fixed bug in ptrace where dense matrix was used instead of sparse. Fixed issue where PyQt4 version would not be displayed in about box. Fixed issue in Wigner where xvec was used twice (in place of yvec). ## 5.12 Version 1.0.0 (July 29, 2011) Initial release. 225 CHAPTER SIX DEVELOPERS Robert Johansson (RIKEN) Paul Nation (Korea University) 6.2 Contributors Note: Anyone is welcome to contribute to QuTiP. If you are interested in helping, please let us know! alexbrc (github user) - Code contributor Alexander Pitchford (Aberystwyth University) - Code contributor Anders Lund (Technical University of Denmark) - Bug hunting for the Monte-Carlo solver Andre Carvalho - Bug hunter Andr Xuereb (University of Hannover) - Bug hunter Anubhav Vardhan (IIT, Kanpur) - Bug hunter, Code contributor, Documentation Arne Grimsmo (University of Auckland) - Bug hunter, Code contributor Ben Criger (Waterloo IQC) - Code contributor Bredan Abolins (Berkeley) - Bug hunter Claudia Degrandi (Yale University) - Documentation Dawid Crivelli - Bug hunter Denis Vasilyev (St. 
Petersburg State University) - Code contributor Dong Zhou (Yale University) - Bug hunter Florian Ong (Institute for Quantum Computation) - Bug hunter Frank Schima - Macports packaging Henri Nielsen (Technical University of Denmark) - Bug hunter Hwajung Kang (Systems Biology Institute, Tokyo) - Suggestions for improving Bloch class James Clemens (Miami University - Ohio) - Bug hunter Johannes Feist - Code contributor Jonas Hörsch - Code contributor Jonas Neergaard-Nielsen (Technical University of Denmark) - Code contributor, Windows support JP Hadden (University of Bristol) - Code contributor, improved Bloch sphere visualization Kevin Fischer (Stanford) - Code contributor Laurence Stant - Documentation Markus Baden (Centre for Quantum Technologies, Singapore) - Code contributor, Documentation Myung-Joong Hwang (Pohang University of Science and Technology) - Bug hunter Neill Lambert (RIKEN) - Code contributor, Windows support Nikolas Tezak (Stanford) - Code contributor Per Nielsen (Technical University of Denmark) - Bug hunter, Code contributor Piotr Migdał (ICFO) - Code contributor Reinier Heeres (Yale University) - Code contributor Robert Jördens (NIST) - Linux packaging Simon Whalen - Code contributor W.M. Witzel - Bug hunter

CHAPTER SEVEN BIBLIOGRAPHY

CHAPTER EIGHT INDICES AND TABLES genindex modindex search

BIBLIOGRAPHY
[R1] Shore, B. W., The Theory of Coherent Atomic Excitation, Wiley, 1990.
[R2] http://en.wikipedia.org/wiki/Concurrence_(quantum_computing)
[R3] I. S. Duff, K. Kaya, and B. Uçar, Design, Implementation, and Analysis of Maximum Transversal Algorithms, ACM Trans. Math. Softw. 38, no. 2, (2011).
[R4] I. S. Duff and J. Koster, The design and use of algorithms for permuting large entries to the diagonal of sparse matrices, SIAM J. Matrix Anal. and Applics. 20, no. 4, 889 (1997).
[Hav03] Havel, T. Robust procedures for converting among Lindblad, Kraus and matrix representations of quantum dynamical semigroups. Journal of Mathematical Physics 44(2), 534 (2003). doi:10.1063/1.1518555.
[Wat13] Watrous, J. Theory of Quantum Information, lecture notes.
[Moh08] M. Mohseni, A. T. Rezakhani, D. A. Lidar, Quantum-process tomography: Resource analysis of different strategies, Phys. Rev. A 77, 032322 (2008). doi:10.1103/PhysRevA.77.032322.
[Gri98] M. Grifoni, P. Hänggi, Driven quantum tunneling, Physics Reports 304, 299 (1998). doi:10.1016/S0370-1573(98)00022-2.
[Cre03] C. E. Creffield, Location of crossings in the Floquet spectrum of a driven two-level system, Phys. Rev. B 67, 165301 (2003). doi:10.1103/PhysRevB.67.165301.
[Gar03] Gardiner and Zoller, Quantum Noise (Springer, 2004).
[Bre02] H.-P. Breuer and F. Petruccione, The Theory of Open Quantum Systems (Oxford, 2002).
[Coh92] C. Cohen-Tannoudji, J. Dupont-Roc, G. Grynberg, Atom-Photon Interactions: Basic Processes and Applications (Wiley, 1992).
[WBC11] C. Wood, J. Biamonte, D. G. Cory, Tensor networks and graphical calculus for open quantum systems.
arXiv:1111.6950 233 ## PYTHON MODULE INDEX q qutip, 218 qutip.bloch_redfield, 172 qutip.continuous_variables, 166 qutip.control.grape, 204 qutip.control.pulsegen, 212 qutip.control.pulseoptim, 205 qutip.correlation, 179 qutip.entropy, 163 qutip.essolve, 171 qutip.expect, 162 qutip.fileio, 215 qutip.floquet, 173 qutip.fortran.mcsolve_f90, 171 qutip.graph, 212 qutip.ipynbtools, 217 qutip.mcsolve, 170 qutip.mesolve, 168 qutip.metrics, 164 qutip.operators, 152 qutip.parallel, 215 qutip.partial_transpose, 163 qutip.propagator, 189 qutip.qip.algorithms.qft, 204 qutip.qip.gates, 198 qutip.qip.qubits, 203 qutip.random_objects, 158 qutip.sesolve, 168 qutip.states, 146 qutip.stochastic, 177 qutip.superop_reps, 161 qutip.superoperator, 160 qutip.tensor, 161 qutip.three_level_atom, 159 qutip.tomography, 197 qutip.utilities, 214 qutip.visualization, 191 qutip.wigner, 190 235 INDEX coherence_function_g2() (in module qutip.correlation), 186 coherent() (in module qutip.states), 147 coherent_dm() (in module qutip.states), 147 combine_dyn_gen() (Dynamics method), 140 composite() (in module qutip.tensor), 162 compute_evolution() (Dynamics method), 140 concurrence() (in module qutip.entropy), 163 conj() (Qobj method), 116 controlled_gate() (in module qutip.qip.gates), 202 convert_unit() (in module qutip.utilities), 214 correlation() (in module qutip.correlation), 179 correlation_2op_1t() (in module qutip.correlation), 180 correlation_2op_2t() (in module qutip.correlation), 180 correlation_3op_1t() (in module qutip.correlation), 181 correlation_3op_2t() (in module qutip.correlation), 182 correlation_4op_1t() (in module qutip.correlation), 182 correlation_4op_2t() (in module qutip.correlation), 183 correlation_matrix() (in module qutip.continuous_variables), 166 correlation_matrix_field() (in module qutip.continuous_variables), 166 (in module qutip.continuous_variables), 167 correlation_ss() (in module qutip.correlation), 179 covariance_matrix() (in module qutip.continuous_variables), 166 cphase() (in module qutip.qip.gates), 199 create() (in module qutip.operators), 152 create_pulse_gen() (in module qutip.control.pulsegen), 211 create_pulse_optimizer() (in module qutip.control.pulseoptim), 209 csign() (in module qutip.qip.gates), 199 ## about() (in module qutip), 218 average_gate_fidelity() (in module qutip.metrics), 166 B basis() (in module qutip.states), 146 berkeley() (in module qutip.qip.gates), 199 Bloch (class in qutip.bloch), 122 Bloch3d (class in qutip.bloch3d), 124 bloch_redfield_solve() (in module qutip.bloch_redfield), 173 bloch_redfield_tensor() (in module qutip.bloch_redfield), 173 brmesolve() (in module qutip.bloch_redfield), 172 build_preconditioner() (in module bures_angle() (in module qutip.metrics), 165 bures_dist() (in module qutip.metrics), 165 C checkherm() (Qobj method), 116 CircuitProcessor (class in qutip.qip.models), 134 CircularSpinChain (class in qutip.qip.models.spinchain), 136 clear() (Bloch method), 123 clear() (Bloch3d method), 126 clebsch() (in module qutip.utilities), 214 cnot() (in module qutip.qip.gates), 199 coherence_function_g1() (in module qutip.correlation), 185 D dag() (Qobj method), 116 destroy() (in module qutip.operators), 153 diag() (Qobj method), 116 237 dispersive_gate_correction() (DispersivecQED method), 137 DispersivecQED (class in qutip.qip.models.cqed), 136 displace() (in module qutip.operators), 153 Distribution (class in qutip.distributions), 129 Dynamics (class in qutip.control.dynamics), 137 DynamicsSymplectic (class in qutip.control.dynamics), 142 
DynamicsUnitary (class in qutip.control.dynamics), 141 E eigenenergies() (Qobj method), 116 eigenstates() (Qobj method), 117 eliminate_states() (Qobj method), 117 enr_destroy() (in module qutip.operators), 157 enr_fock() (in module qutip.states), 152 enr_identity() (in module qutip.operators), 157 enr_state_dictionaries() (in module qutip.states), 152 enr_thermal_dm() (in module qutip.states), 152 ensure_decomp_curr() (Dynamics method), 140 entropy_conditional() (in module qutip.entropy), 163 entropy_linear() (in module qutip.entropy), 163 entropy_mutual() (in module qutip.entropy), 164 entropy_vn() (in module qutip.entropy), 164 eseries (class in qutip), 121 essolve() (in module qutip.essolve), 171 evaluate() (Qobj static method), 117 expect() (in module qutip.expect), 162 expm() (Qobj method), 118 extract_states() (Qobj method), 118 F fidelity() (in module qutip.metrics), 164 file_data_store() (in module qutip.fileio), 215 flag_system_changed() (Dynamics method), 140 floquet_modes() (in module qutip.floquet), 174 floquet_modes_t() (in module qutip.floquet), 174 floquet_modes_t_lookup() (in module qutip.floquet), 175 floquet_modes_table() (in module qutip.floquet), 175 floquet_state_decomposition() (in module qutip.floquet), 176 floquet_states_t() (in module qutip.floquet), 175 floquet_wavefunction_t() (in module qutip.floquet), 176 fmmesolve() (in module qutip.floquet), 173 fock() (in module qutip.states), 148 fock_dm() (in module qutip.states), 148 fredkin() (in module qutip.qip.gates), 201 fsesolve() (in module qutip.floquet), 176 full() (Qobj method), 118 G Gate (class in qutip.qip.circuit), 132 gate_expand_1toN() (in module qutip.qip.gates), 202 gate_expand_2toN() (in module qutip.qip.gates), 203 gate_expand_3toN() (in module qutip.qip.gates), 203 gate_sequence_product() (in module qutip.qip.gates), 202 gen_pulse() (PulseGen method), 143 gen_pulse() (PulseGenLinear method), 144 gen_pulse() (PulseGenRandom method), 143 gen_pulse() (PulseGenSaw method), 146 gen_pulse() (PulseGenSine method), 145 gen_pulse() (PulseGenSquare method), 146 gen_pulse() (PulseGenTriangle method), 146 gen_pulse() (PulseGenZero method), 144 get_ctrl_dyn_gen() (Dynamics method), 140 get_ctrl_dyn_gen() (DynamicsSymplectic method), 142 get_ctrl_dyn_gen() (DynamicsUnitary method), 142 get_drift_dim() (Dynamics method), 140 get_dyn_gen() (Dynamics method), 140 get_dyn_gen() (DynamicsSymplectic method), 142 get_dyn_gen() (DynamicsUnitary method), 142 get_num_ctrls() (Dynamics method), 140 get_ops_and_u() (CircuitProcessor method), 134 get_ops_labels() (CircuitProcessor method), 134 get_owd_evo_target() (Dynamics method), 140 globalphase() (in module qutip.qip.gates), 202 grape_unitary() (in module qutip.control.grape), 204 (in module qutip.control.grape), 205 GRAPEResult (class in qutip.control.grape), 137 graph_degree() (in module qutip.graph), 212 groundstate() (Qobj method), 118 H 202 HarmonicOscillatorProbabilityFunction (class in qutip.distributions), 132 HarmonicOscillatorWaveFunction (class in qutip.distributions), 131 hilbert_dist() (in module qutip.metrics), 165 hinton() (in module qutip.visualization), 191 I identity() (in module qutip.operators), 155 init_pulse() (PulseGen method), 143 init_pulse() (PulseGenLinear method), 144, 145 init_pulse() (PulseGenPeriodic method), 145 init_time_slots() (Dynamics method), 140 initialize_controls() (Dynamics method), 140 iswap() (in module qutip.qip.gates), 200 238 J jmat() (in module qutip.operators), 154 K ket2dm() (in module qutip.states), 149 L (in module 
qutip.superoperator), 160 LinearSpinChain (class in qutip.qip.models.spinchain), 136 linspace_with() (in module qutip.utilities), 214 liouvillian() (in module qutip.superoperator), 160 logarithmic_negativity() (in module qutip.continuous_variables), 167 M make_sphere() (Bloch method), 123 make_sphere() (Bloch3d method), 126 marginal() (Distribution method), 130 matrix_element() (Qobj method), 119 matrix_histogram() (in module qutip.visualization), 191 matrix_histogram_complex() (in module qutip.visualization), 192 maximum_bipartite_matching() (in module qutip.graph), 213 mcsolve() (in module qutip.mcsolve), 170 mcsolve_f90() (in module qutip.fortran.mcsolve_f90), 171 mesolve() (in module qutip.mesolve), 168 N n_thermal() (in module qutip.utilities), 214 norm() (Qobj method), 119 num() (in module qutip.operators), 154 O ode2es() (in module qutip.essolve), 172 operator_to_vector() (in module qutip.superoperator), 160 optimize_circuit() (CircuitProcessor method), 134 optimize_pulse() (in module qutip.control.pulseoptim), 205 optimize_pulse_unitary() (in module qutip.control.pulseoptim), 207 Options (class in qutip.solver), 126 orbital() (in module qutip), 196 overlap() (Qobj method), 119 P parallel_map() (in module qutip.ipynbtools), 217 parallel_map() (in module qutip.parallel), 216 parfor() (in module qutip.ipynbtools), 217 parfor() (in module qutip.parallel), 215 partial_transpose() (in module qutip.partial_transpose), 163 ## permute() (Qobj method), 120 phase() (in module qutip.operators), 157 phase_basis() (in module qutip.states), 150 phasegate() (in module qutip.qip.gates), 198 plot_energy_levels() (in module qutip.visualization), 192 plot_expectation_values() (in module qutip.visualization), 195 plot_fock_distribution() (in module qutip.visualization), 192 plot_grape_control_fields() (in module qutip.control.grape), 204 plot_points() (Bloch3d method), 126 plot_pulses() (CircuitProcessor method), 135 plot_qubism() (in module qutip.visualization), 195 plot_schmidt() (in module qutip.visualization), 194 plot_spin_distribution_2d() (in module qutip.visualization), 196 plot_spin_distribution_3d() (in module qutip.visualization), 196 plot_vectors() (Bloch3d method), 126 plot_wigner() (in module qutip.visualization), 193 plot_wigner_fock_distribution() (in module qutip.visualization), 193 process_fidelity() (in module qutip.metrics), 166 project() (Distribution method), 130 propagator() (in module qutip.propagator), 189 (in module qutip.propagator), 189 propagators() (QubitCircuit method), 133 ptrace() (Qobj method), 120 pulse_matrix() (CircuitProcessor method), 135 PulseGen (class in qutip.control.pulsegen), 142 PulseGenLinear (class in qutip.control.pulsegen), 144 PulseGenPeriodic (class in qutip.control.pulsegen), 145 PulseGenRandom (class in qutip.control.pulsegen), 143 PulseGenSaw (class in qutip.control.pulsegen), 146 PulseGenSine (class in qutip.control.pulsegen), 145 PulseGenSquare (class in qutip.control.pulsegen), 145 PulseGenTriangle (class in qutip.control.pulsegen), 146 PulseGenZero (class in qutip.control.pulsegen), 143 Q QDistribution (class in qutip.distributions), 131 qeye() (in module qutip.operators), 155 qft() (in module qutip.qip.algorithms.qft), 204 qft_gate_sequence() (in module qutip.qip.algorithms.qft), 204 qft_steps() (in module qutip.qip.algorithms.qft), 204 qfunc() (in module qutip.wigner), 190 Qobj (class in qutip), 115 qpt() (in module qutip.tomography), 197 239 ## qpt_plot() (in module qutip.tomography), 197 qpt_plot_combined() (in module qutip.tomography), 197 
qsave() (in module qutip.fileio), 215 qubit_states() (in module qutip.qip.qubits), 203 QubitCircuit (class in qutip.qip.circuit), 132 qutip (module), 189, 196, 218 qutip.bloch_redfield (module), 172 qutip.continuous_variables (module), 166 qutip.control.grape (module), 204 qutip.control.pulsegen (module), 211, 212 qutip.control.pulseoptim (module), 205 qutip.correlation (module), 179 qutip.entropy (module), 163 qutip.essolve (module), 171 qutip.expect (module), 162 qutip.fileio (module), 215 qutip.floquet (module), 173 qutip.fortran.mcsolve_f90 (module), 171 qutip.graph (module), 212 qutip.ipynbtools (module), 217 qutip.mcsolve (module), 170 qutip.mesolve (module), 168 qutip.metrics (module), 164 qutip.operators (module), 152 qutip.parallel (module), 215 qutip.partial_transpose (module), 163 qutip.propagator (module), 189 qutip.qip.algorithms.qft (module), 204 qutip.qip.gates (module), 198 qutip.qip.qubits (module), 203 qutip.random_objects (module), 158 qutip.sesolve (module), 168 qutip.states (module), 146 qutip.stochastic (module), 177 qutip.superop_reps (module), 161 qutip.superoperator (module), 160 qutip.tensor (module), 161 qutip.three_level_atom (module), 159 qutip.tomography (module), 197 qutip.utilities (module), 214 qutip.visualization (module), 191 qutip.wigner (module), 190 qutrit_basis() (in module qutip.states), 149 qutrit_ops() (in module qutip.operators), 155 R rand_dm() (in module qutip.random_objects), 158 rand_herm() (in module qutip.random_objects), 158 rand_ket() (in module qutip.random_objects), 158 rand_unitary() (in module qutip.random_objects), 159 remove_gate() (QubitCircuit method), 133 render() (Bloch method), 123 reset() (PulseGen method), 143 reset() (PulseGenLinear method), 144, 145 reset() (PulseGenPeriodic method), 145 ## resolve_gates() (QubitCircuit method), 134 Result (class in qutip.solver), 127 reverse_circuit() (QubitCircuit method), 134 reverse_cuthill_mckee() (in module qutip.graph), 212 rhs_clear() (in module qutip), 190 rhs_generate() (in module qutip), 189 rotation() (in module qutip.qip.gates), 202 run() (CircuitProcessor method), 135 run_state() (CircuitProcessor method), 135 rx() (in module qutip.qip.gates), 198 ry() (in module qutip.qip.gates), 198 rz() (in module qutip.qip.gates), 198 S save() (Bloch method), 123 save() (Bloch3d method), 126 save_amps() (Dynamics method), 140 serial_map() (in module qutip.parallel), 216 sesolve() (in module qutip.sesolve), 168 set_label_convention() (Bloch method), 124 set_log_level() (Dynamics method), 141 show() (Bloch method), 124 show() (Bloch3d method), 126 sigmam() (in module qutip.operators), 155 sigmap() (in module qutip.operators), 155 sigmax() (in module qutip.operators), 156 sigmay() (in module qutip.operators), 156 sigmaz() (in module qutip.operators), 156 simdiag() (in module qutip), 218 smepdpsolve() (in module qutip.stochastic), 178 smesolve() (in module qutip.stochastic), 177 snot() (in module qutip.qip.gates), 198 spec() (eseries method), 121 spectral_decomp() (Dynamics method), 141 spectral_decomp() (DynamicsUnitary method), 142 spectrum() (in module qutip.correlation), 184 spectrum_correlation_fft() (in module qutip.correlation), 185 spectrum_pi() (in module qutip.correlation), 184 spectrum_ss() (in module qutip.correlation), 184 sphereplot() (in module qutip.visualization), 194 SpinChain (class in qutip.qip.models.spinchain), 135 spost() (in module qutip.superoperator), 160 spre() (in module qutip.superoperator), 160 sprepost() (in module qutip.superoperator), 160 sqrtiswap() (in module 
qutip.qip.gates), 201 sqrtm() (Qobj method), 120 sqrtnot() (in module qutip.qip.gates), 198 sqrtswap() (in module qutip.qip.gates), 200 squeeze() (in module qutip.operators), 156 squeezing() (in module qutip.operators), 157 ssepdpsolve() (in module qutip.stochastic), 178 ssesolve() (in module qutip.stochastic), 177 state_index_number() (in module qutip.states), 151 state_number_enumerate() (in module qutip.states), 150 240 ## state_number_index() (in module qutip.states), 151 state_number_qobj() (in module qutip.states), 151 StochasticSolverOptions (class in qutip.stochastic), 127 super_tensor() (in module qutip.tensor), 162 swap() (in module qutip.qip.gates), 200 swapalpha() (in module qutip.qip.gates), 200 ## wigner() (in module qutip.wigner), 190 wigner_covariance_matrix() (in module qutip.continuous_variables), 167 WignerDistribution (class in qutip.distributions), 130 T tensor() (in module qutip.tensor), 161 tensor_contract() (in module qutip.tensor), 162 thermal_dm() (in module qutip.states), 149 three_level_basis() (in module qutip.three_level_atom), 159 three_level_ops() (in module qutip.three_level_atom), 159 tidyup() (eseries method), 121 tidyup() (Qobj method), 120 to_choi() (in module qutip.superop_reps), 161 to_kraus() (in module qutip.superop_reps), 161 to_super() (in module qutip.superop_reps), 161 toffoli() (in module qutip.qip.gates), 201 tr() (Qobj method), 120 tracedist() (in module qutip.metrics), 165 trans() (Qobj method), 120 transform() (Qobj method), 121 (class in qutip.distributions), 131 U unit() (Qobj method), 121 update() (HarmonicOscillatorProbabilityFunction method), 132 update() (HarmonicOscillatorWaveFunction method), 132 131 update_ctrl_amps() (Dynamics method), 141 update_psi() method), 131 update_rho() method), 131 V value() (eseries method), 122 variance() (in module qutip.expect), 162 vector_mutation (Bloch attribute), 124 vector_style (Bloch attribute), 124 vector_to_operator() (in module qutip.superoperator), 160 vector_width (Bloch attribute), 124 version_table() (in module qutip.ipynbtools), 218 visualize() (Distribution method), 130 W weighted_bipartite_matching() qutip.graph), 213 (in module 241
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.520155131816864, "perplexity": 8894.600658490306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677884.28/warc/CC-MAIN-20191018032611-20191018060111-00396.warc.gz"}
https://abmathematics.com/?m=201609
## Composites and Inverse Functions Test

On Wednesday, September 28th we’ll have our first HL test on composite and inverse functions. The following list of questions should be completed as part of your review of this material, and we can discuss any problem you may be having in class before the test. Pages 85–89 questions 1 to 4, 9 to 11, 15 to 22

## Composite and Inverse Functions

Complete the following questions for Sunday, the 18th of September. Composite Functions Pages 60–61 questions 1, 2, 5, 6, 9, 10, 12, 17, 19, 24, 25 Inverse Functions Pages 68–69 questions 15, 16, 17, 24, 30, 31, 34, 35, 36, 37, 38

## Applications of Derivatives

Complete the following questions for Sunday, the 18th of September. Pages 751–752 questions 1–6, 8, 14

## Indefinite Integrals

Complete the following questions on indefinite integrals for tomorrow’s lesson. Page 780 questions 1, 3, 7, 8, 9, 10, 12, 13, 14

## 12 HL Composite Functions Homework

Let $$f(x)=x^2$$ and $$g(x)=x-1$$. 1. Find the range of $$f$$ and $$g$$, assuming the domain for both is $$\mathbb{R}$$. 2. Find the range of $$f$$ and $$g$$, assuming the domain for both is $$[-2,\infty[$$. 3. Find the value of each of the functions below when $$x=4$$. a) $$f\circ g$$ b) $$g\circ f$$ 4. Find the range of each of the functions in Question 3.

## Finding Derivatives

Here are a few questions to look at that involve applications of the new techniques and results (the chain rule, the product rule, and derivatives of exponential functions) we’ve recently covered. Make sure to start these before our next lesson, and aim to have them completed by Monday. Pages 715–716 questions 3, 7, 9, 11 Pages 728 questions 1 a b e h i, 4, 6, 9, 10
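As a quick worked illustration of the product and chain rules mentioned above (this example is mine, not one of the textbook questions): $$\frac{d}{dx}\left(x^2e^{3x}\right)=2x\,e^{3x}+x^2\cdot 3e^{3x}=(2x+3x^2)e^{3x}.$$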
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4720456600189209, "perplexity": 766.1203581401476}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600401598891.71/warc/CC-MAIN-20200928073028-20200928103028-00196.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-7-section-7-5-systems-of-inequalities-exercise-set-page-865/86
## Precalculus (6th Edition) Blitzer a. $BMI \approx 16.9$ b. Underweight. a. Given $H=66\ \text{in},\ W=105\ \text{lb}$, we have $BMI=\frac{703(105)}{66^2}\approx16.9$ b. Using the BMI graph for females together with the result from part (a), we can identify that this person is underweight.
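A quick numerical check of the arithmetic above (added for illustration only; the formula is the standard BMI formula for pounds and inches used in the exercise):

```python
# Verify the BMI computation: BMI = 703 * W / H**2 with W in pounds, H in inches.
H, W = 66, 105
bmi = 703 * W / H ** 2
print(round(bmi, 1))  # 16.9
```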
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9164295196533203, "perplexity": 1120.31709605695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399820.9/warc/CC-MAIN-20200528135528-20200528165528-00138.warc.gz"}
https://cstheory.stackexchange.com/questions/42749/densest-k-subgraph-problem-for-outerplanar-graphs/42754
# Densest k subgraph problem for outerplanar graphs?

The densest k subgraph problem aims to find a subgraph $$H$$ of a graph $$G$$ with exactly $$k$$ vertices that maximizes the number of edges $$|E(H)|$$. Does anyone know if there exists a polynomial-time algorithm for this problem under the restriction that $$G$$ is outerplanar? (Note: I am specifically asking for an algorithm, not a PTAS, and I want $$G$$ to be outerplanar, not $$b$$-outerplanar for some $$b > 1$$).

• Is $k$ a constant, or is it given as part of the input? – Emil Jeřábek supports Monica Apr 19 '19 at 10:40
• Also, since $|V(H)|$ is fixed, you are simply maximizing $|E(H)|$, right? – Emil Jeřábek supports Monica Apr 19 '19 at 10:42
• This can be done in polynomial time by dynamic programming. – Gamow Apr 19 '19 at 12:43
• In my view densest-k-subgraph is not particularly interesting for "hereditarily sparse" graphs. – Chandra Chekuri Apr 20 '19 at 15:00
• @ChandraChekuri Can you explain a bit more what you mean by this? First, what exactly is a "hereditarily sparse" graph? Does this include e.g. graphs with low degeneracy? Because in these, Densest-k-Subgraph does have many applications, so it is at least interesting from an application point of view. – C Komus Apr 20 '19 at 18:43

According to N. Bourgeois, A. Giannakos, G. Lucarelli, I. Milis, and V.T. Paschos, "Exact and approximation algorithms for densest $$k$$-subgraph", WALCOM'13, LNCS, vol. 7748, Springer-Verlag (2013), pp. 114–125, the Densest-$$k$$-Subgraph problem can be solved in $$O(2^{\mathrm{tw}(G)}\cdot k \cdot ((\mathrm{tw}(G)^2)+k)\cdot |X|)$$ time when a tree decomposition $$(X,T)$$ of the input graph $$G$$ is given. Since outerplanar graphs have treewidth at most two, Densest-$$k$$-Subgraph can be solved in $$O(k^2 n)$$ time on outerplanar graphs with $$n$$ vertices.
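To make the problem statement concrete, here is a small brute-force reference implementation (added for illustration; it is not the treewidth dynamic program from the cited paper and is only practical for small graphs, but it is handy for sanity-checking faster code). The example graph is invented.

```python
# Brute-force reference for Densest-k-Subgraph: try every k-subset of vertices
# and count the edges of the induced subgraph.  Exponential, for testing only.
from itertools import combinations

def densest_k_subgraph_bruteforce(vertices, edges, k):
    """Return (best_edge_count, best_vertex_set) over all k-subsets."""
    edge_sets = [frozenset(e) for e in edges]
    best, best_set = -1, None
    for subset in combinations(vertices, k):
        s = set(subset)
        count = sum(1 for e in edge_sets if e <= s)
        if count > best:
            best, best_set = count, s
    return best, best_set

# Example: a 4-cycle with one chord (an outerplanar graph);
# the densest 3-vertex subgraph has 3 edges.
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(densest_k_subgraph_bruteforce(V, E, 3))  # (3, {1, 2, 3})
```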
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 16, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6661198139190674, "perplexity": 383.81107509701025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251687958.71/warc/CC-MAIN-20200126074227-20200126104227-00539.warc.gz"}
https://leetcode.ca/2021-07-19-1872-Stone-Game-VIII/
Formatted question description: https://leetcode.ca/all/1872.html

# 1872. Stone Game VIII

Hard

## Description

Alice and Bob take turns playing a game, with Alice starting first. There are n stones arranged in a row. On each player’s turn, while the number of stones is more than one, they will do the following:

1. Choose an integer x > 1, and remove the leftmost x stones from the row.
2. Add the sum of the removed stones’ values to the player’s score.
3. Place a new stone, whose value is equal to that sum, on the left side of the row.

The game stops when only one stone is left in the row. The score difference between Alice and Bob is (Alice's score - Bob's score). Alice’s goal is to maximize the score difference, and Bob’s goal is to minimize the score difference.

Given an integer array stones of length n where stones[i] represents the value of the i-th stone from the left, return the score difference between Alice and Bob if they both play optimally.

Example 1:

Input: stones = [-1,2,-3,4,-5]
Output: 5
Explanation:
• Alice removes the first 4 stones, adds (-1) + 2 + (-3) + 4 = 2 to her score, and places a stone of value 2 on the left. stones = [2,-5].
• Bob removes the first 2 stones, adds 2 + (-5) = -3 to his score, and places a stone of value -3 on the left. stones = [-3].
The difference between their scores is 2 - (-3) = 5.

Example 2:

Input: stones = [7,-6,5,10,5,-2,-6]
Output: 13
Explanation:
• Alice removes all stones, adds 7 + (-6) + 5 + 10 + 5 + (-2) + (-6) = 13 to her score, and places a stone of value 13 on the left. stones = [13].
The difference between their scores is 13 - 0 = 13.

Example 3:

Input: stones = [-10,-12]
Output: -22
Explanation:
• Alice can only make one move, which is to remove both stones. She adds (-10) + (-12) = -22 to her score and places a stone of value -22 on the left. stones = [-22].
The difference between their scores is (-22) - 0 = -22.

Constraints:
• n == stones.length
• 2 <= n <= 10^5
• -10^4 <= stones[i] <= 10^4

## Solution

First, calculate the prefix sums of stones such that prefixSums[i] is the sum of all elements from stones[0] to stones[i]. Then use dynamic programming. Create an array dp of length n such that dp[i] represents the maximum difference if Alice chooses x = i + 1 for the first time. Obviously, dp[n - 1] = prefixSums[n - 1]. For i from n - 2 to 1, there is dp[i] = prefixSums[i] - maxDiff, where maxDiff is the maximum value from dp[i + 1] to dp[n - 1]. Update maxDiff at each index. Finally, return maxDiff.

    class Solution {
        public int stoneGameVIII(int[] stones) {
            int length = stones.length;
            if (length == 2)
                return stones[0] + stones[1];
            int[] prefixSums = new int[length];
            prefixSums[0] = stones[0];
            for (int i = 1; i < length; i++)
                prefixSums[i] = prefixSums[i - 1] + stones[i];
            int[] dp = new int[length];
            dp[length - 1] = prefixSums[length - 1];
            int maxDiff = dp[length - 1];
            for (int i = length - 2; i > 0; i--) {
                dp[i] = prefixSums[i] - maxDiff;
                maxDiff = Math.max(maxDiff, dp[i]);
            }
            return maxDiff;
        }
    }
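For reference, a Python sketch of the same prefix-sum DP (added here for illustration, not part of the original post); it folds dp and maxDiff into one running maximum and checks the three examples above.

```python
from itertools import accumulate

def stone_game_viii(stones):
    # prefix[i] = stones[0] + ... + stones[i]
    prefix = list(accumulate(stones))
    n = len(stones)
    # best score difference available to the player about to move,
    # considering only moves that take at least i+1 stones (scanned right to left)
    best = prefix[n - 1]
    for i in range(n - 2, 0, -1):
        best = max(best, prefix[i] - best)
    return best

assert stone_game_viii([-1, 2, -3, 4, -5]) == 5
assert stone_game_viii([7, -6, 5, 10, 5, -2, -6]) == 13
assert stone_game_viii([-10, -12]) == -22
```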
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2093028724193573, "perplexity": 2149.8310682735205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662512229.26/warc/CC-MAIN-20220516172745-20220516202745-00717.warc.gz"}
http://blog.shuningbian.net/2004/05/yet-another-code-update.php
## May 09, 2004

Yet *another* code update. :-)

Sim::getRandom The Simulation class now creates a random object when it's created, for use in other classes. This a) reduces overhead from creating new random objects, and b) ensures the numbers are fairly random.

Starfish The speed is now assigned as follows: speed += speed*sim.getRandom().nextDouble(); This ensures the starfish have different speeds, an attribute which gives some starfish a massive advantage. Tim, a suggestion would be to write a Starfish constructor that takes speed as a parameter, and assigns the new Starfish the given speed +/- a variation. This would lead to "survival of the fittest" emergence.

I have just finished The Great Hunt in the Wheel of Time series, and have started reading The Dragon Reborn. This be a nice series, right up there with A Song of Ice and Fire.

Glen just informed me I have another MA1001 assignment due, this makes 4 in 2 weeks. Busy busy....

Cheers, Steve
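A rough Python sketch of the "inherit the parent's speed, plus or minus a small variation" suggestion above; the class and attribute names are invented for illustration and do not match the actual project code.

```python
import random

class Starfish:
    """Toy sketch: offspring inherit the parent's speed plus a small variation."""
    def __init__(self, speed, rng, variation=0.1):
        # one shared Random instance plays the role of Sim::getRandom
        self.speed = speed * (1.0 + rng.uniform(-variation, variation))

    def spawn(self, rng):
        # children start from the parent's (possibly mutated) speed
        return Starfish(self.speed, rng)

rng = random.Random(42)
population = [Starfish(1.0, rng) for _ in range(10)]
next_gen = [s.spawn(rng) for s in population]
print(sorted(s.speed for s in next_gen))
```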
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25951793789863586, "perplexity": 4305.182346323812}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823303.28/warc/CC-MAIN-20181210034333-20181210055833-00596.warc.gz"}
https://link.springer.com/article/10.1007/BF01404567
Numerische Mathematik, Volume 31, Issue 4, pp 377–403

# Smoothing noisy data with spline functions

Estimating the correct degree of smoothing by the method of generalized cross-validation

• Peter Craven • Grace Wahba

Article

## Summary

Smoothing splines are well known to provide nice curves which smooth discrete, noisy data. We obtain a practical, effective method for estimating the optimum amount of smoothing from the data. Derivatives can be estimated from the data by differentiating the resulting (nearly) optimally smoothed spline. We consider the model $$y_i = g(t_i)+\varepsilon_i$$, $$i=1, 2, \ldots, n$$, $$t_i\in[0, 1]$$, where $$g\in W_2^{(m)}=\{f:f,f^{\prime},\ldots,f^{(m-1)}\ \text{abs. cont.},\ f^{(m)}\in\mathscr{L}_2[0,1]\}$$, and the $$\{\varepsilon_i\}$$ are random errors with $$E\varepsilon_i=0$$, $$E\varepsilon_i\varepsilon_j=\sigma^2\delta_{ij}$$. The error variance $$\sigma^2$$ may be unknown. As an estimate of g we take the solution $$g_{n,\lambda}$$ to the problem: Find $$f\in W_2^{(m)}$$ to minimize $$\frac{1}{n}\sum\limits_{j = 1}^n {(f(t_j ) - y_j )^2 + \lambda \int\limits_0^1 {(f^{(m)} (u))^2 du} }$$. The function $$g_{n,\lambda}$$ is a smoothing polynomial spline of degree 2m−1. The parameter λ controls the tradeoff between the “roughness” of the solution, as measured by $$\int\limits_0^1 {[f^{(m)} (u)]^2 du}$$, and the infidelity to the data as measured by $$\frac{1}{n}\sum\limits_{j = 1}^n {(f(t_j ) - y_j )^2 }$$, and so governs the average square error $$R(\lambda; g)=R(\lambda)$$ defined by $$R(\lambda ) = \frac{1}{n}\sum\limits_{j = 1}^n {(g_{n,\lambda } (t_j ) - g(t_j ))^2 }$$. We provide an estimate $$\hat \lambda$$, called the generalized cross-validation estimate, for the minimizer of R(λ). The estimate $$\hat \lambda$$ is the minimizer of V(λ) defined by $$V(\lambda ) = \frac{1}{n}\parallel (I - A(\lambda ))y\parallel ^2 /\left[ {\frac{1}{n}{\text{Trace(}}I - A(\lambda ))} \right]^2$$, where $$y=(y_1, \ldots, y_n)^t$$ and $$A(\lambda)$$ is the n×n matrix satisfying $$(g_{n,\lambda}(t_1), \ldots, g_{n,\lambda}(t_n))^t=A(\lambda) y$$. We prove that there exists a sequence of minimizers $$\tilde \lambda = \tilde \lambda (n)$$ of EV(λ), such that as the (regular) mesh $$\{t_i\}_{i=1}^n$$ becomes finer, $$\mathop {\lim }\limits_{n \to \infty } ER(\tilde \lambda )/\mathop {\min }\limits_\lambda ER(\lambda ) \downarrow 1$$. A Monte Carlo experiment with several smooth g's was tried with m=2, n=50 and several values of $$\sigma^2$$, and typical values of $$R(\hat \lambda )/\mathop {\min }\limits_\lambda R(\lambda )$$ were found to be in the range 1.01–1.4. The derivative g′ of g can be estimated by $$g'_{n,\hat \lambda } (t)$$. In the Monte Carlo examples tried, the minimizer of $$R_D (\lambda ) = \frac{1}{n}\sum\limits_{j = 1}^n {(g'_{n,\lambda } (t_j ) - g'(t_j ))^2}$$ tended to be close to the minimizer of R(λ), so that $$\hat \lambda$$ was also a good value of the smoothing parameter for estimating the derivative.

### Subject Classifications

MOS: 65D10, CR: 5.17, MOS: 65D25
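The following numerical sketch is not from the paper; it only illustrates the GCV criterion V(λ) defined in the summary, using a discrete second-difference (Whittaker-type) smoother as a stand-in for the exact m=2 spline penalty. The data, noise level, and λ grid are made up.

```python
# Minimal sketch of generalized cross-validation (GCV) with a discrete
# second-difference smoother standing in for the m=2 spline penalty.
import numpy as np

rng = np.random.default_rng(0)
n = 50
t = np.linspace(0, 1, n)
g = np.sin(2 * np.pi * t)                      # smooth "true" function
y = g + 0.2 * rng.standard_normal(n)           # noisy observations

D = np.diff(np.eye(n), n=2, axis=0)            # second-difference operator
P = D.T @ D                                    # roughness penalty matrix

def smoother_matrix(lam):
    # minimiser of (1/n)||f - y||^2 + lam * ||D f||^2  gives  f = A(lam) y
    return np.linalg.solve(np.eye(n) + n * lam * P, np.eye(n))

def V(lam):
    A = smoother_matrix(lam)
    resid = (np.eye(n) - A) @ y
    return (resid @ resid / n) / (np.trace(np.eye(n) - A) / n) ** 2

lams = np.logspace(-8, 0, 60)
lam_hat = lams[np.argmin([V(l) for l in lams])]
f_hat = smoother_matrix(lam_hat) @ y
print(lam_hat, np.mean((f_hat - g) ** 2))      # chosen lambda and average square error
```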
### References

1. Abramowitz, M., Stegun, I.: Handbook of mathematical functions with formulas, graphs, and mathematical tables. U.S. Department of Commerce, National Bureau of Standards Applied Mathematics Series No. 55, pp. 803–819, 1964
2. Aronszajn, N.: Theory of reproducing kernels. Trans. Amer. Math. Soc. 68, 337–404 (1950)
3. Golomb, M.: Approximation by periodic spline interpolants on uniform meshes. J. Approximation Theory 1, 26–65 (1968)
4. Golub, G., Heath, M., Wahba, G.: Generalized cross validation as a method for choosing a good ridge parameter, to appear, Technometrics
5. Golub, G., Reinsch, C.: Singular value decomposition and least squares solutions. Numer. Math. 14, 403–420 (1970)
6. Hudson, H.M.: Empirical Bayes estimation. Technical Report #58, Stanford University, Department of Statistics, Stanford, Cal., 1974
7. Kimeldorf, G., Wahba, G.: A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Ann. Inst. Statist. Math. 41, 495–502 (1970)
8.
9. Reinsch, C.M.: Smoothing by spline functions. Numer. Math. 10, 177–183 (1967)
10. Reinsch, C.M.: Smoothing by spline functions, II. Numer. Math. 16, 451–454 (1971)
11. Schoenberg, I.J.: Spline functions and the problem of graduation. Proc. Nat. Acad. Sci. (USA) 52, 947–950 (1964)
12. Wahba, G.: Convergence rates for certain approximate solutions to first kind integral equations. J. Approximation Theory 7, 167–185 (1973)
13. Wahba, G.: Smoothing noisy data with spline functions. Numer. Math. 24, 383–393 (1975)
14. Wahba, G.: Practical approximate solutions to linear operator equations when the data are noisy. SIAM J. Numer. Anal. 14, 651–667 (1977)
15. Wahba, G., Wold, S.: A completely automatic French curve: Fitting spline functions by cross-validation. Comm. Statist. 4, 1–17 (1975)
16. Wahba, G., Wold, S.: Periodic splines for spectral density estimation: The use of cross-validation for determining the degree of smoothing. Comm. Statist. 4, 125–141 (1975)
17. Wahba, G.: A survey of some smoothing problems and the method of generalized cross validation for solving them. University of Wisconsin-Madison, Statistics Dept., Technical Report #457. In: Proceedings of the Conference on Applications of Statistics, Dayton, Ohio (P.R. Krishnaiah, ed.) June 14–18, 1976
18. Wahba, G.: Improper priors, spline smoothing and the problem of guarding against model errors in regression. J. Roy. Statist. Soc., Ser. B. To appear
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9433746933937073, "perplexity": 9160.75345757379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104204.40/warc/CC-MAIN-20170818005345-20170818025345-00196.warc.gz"}
https://planetmath.org/HNNExtension
# HNN extension

The HNN extension group $G$ for a group $A$ is constructed from a pair of isomorphic subgroups $B\stackrel{\phi}{\cong}C$ in $A$, according to the formula $G=\frac{A*\langle t\mid-\rangle}{N}$, where $\langle t\mid-\rangle$ is the (infinite cyclic) free group on one generator, $*$ is the free product and $N$ is the normal closure of $\{tbt^{-1}\phi(b)^{-1}\colon b\in B\}$.

As an example, take a surface bundle $F\subset E\to S^{1}$. The homotopy long exact sequence of this bundle implies that the fundamental group $\pi_{1}(E)$ is given by $\pi_{1}(E)=\langle x_{1},\ldots,x_{k},t\mid\Pi=1,\ tx_{i}t^{-1}=\phi(x_{i})\rangle$, where $k$ is the genus of the surface and the relation $\Pi$ is $[x_{1},x_{2}][x_{3},x_{4}]\cdots[x_{k-1},x_{k}]$ for an orientable surface or $x_{1}^{2}x_{2}^{2}\cdots x_{k}^{2}$ for a non-orientable one. Here $\phi$ is the isomorphism induced by a self-homeomorphism of $F$.

Title: HNN extension. Canonical name: HNNExtension. Entry type: Definition. Classification: msc 20E06. Related: GroupExtension. Owner: juanman (12619). Last modified: 2013-03-22.
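A standard illustrating example (added here; it is not part of the original entry): take $A=\mathbb{Z}=\langle a\rangle$ with subgroups $B=\langle a\rangle$ and $C=\langle a^{2}\rangle$ and $\phi(a)=a^{2}$. The construction then gives the Baumslag-Solitar group

    % HNN extension of Z = <a> along phi : <a> -> <a^2>, a |-> a^2
    G \;=\; \frac{\mathbb{Z} * \langle t\mid-\rangle}{N}
      \;=\; \langle a,\, t \mid t\,a\,t^{-1} = a^{2}\rangle
      \;=\; \mathrm{BS}(1,2).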
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 18, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9691519737243652, "perplexity": 348.5205996735332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704804187.81/warc/CC-MAIN-20210126233034-20210127023034-00160.warc.gz"}
https://derhaeg.be/doc/ghc-8.2.2/html/libraries/ghc-8.2.2/NameShape.html
ghc-8.2.2: The GHC API

NameShape

Synopsis

# Documentation

data NameShape #

A NameShape is a substitution on Names that can be used to refine the identities of a hole while we are renaming interfaces (see RnModIface). Specifically, a NameShape for ns_module_name A defines a mapping from {A.T} (for some OccName T) to some arbitrary other Name.

The most intriguing thing about a NameShape, however, is how it's constructed. A NameShape is *implied* by the exported AvailInfos of the implementor of an interface: if an implementor of signature H exports M.T, you implicitly define a substitution from {H.T} to M.T. So a NameShape is computed from the list of AvailInfos that are exported by the implementation of a module, or successively merged together by the export lists of signatures which are joining together.

It's not the most obvious way to go about doing this, but it does seem to work!

NB: Can't boot this and put it in NameShape because then we start pulling in too many DynFlags things.

Constructors

NameShape

Fields:
• ns_mod_name :: ModuleName
• ns_exports :: [AvailInfo]
• ns_map :: OccEnv Name

Create an empty NameShape (i.e., the renaming that would occur with an implementing module with no exports) for a specific hole mod_name.

mkNameShape :: ModuleName -> [AvailInfo] -> NameShape #

Create a NameShape corresponding to an implementing module for the hole mod_name that exports a list of AvailInfos.

Given an existing NameShape, merge it with a list of AvailInfos with Backpack style mix-in linking. This is used solely when merging signatures together: we successively merge the exports of each signature until we have the final, full exports of the merged signature. What makes this operation nontrivial is what we are supposed to do when we want to merge in an export for M.T when we already have an existing export {H.T}. What should happen in this case is that {H.T} should be unified with M.T: we've determined a more *precise* identity for the export at OccName T. Note that we don't do unrestricted unification: only name holes from ns_mod_name ns are flexible. This is because we have a much more restricted notion of shaping than in Backpack'14: we do shaping *as* we do type-checking. Thus, once we shape a signature, its exports are *final* and we're not allowed to refine them further.

The export list associated with this NameShape (i.e., what the exports of an implementing module which induces this NameShape would be).

Given a Name, substitute it according to the NameShape implied substitution, i.e. map {A.T} to M.T, if the implementing module exports M.T.

Like substNameShape, but returns Nothing if no substitution works.
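The merging rule described above can be mimicked in a few lines of ordinary code. The following is a loose Python analogy (an added illustration, not GHC code; the dictionary representation and function name are assumptions): occurrence names map either to a hole such as {H.T} or to a concrete name such as M.T, and merging an export refines a hole to a concrete identity, while clashing concrete names are rejected.

    # Rough analogy only: a "name shape" as a dict from an occurrence name (e.g. "T")
    # to either a hole ("{H.T}") or a concrete name ("M.T").
    def merge_export(shape, hole_mod, occ, concrete):
        current = shape.get(occ)
        hole = "{" + hole_mod + "." + occ + "}"
        if current is None or current == hole:
            shape[occ] = concrete          # refine the hole to the concrete identity
        elif current != concrete:
            raise ValueError(f"conflict for {occ}: {current} vs {concrete}")
        return shape

    shape = {"T": "{H.T}"}
    merge_export(shape, "H", "T", "M.T")   # {H.T} is unified with M.T
    print(shape)                           # {'T': 'M.T'}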
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22608636319637299, "perplexity": 5348.469489541139}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249501174.94/warc/CC-MAIN-20190223122420-20190223144420-00492.warc.gz"}
https://mathematica.stackexchange.com/questions/197251/how-can-i-know-the-true-range-of-the-values-returned-by-audiolocalmeasurements
# How can I know the true range of the values returned by AudioLocalMeasurements? I need to normalize the values of different AudioLocalMeasurements results but I can't seem to find any info regarding the range of values in which each measurement is expressed. For example, I am assuming that the SpectralCentroid, which should be expressed in hertz, probably has an interval similar to that of the audible spectrum, but it seems Wolfram has no documentation on this. Is there a way to sort this out in Mathematica or should I be looking for external sources about the standardization of audio descriptors?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45584872364997864, "perplexity": 283.78130292387294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573052.26/warc/CC-MAIN-20190917040727-20190917062727-00207.warc.gz"}
https://www.physicsforums.com/threads/drag-terminal-velocity-of-a-solid-sphere.682848/
# Drag - Terminal Velocity of a solid sphere

1. Apr 3, 2013

### MrWinesy

1. The problem statement, all variables and given/known data

A solid sphere, 20 mm in diameter, with σ (specific gravity) = 1.3, is dropped in water (μ = 1*10^-3, ρ = 1000). Determine the terminal velocity of the sphere. (Hint: guess the value of the drag coefficient, then iterate.)

2. Relevant equations

Fd = (1/2)*Cd*ρ*(U^2)*A

3. The attempt at a solution

Tried guessing the drag coefficient, but I have no confidence in the estimate or in the next step. (Also, the relationship connecting viscosity, specific gravity and density would be much appreciated.)

2. Apr 3, 2013

### rude man

What they have in mind, I guess, is this:
1. Guess at the terminal velocity.
2. Determine the Reynolds number for a sphere in water at that velocity.
3. Determine the drag coefficient based on the Reynolds number.
4. Compute the drag force based on Cd and v.
5. Compare with your guess of v.
6. Re-guess v, etc.

There are several websites that together can give you all the info you need:
1. Kinematic viscosity of water.
2. Reynolds number for a sphere at a given velocity.
3. Drag coefficient as a function of the Reynolds number for a sphere.

Along the way you can pick up any theory and data you didn't know.
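A minimal numerical sketch of the iteration suggested above, assuming White's approximate drag correlation Cd ≈ 24/Re + 6/(1 + sqrt(Re)) + 0.4 for a smooth sphere (any standard Cd(Re) curve from the sources mentioned would do); the initial guess and tolerance are arbitrary choices.

    # Fixed-point iteration for the terminal velocity of the 20 mm, SG = 1.3 sphere in water.
    import math

    d, sg, rho_w, mu, g = 0.020, 1.3, 1000.0, 1.0e-3, 9.81
    rho_s = sg * rho_w                       # sphere density from specific gravity
    vol = math.pi * d**3 / 6                 # sphere volume
    area = math.pi * d**2 / 4                # projected (frontal) area
    net_weight = (rho_s - rho_w) * vol * g   # weight minus buoyancy

    v = 0.1                                  # initial guess, m/s
    for _ in range(100):
        Re = rho_w * v * d / mu
        Cd = 24.0 / Re + 6.0 / (1.0 + math.sqrt(Re)) + 0.4   # White's approximation
        # Terminal-velocity force balance: 0.5 * Cd * rho_w * v^2 * area = net_weight
        v_new = math.sqrt(2.0 * net_weight / (Cd * rho_w * area))
        if abs(v_new - v) < 1e-6:
            break
        v = v_new

    print(f"Re = {Re:.0f}, Cd = {Cd:.2f}, terminal velocity = {v_new:.2f} m/s")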
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.89885014295578, "perplexity": 2016.149236070667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824899.43/warc/CC-MAIN-20171021205648-20171021225648-00530.warc.gz"}
https://experts.umn.edu/en/publications/deuterium-retention-and-thermal-conductivity-in-ion-beam-displace
# Deuterium retention and thermal conductivity in ion-beam displacement-damaged tungsten

G. R. Tynan, R. P. Doerner, J. Barton, R. Chen, S. Cui, M. Simmonds, Y. Wang, J. S. Weaver, N. Mara, S. Pathak

Research output: Contribution to journal › Article › peer-review

11 Scopus citations

## Abstract

Retention of plasma-implanted D is studied in W targets damaged by a Cu ion beam at up to 0.2 dpa with sample temperatures between 300 K and 1200 K. At a D plasma ion fluence of 10^24/m^2 on samples damaged to 0.2 dpa at 300 K, the retained D inventory is 4.6 × 10^20 D/m^2, about ∼5.5 times higher than in undamaged samples. The retained inventory drops to 9 × 10^19 D/m^2 for samples damaged to 0.2 dpa at 1000 K, consistent with onset of vacancy annealing at a rate sufficient to overcome the elevated rate of ion beam damage; at a damage temperature of 1200 K retention is nearly equal to values seen in undamaged materials. A nano-scale technique provides thermal conductivity measurements from the Cu-ion beam displacement-damaged region. We find the thermal conductivity of W damaged to 0.2 dpa at room temperature drops from the un-irradiated value of 182 ± 3.3 W/m K to 53 ± 8 W/m K.

Original language: English (US). Pages: 164-168 (5 pages). Journal: Nuclear Materials and Energy, Volume 12. https://doi.org/10.1016/j.nme.2017.03.024. Published - Aug 2017.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8732622861862183, "perplexity": 10081.215821176711}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141184870.26/warc/CC-MAIN-20201125213038-20201126003038-00590.warc.gz"}
http://sci-gems.math.bas.bg/jspui/handle/10525/3575
Please use this identifier to cite or link to this item: http://hdl.handle.net/10525/3575

Title: Application of Discrete Dividends to American Option Pricing
Authors: Koleva-Petkova, Dessislava; Milev, Mariyan
Keywords: Finite differences; discrete dividends; American options
Issue Date: 2017
Publisher: Institute of Mathematics and Informatics at the Bulgarian Academy of Sciences
Citation: Pliska Studia Mathematica Bulgarica, Vol. 27, No 1 (2017), pp. 37-46

Abstract: Dividends are a detail of financial instrument pricing that is often oversimplified. However, companies do declare (and pay out) flows which can be significant. In this paper we briefly review some known approaches to this topic and analyse a few known drawbacks with application to American option pricing. Because these options rely on numerical methods for their pricing, the way discrete dividends are applied to the chosen approach may affect the solution quality. As we show, for some methods there are flaws affecting the positivity and smoothness of the numerical solution, while others are too computationally heavy. We find that applying discrete dividends to an exponentially fitted scheme (the Duffy scheme) overcomes these problems and we manage to obtain a smooth and sensible solution.

2010 Mathematics Subject Classification: 35K10, 65N06, 91G60, 62P05.
Description: [Koleva-Petkova Dessislava; Колева-Петкова Десислава]; [Milev Mariyan; Милев Мариан]
URI: http://hdl.handle.net/10525/3575
ISSN: 0204-9805
Appears in Collections: 2017 Volume 27
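As a generic illustration of the mechanism the abstract refers to (this is not the exponentially fitted Duffy scheme analysed in the paper, and the grid, strike and dividend values are purely illustrative choices), a discrete cash dividend D is commonly imposed on a finite-difference grid as a jump condition across the ex-dividend date, after which the American early-exercise constraint is re-applied.

    # Sketch of the discrete-dividend jump condition V(S, t_d^-) = V(S - D, t_d^+)
    # on a finite-difference grid, followed by the American early-exercise check.
    import numpy as np

    S = np.linspace(0.0, 200.0, 401)        # asset-price grid
    K, D = 100.0, 5.0                       # strike and cash dividend (illustrative)
    V_after = np.maximum(K - S, 0.0)        # values just after the ex-date (payoff used as a stand-in)

    # Re-sample option values at the dividend-shifted asset level by interpolation.
    V_before = np.interp(np.clip(S - D, 0.0, None), S, V_after)

    # American put: re-impose the early-exercise constraint node by node.
    V_before = np.maximum(V_before, np.maximum(K - S, 0.0))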
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5109047293663025, "perplexity": 4290.7957501147675}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00090.warc.gz"}
http://link.springer.com/article/10.1007%2Fs12596-012-0098-5
Volume 41, Issue 4, pp 224-230

Date: 13 Nov 2012

# Light scattering by two concentric gold cylindrical hollow nanoshell

## Abstract

The scattering cross section of two concentric gold cylindrical hollow nanoshells (GCHNS) is obtained as a function of wavelength for different thicknesses of the two gold shells and different intershell spacings between them. Theoretical calculations show that both the intensity and the position of the scattering peak depend on these parameters for the two concentric GCHNS, and therefore the scattering peak can be tuned by changing these parameters.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8599586486816406, "perplexity": 1937.2138816512943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510257966.18/warc/CC-MAIN-20140728011737-00118-ip-10-146-231-18.ec2.internal.warc.gz"}
https://datascience.stackexchange.com/questions/20170/coursera-ml-does-the-choice-of-optimization-algorithm-affect-the-accuracy-of-m
# Coursera ML - Does the choice of optimization algorithm affect the accuracy of multiclass logistic regression?

I recently completed exercise 3 of Andrew Ng's Machine Learning on Coursera using Python.

When initially completing parts 1.4 to 1.4.1 of the exercise, I ran into difficulties ensuring that my trained model has the accuracy that matches the expected 94.9%. Even after debugging and ensuring that my cost and gradient functions were bug free, and that my predictor code was working correctly, I was still getting only 90.3% accuracy. I was using the conjugate gradient (CG) algorithm in scipy.optimize.minimize.

Out of curiosity, I decided to try another algorithm, and used Broyden–Fletcher–Goldfarb–Shanno (BFGS). To my surprise, the accuracy improved drastically to 96.5% and thus exceeded the expectation. The comparison of these two different results between CG and BFGS can be viewed in my notebook under the header Difference in accuracy due to different optimization algorithms.

Is the reason for this difference in accuracy due to the different choice of optimization algorithm? If yes, then could someone explain why?

Also, I would greatly appreciate any review of my code just to make sure that there isn't a bug in any of my functions that is causing this. Thank you.

EDIT: Here below I added the code involved in the question, on the request in the comments that I do so in this page rather than refer readers to the links to my Jupyter notebooks.

Model cost and gradient functions:

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def compute_cost_regularized(theta, X, y, lda):
        reg = lda/(2*len(y)) * np.sum(theta[1:]**2)
        return 1/len(y) * np.sum(-y @ np.log(sigmoid(X@theta))
                                 - (1-y) @ np.log(1-sigmoid(X@theta))) + reg

    def compute_gradient_regularized(theta, X, y, lda):
        XT = X.T
        beta = sigmoid(X@theta) - y
        regterm = lda/len(y) * theta
        # theta_0 does not get regularized, so a 0 is substituted in its place
        regterm[0] = 0
        gradient = (1/len(y) * XT@beta).T + regterm
        return gradient

Function that implements one-vs-all classification training:

    from scipy.optimize import minimize

    def train_one_vs_all(X, y, opt_method):
        theta_all = np.zeros((y.max()-y.min()+1, X.shape[1]))
        for k in range(y.min(), y.max()+1):
            grdtruth = np.where(y==k, 1, 0)
            results = minimize(compute_cost_regularized, theta_all[k-1,:],
                               args = (X, grdtruth, 0.1),
                               method = opt_method,
                               jac = compute_gradient_regularized)
            # optimized parameters are accessible through the x attribute
            theta_optimized = results.x
            # Assign the theta_optimized vector to the appropriate row in the
            # theta_all matrix
            theta_all[k-1,:] = theta_optimized
        return theta_all

Called the function to train the model with different optimization methods:

    theta_all_optimized_cg = train_one_vs_all(X_bias, y, 'CG')      # optimization performed using Conjugate Gradient
    theta_all_optimized_bfgs = train_one_vs_all(X_bias, y, 'BFGS')  # optimization performed using Broyden–Fletcher–Goldfarb–Shanno

We see that prediction results differ based on the algorithm used:

    def predict_one_vs_all(X, theta):
        return np.mean(np.argmax(sigmoid(X@theta.T), axis=1)+1 == y)*100

    In[16]: predict_one_vs_all(X_bias, theta_all_optimized_cg)
    Out[16]: 90.319999999999993

    In[17]: predict_one_vs_all(X_bias, theta_all_optimized_bfgs)
    Out[17]: 96.480000000000004

For anyone wanting to get any data to try the code, they can find it in my Github as linked in this post.

• Logistic regression should have a single stable minimum (like linear regression), so it is likely that something is causing this that you haven't noticed – Neil Slater Jul 4 '17 at 15:57
• So there must be guaranteed convergence to the minimum cost?
Would you be able to do a code review for me please? – AKKA Jul 4 '17 at 23:28
• If there's a lot of code you need reviewing, maybe post it on codereview.stackexchange.com - if it is only a small amount required to replicate the problem, you could add it to your question here (edit it in as a code block, please include enough to fully replicate the problem). – Neil Slater Jul 5 '17 at 6:51
• While it is true that ensuring a global minimum should give you the same result regardless of the optimization algorithm, there can be subtleties in the implementation of the algorithm (i.e. the methods to handle numerical stability etc) that may lead to slightly different solutions. These small differences in solutions may lead to larger performance differences when evaluated on a small test set. Maybe that is causing such a large performance difference in your case. And yes, in general, optimization algorithms can largely influence the learning outcome. Btw, I got the desired result in MATLAB. – Sal Jul 6 '17 at 6:30
• @NeilSlater: ok, I have just added the code directly into the question as an edit. Does it look ok? – AKKA Jul 6 '17 at 15:07

Limits of numerical accuracy and stability are causing the optimisation routines to struggle. You can see this most easily by changing the regularisation term to 0.0 - there is no reason why this should not work in principle, and you are not using any feature engineering that particularly needs it. With regularisation set to 0.0, you will see limits of precision reached and attempts to take log of 0 when calculating the cost function. The two different optimisation routines are affected differently, due to taking different sample points en route to the minimum.

I think that with the regularisation term set high, you remove the numerical instability, but at the expense of not seeing what is really going on with the calculations - in effect the regularisation terms become dominant for the difficult training examples.

You can offset some of the accuracy problems by modifying the cost function:

    def compute_cost_regularized(theta, X, y, lda):
        reg = lda/(2*len(y)) * np.sum(theta[1:]**2)
        return reg - 1/len(y) * np.sum(
            y @ np.log( np.maximum(sigmoid(X@theta), 1e-10) )
            + (1-y) @ np.log( np.maximum(1-sigmoid(X@theta), 1e-10) ) )

Also, to get some feedback during the training, you can add

    options = { 'disp': True }

to the call to minimize.

With this change, you can try with the regularisation term set to zero. When I do this, I get:

    In [156]: predict_one_vs_all(X_bias, theta_all_optimized_cg)
    Out[156]: 94.760000000000005

    In [157]: predict_one_vs_all(X_bias, theta_all_optimized_bfgs)
    /usr/local/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: overflow encountered in exp
      from ipykernel import kernelapp as app
    Out[157]: 98.839999999999989

The CG value of 94.76 seems to match the expected result nicely - so I wonder if this was done without regularisation. The BFGS value is still "better" although I am not sure how much I trust it given the warning messages during training and evaluation. To tell if this apparently better training result really translates into better digit detection, you would need to measure results on a hold-out test set.

• Really appreciate the analysis you've provided in your answer. I still have a question about the modification you made to the cost function, like with np.maximum(sigmoid(X@theta), 1e-10), how did you know to use 1e-10 as the threshold value?
Also, I noticed that you shifted the negative sign outside of the individual terms of the sum, and brought it out so that it's now reg, the regularization term, minus the sum term. Does this matter too? – AKKA Jul 8 '17 at 13:03
• As you suggested, I also tried setting the regularization term to 0.0, and not only do I get the divide by zero error, but the running time also becomes much longer! About the dividing by zero error, I don't quite understand why though. How did it come about? Has this got something to do with the implementation details of the algorithms? Pardon me as I'm not familiar with numerical methods... – AKKA Jul 8 '17 at 13:06
• @AKKA: I just chose 1e-10 arbitrarily, and the shuffling of terms around was a side-effect of me double checking and understanding the code. I don't think either makes a big difference. Technically it isn't a divide by zero, but a np.log( array_containing_a_zero ) which has occurred due to a large negative or positive sum in one or more examples during the optimisation search. – Neil Slater Jul 8 '17 at 13:21
• Because the code exponentiates then takes logs, the numbers you see can seem within reasonable bounds, but the interim calculations can be extreme. Some frameworks can resolve the expressions so that exponentiation and logs don't actually occur - but the maths for that is beyond me. – Neil Slater Jul 8 '17 at 13:26
• I see. Do you think then that the better results you have obtained could have been over-fitting? I guess that is why you said ultimately a test set is required to validate this... – AKKA Jul 8 '17 at 13:29

CG does not converge to the minimum as well as BFGS

If I may add an answer here to my own question too, credit is given to a good friend who volunteered to look at my code. He's not on Data Science stackexchange, and didn't feel the need to create an account just to post the answer up, so he's passed this chance to post over to me.

I would also reference @Neil Slater, as there is a chance his analysis on the numerical stability issue could account for this.

So the main premise behind my solution is:

We know that the cost function is convex, meaning it has no local minima, only a global minimum. Since the prediction using parameters trained with BFGS is better than those trained using CG, this implies that BFGS converged closer to the minimum than CG did. Whether or not BFGS converged to the global minimum, we can't say for sure, but we can definitely say that it is closer than CG is.

So if we take the parameters that were trained using CG, and pass them through the optimization routine using BFGS, we should see that these parameters get further optimized, as BFGS brings everything closer to the minimum. This should improve the prediction accuracy and bring it closer to the one obtained using plain BFGS training.
Here below is code that verifies this; variable names follow the same as in the question:

    # Copy the old array over, else only a reference is copied, and the
    # original vector gets modified
    theta_all_optimized_bfgs_from_cg = np.copy(theta_all_optimized_cg)

    for k in range(y.min(), y.max()+1):
        grdtruth = np.where(y==k, 1, 0)
        results = minimize(compute_cost_regularized, theta_all_optimized_bfgs_from_cg[k-1,:],
                           args = (X_bias, grdtruth, 0.1),
                           method = "BFGS",
                           jac = compute_gradient_regularized)
        # optimized parameters are accessible through the x attribute
        theta_optimized = results.x
        # Assign the theta_optimized vector to the appropriate row in the
        # theta_all matrix
        theta_all_optimized_bfgs_from_cg[k-1,:] = theta_optimized

During execution of the loop, only one of the iterations produced a message that showed a non-zero number of optimization routine iterations, meaning that further optimization was performed:

    Optimization terminated successfully.
             Current function value: 0.078457
             Iterations: 453
             Function evaluations: 455

And the results were improved:

    In[19]: predict_one_vs_all(X_bias, theta_all_optimized_bfgs_from_cg)
    Out[19]: 96.439999999999998

By further training the parameters, which were initially obtained from CG, through an additional BFGS run, we have further optimized them to give a prediction accuracy of 96.44%, which is very close to the 96.48% that was obtained by directly using only BFGS!

I updated my notebook with this explanation. Of course this raises more questions, such as why CG did not work as well as BFGS did on this cost function, but I guess those are questions meant for another post.

• I think you should still test this on a hold-out test set, to rule out BFGS being broken instead. However, I was wondering since answering, whether adding regularisation is making the loss surface less simple . . . meaning that BFGS results are strictly better in that situation, but become unstable without regularisation on this data set. – Neil Slater Jul 11 '17 at 12:47
• @NeilSlater: True, I agree that the best validation and standard practice is to run it on a testing dataset. Doing it on a test set was not part of the Coursera assignment though, so no such test sets were provided to us. I will have to take a chunk out of the original MNIST. What you said seems plausible, since without regularization, conjugate gradient improves. However, if the loss surface was truly simpler, then why would CG still perform poorer than BFGS, rather than the same? – AKKA Jul 11 '17 at 14:11
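As a side note on the debugging discussed in this thread, the analytic gradient can be compared against a finite-difference approximation with scipy.optimize.check_grad. A minimal sketch using the cost and gradient functions from the question (the class label and regularisation value below are arbitrary choices):

    # check_grad returns the norm of the difference between the analytic and
    # numerical gradients; it should be close to zero if the gradient is correct.
    from scipy.optimize import check_grad
    import numpy as np

    theta0 = np.zeros(X_bias.shape[1])
    grdtruth = np.where(y == 1, 1, 0)
    err = check_grad(compute_cost_regularized, compute_gradient_regularized,
                     theta0, X_bias, grdtruth, 0.1)
    print("gradient check error:", err)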
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4889749586582184, "perplexity": 1266.334315771536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738727.76/warc/CC-MAIN-20200811025355-20200811055355-00336.warc.gz"}
https://physics.stackexchange.com/questions/449333/why-cant-a-particle-penetrate-an-infinite-potential-barrier
# Why can't a particle penetrate an infinite potential barrier?

I am studying basic quantum theory. My question is: Why can't a particle penetrate an infinite potential barrier?

The reasoning I have applied is that the particles under consideration have finite energy, so to cross an infinite potential barrier a particle would require infinite energy. But I cannot think of the mathematical relation between potential and energy that would actually convince me that crossing an infinite potential barrier needs infinite energy. What is the relation between the potential and the energy of quantum mechanical particles?

• You start with the tunneling probability knowing that it is exponentially small in the finite barrier height, therefore if the latter is infinite the former is zero. Once you see this you may use the infinitely high potential well as a mathematical model for an impenetrable barrier. – hyportnex Dec 19 '18 at 15:53

The relation between the particle's wave function $$\psi(x)$$, potential $$V(x)$$ and energy is $$E = \int dx\ \psi^*(x)\left(-\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x)\right) \qquad (*)$$ Suppose $$V(x)$$ is bounded from below and is equal to $$+\infty$$ on some interval $$[x_1,x_2]$$. If $$\psi(x)\neq 0$$ for $$x\in[x_1,x_2]$$, then the energy $$E$$ is infinite. The term containing the second derivative is always non-negative, so it cannot compensate this infinity.

Update. This relation is well known in quantum mechanics. I didn't mention that the norm of a wave function is usually taken to be $$1$$: $$\int dx\ \psi^*(x)\psi(x) = 1$$ Under this condition the Schrodinger equation $$-\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x) = E\psi(x)$$ multiplied by $$\psi^*(x)$$ and integrated over $$x$$ gives the relation (*). The term $$-\frac{\hbar^2}{2m}\int dx\ \psi^*(x)\psi''(x)$$ corresponds to the kinetic energy of a particle, so it must be non-negative. Indeed, integration by parts leads to the following manifestly non-negative expression $$\frac{\hbar^2}{2m}\int dx\ \psi'^*(x)\psi'(x).$$ By the way, the quantity $$\psi''(x)/\psi(x)$$ can be either positive or negative.

• Can I arrive at the same conclusion from the following equation: $-\frac{\hbar^2}{2m}\frac{1}{\psi (x)}\frac{\partial ^2\psi (x)}{\partial x^2}+V(x)=E$ – Soumee Dec 19 '18 at 17:25
• Can you please tell from which equation the equation that you have mentioned has been derived. It seems like it must have been something like: $dE = dx\ \psi^*(x)\left(-\frac{\hbar^2}{2m}\psi''(x) + V(x)\psi(x)\right)$, which physically translates into a small amount of energy in the interval dx. So, can we find out the small amount of energy in the interval dx? If so, from where? – Soumee Dec 19 '18 at 17:34
• @Soumee I am not sure if you can come to this conclusion from the Schrodinger equation directly. You should consider properties of the $\psi''(x)/\psi(x)$ term in this case. I'll update my answer. – Gec Dec 19 '18 at 17:38
• Thirdly, since $\psi''(x)$ is non negative, the term $-\frac{\hbar^2}{2m}\psi''(x)$ as a whole is negative, which brings down the energy by a small amount from infinity, but ultimately $E=\infty$ – Soumee Dec 19 '18 at 17:39
• @Soumee I have updated my answer.
– Gec Dec 19 '18 at 17:59 Imagine a finite potential well of the form $$V(x) = \begin{cases} 0 & |x| < L/2 \\ V_0 & {\rm otherwise}\end{cases}$$ You can solve Schrodinger's equation in the usual way, by splitting the domain in three parts, the resulting wave function will look something like this $$\psi(x) = \begin{cases} \psi_1(x) & x < L/2 \\ \psi_2(x) & |x| \leq L/2 \\ \psi_3(x) & x > L/2\end{cases}$$ Inside the box $$\psi_2(x) \sim e^{\pm ikx}$$, but outside the box you will find $$\psi_3(x) \sim e^{-\alpha x}$$ where $$\alpha = \frac{\sqrt{2m(V_0 - E)}}{\hbar}$$ Now calculate the limit $$V_0\to\infty$$ (infinity potential barrier), and you will see that $$\psi_3(x)\to 0$$, same as $$\psi_1(x)$$. So in that sense the particle cannot penetrate the barrier and remains confined in the region $$|x| \leq L/2$$ • I think the OP is interested in a barrier where the potential goes back to 0 at some point instead of a well like you have here. Although the conclusion will ultimately look the same either way. – Aaron Stevens Dec 19 '18 at 16:10 • While this does answer the question for the OPs exact scenario it doesn't put to rest the issue for similar scenarios. You'd need to redo the full calculation each time for any possible V(x) on either side of the barrier (which is obviously not doable). – jacob1729 Dec 19 '18 at 18:07 • Hard to fathom why these critical comments were made. This is the correct physical answer, and it should have been accepted, instead of the other one which proves, uselessly, that either the wavefunction would be zero or else the total energy would be infinite. And next what? Would we have to proove then that the energy cannot be infinite? The leson is that, often, an argument based on a physical limit is more valuable than a mathematical "proof". – Kostas Apr 2 at 18:56 • @Kostas Thanks for the feedback, I certainly agree with your last statement ;) – caverac Apr 2 at 19:53 Gec's answer is the one I would consider rigorous, but the intuitive answer is this: Suppose you put a particle detector in the barrier. How often do you expect to measure a particle there? Answer: never, because if you did this then the particle after measurement would be in a position eigenstate that would force you to conclude it had infinite (expectation value of) energy. And we're disallowing that. The only states where there is no chance to ever measure the particle in the barrier are those with $$\psi=0$$ inside the barrier (or at least over a dense subset).
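A small numerical aside (added here, not part of the original thread) that makes the finite-well answer above quantitative: the evanescent tail $$\psi_3(x) \sim e^{-\alpha x}$$ has penetration depth $$1/\alpha = \hbar/\sqrt{2m(V_0 - E)}$$, which shrinks to zero as $$V_0 \to \infty$$. For an electron with $$E \approx 0$$:

    # Penetration depth 1/alpha of the evanescent tail for an electron,
    # for increasing barrier heights V0 (E taken as 0 for simplicity).
    import numpy as np

    hbar = 1.054571817e-34    # J s
    m_e = 9.1093837015e-31    # kg
    eV = 1.602176634e-19      # J

    for V0_eV in (1.0, 1e2, 1e4, 1e6):
        alpha = np.sqrt(2.0 * m_e * V0_eV * eV) / hbar
        print(f"V0 = {V0_eV:10.0f} eV  ->  1/alpha = {1.0/alpha:.2e} m")

The penetration depth falls off as $$V_0^{-1/2}$$ (about 0.2 nm already at 1 eV), so in the limit of an infinite barrier the wave function cannot extend into the barrier at all.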
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 27, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9545912742614746, "perplexity": 261.483582294493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255536.6/warc/CC-MAIN-20190520021654-20190520043654-00488.warc.gz"}
https://tex.stackexchange.com/questions/114833/biblatex-textcite-using-a-superscript-reference-number
# Biblatex \textcite using a superscript reference number

I would like to typeset the citation number produced by \textcite as a superscript number, rather than enclosed by brackets, in line with the rest of my document. Is it possible to typeset "Author ^1" when autocite is set to superscript, rather than the default "Author 1"?

    \documentclass{report}
    \usepackage[backend=bibtex8,autocite = superscript]{biblatex}
    \usepackage{filecontents}
    \begin{filecontents}{ABib.bib}
    @article {article1,
        AUTHOR = {Author, A. N.},
        TITLE = {A sample paper},
        JOURNAL = {Sample journal},
        VOLUME = {1},
        YEAR = {2013},
        NUMBER = {1},
        PAGES = {1--2}}
    \end{filecontents}

    \begin{document}
    Someone saw something \autocite{article1}.
    \textcite{article1} saw something.
    \end{document}

• You want \autocite, \textcite both to give superscripts for references? You do not want to use \supercite (which prints superscripted references)? – ach May 17 '13 at 19:56
• @ach: I would like the citation number that is at the end of \textcite to be superscript (since the superscript option is chosen for biblatex). The MWE produces a "^1" for the \autocite but \textcite produces an inline style reference number "[1]" after the author name and year. I would have expected \textcite to typeset the reference number using the same style as the other citations. – Chris May 17 '13 at 20:41
• I added an image which shows the issue. – Paul Stanley May 17 '13 at 21:15

I think this will do the job: it's largely a matter of copying the definitions pertinent to \supercite into the commands that define \textcite. Please note that I have "hardwired" this, in other words it will not adapt automatically to changes in "autocite", so if you stopped using "autocite=superscript" you would need to delete the redefinitions too. Also, as with other superscript references, pre- and postnotes are not printed, but a warning is written to the log.

    \documentclass{report}
    \usepackage[backend=bibtex8,autocite = superscript]{biblatex}
    \usepackage{filecontents}
    \begin{filecontents}{ABib.bib}
    @article {article1,
        AUTHOR = {Author, A. N.},
        TITLE = {A sample paper},
        JOURNAL = {Sample journal},
        VOLUME = {1},
        YEAR = {2013},
        NUMBER = {1},
        PAGES = {1--2}}
    \end{filecontents}

    \makeatletter

    \renewbibmacro*{textcite}{%
      \iffieldequals{namehash}{\cbx@lasthash}
        {\mkbibsuperscript{\supercitedelim}}
        {\cbx@tempa
         \ifnameundef{labelname}
           {\printfield[citetitle]{labeltitle}}
           {\printnames{labelname}}}%
      \ifnumequal{\value{citecount}}{1}
        {}
        {}%
      \mkbibsuperscript{\usebibmacro{cite}}%
      \savefield{namehash}{\cbx@lasthash}%
    }

    \DeclareCiteCommand{\textcite}
      {\let\cbx@tempa=\empty
       \undef\cbx@lasthash
       \iffieldundef{prenote}
         {}
         {\BibliographyWarning{Ignoring prenote argument}}%
       \iffieldundef{postnote}
         {}
         {\BibliographyWarning{Ignoring postnote argument}}}
      {\usebibmacro{citeindex}%
       \usebibmacro{textcite}}
      {}
      {}

    \makeatother

    \begin{document}
    Someone saw something \autocite{article1}.
    \textcite{article1} saw something.
    \end{document}

Producing: [image of the output, with the \textcite reference number set as a superscript]

• It is surprising to me that this is not the default behaviour for this set of options. The code you provided does the job I need though, so thank you. – Chris May 17 '13 at 21:31
• Wouldn't using \let\textcite=\autocite achieve the same result?
– fabikw Jan 19 '15 at 3:25

Another, easier possibility if using the natbib and biber options in the biblatex package is this:

    \usepackage[
        backend=biber,
        style=chem-acs,
        sortlocale=de_DE,
        natbib=true,
        url=false,
        doi=true,
        eprint=true,
        autocite=superscript
    ]{biblatex}

    \newcommand{\authorcite}[1]{\citeauthor{#1}\,\supercite{#1}}

\authorcite{CitationKey} will result in Author1 and Author^1

• Note that the method of defining citation commands by putting two or more \cite* commands together into a \newcommand has several shortcomings. It does not support pre- or postnotes (though that could be remedied, I suppose), and it does not support citation of several works at the same time properly. So generally, a \DeclareCiteCommand is to be preferred. – moewe Aug 10 '16 at 14:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8706314563751221, "perplexity": 3815.3647619817066}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998607.18/warc/CC-MAIN-20190618043259-20190618065259-00235.warc.gz"}
https://searxiv.org/search?author=Adam%20M.%20Lowrance
### Results for "Adam M. Lowrance" total 212116took 0.15s Alternating distances of knots and linksJun 26 2014An alternating distance is a link invariant that measures how far away a link is from alternating. We study several alternating distances and demonstrate that there exist families of links for which the difference between certain alternating distances ... More Chromatic homology, Khovanov homology, and torsionSep 12 2016Mar 14 2017In the first few homological gradings, there is an isomorphism between the Khovanov homology of a link and the categorification of the chromatic polynomial of a graph related to the link. In this article, we show that the categorification of the chromatic ... More Chromatic homology, Khovanov homology, and torsionSep 12 2016In the first few homological gradings, there is an isomorphism between the Khovanov homology of a link and the categorification of the chromatic polynomial of a graph related to the link. In this article, we show that the categorification of the chromatic ... More The Jones polynomial of an almost alternating linkJul 18 2017Dec 14 2017A link is almost alternating if it is non-alternating and has a diagram that can be transformed into an alternating diagram via one crossing change. We give formulas for the first two and last two potential coefficients of the Jones polynomial of an almost ... More The Khovanov width of twisted links and closed 3-braidsJan 15 2009Jan 16 2009Khovanov homology is a bigraded Z-module that categorifies the Jones polynomial. The support of Khovanov homology lies on a finite number of slope two lines with respect to the bigrading. The Khovanov width is essentially the largest horizontal distance ... More On knot Floer width and Turaev genusSep 05 2007To each knot $K\subset S^3$ one can associated its knot Floer homology $\hat{HFK}(K)$, a finitely generated bigraded abelian group. In general, the nonzero ranks of these homology groups lie on a finite number of slope one lines with respect to the bigrading. ... More Turaev genus, knot signature, and the knot homology concordance invariantsFeb 04 2010We give bounds on knot signature, the Ozsvath-Szabo tau invariant, and the Rasmussen s invariant in terms of the Turaev genus of the knot. The 2-Factor Polynomial Detects Even Perfect MatchingsDec 26 2018In this paper, we prove that the 2-factor polynomial, an invariant of a planar trivalent graph with a perfect matching, counts the number of 2- factors that contain the the perfect matching as a subgraph. Consequently, we show that the polynomial detects ... More Turaev genus and alternating decompositionsJul 10 2015Aug 08 2016We prove that the genus of the Turaev surface of a link diagram is determined by a graph whose vertices correspond to the boundary components of the maximal alternating regions of the link diagram. Furthermore, we use these graphs to classify link diagrams ... More A Turaev surface approach to Khovanov homologyJul 12 2011Jan 30 2013We introduce Khovanov homology for ribbon graphs and show that the Khovanov homology of a certain ribbon graph embedded on the Turaev surface of a link is isomorphic to the Khovanov homology of the link (after a grading shift). We also present a spanning ... More Invariants for Turaev genus one linksApr 12 2016Sep 30 2016The Turaev genus defines a natural filtration on knots where Turaev genus zero knots are precisely the alternating knots. 
We show that the signature of a Turaev genus one knot is determined by the number of components in its all-A Kauffman state, the ... More Extremal Khovanov homology of Turaev genus one linksDec 29 2018The Turaev genus of a link can be thought of as a way of measuring how non-alternating a link is. A link is Turaev genus zero if and only if it is alternating, and in this viewpoint, links with large Turaev genus are very non-alternating. In this paper, ... More Cube diagrams and 3-dimensional Reidemeister-like moves for knotsNov 03 2008May 23 2012In this paper we introduce a representation of knots and links called a cube diagram. We show that a property of a cube diagram is a link invariant if and only if the property is invariant under two types of cube diagram operations. A knot homology is ... More Torsion in thin regions of Khovanov homologyMar 13 2019In the integral Khovanov homology of links, the presence of odd torsion is rare. Homologically thin links, that is links whose Khovanov homology is supported on two adjacent diagonals, are known to only contain $\mathbb{Z}_2$ torsion. In this paper, we ... More On the Turaev genus of torus knotsMar 07 2017The Turaev genus and dealternating number of a link are two invariants that measure how far away a link is from alternating. We determine the Turaev genus of a torus knot with five or fewer strands either exactly or up to an error of at most one. We also ... More Multiplicity Amongst Wide Brown Dwarf Companions to Nearby Stars: Gliese 337CDMar 17 2005We present Lick Natural Guide Star Adaptive Optics observations of the L8 brown dwarf Gliese 337C, which is resolved for the first time into two closely separated (0"53+/-0"03), nearly equal magnitude components with a K_s flux ratio of 0.93+/-0.10. Companionship ... More Not Alone: Tracing the Origins of Very Low Mass Stars and Brown Dwarfs Through Multiplicity StudiesFeb 06 2006Feb 09 2006The properties of multiple stellar systems have long provided important empirical constraints for star formation theories, enabling (along with several other lines of evidence) a concrete, qualitative picture of the birth and early evolution of normal ... More A Sample of Very Young Field L Dwarfs and Implications for the Brown Dwarf "Lithium Test" at Early AgesAug 22 2008Using a large sample of optical spectra of late-type dwarfs, we identify a subset of late-M through L field dwarfs that, because of the presence of low-gravity features in their spectra, are believed to be unusually young. From a combined sample of 303 ... More Deep Near-Infrared Imaging of the rho Oph Cloud Core: Clues to the Origin of the Lowest-Mass Brown DwarfsJun 13 2010A search for young substellar objects in the rho Oph cloud core region has been made using the deep-integration Combined Calibration Scan images of the 2MASS extended mission in J, H and Ks bands, and Spitzer IRAC images at 3.6, 4.5, 5.8 and 8.0 microns. ... More $Extrasolar~Storms$: Pressure-dependent Changes In Light Curve Phase In Brown Dwarfs From Simultaneous $Hubble$ and $Spitzer$ ObservationsMay 09 2016We present $Spitzer$/IRAC Ch1 and Ch2 monitoring of six brown dwarfs during 8 different epochs over the course of 20 months. For four brown dwarfs, we also obtained simulataneous $HST$/WFC3 G141 Grism spectra during two epochs and derived light curves ... 
More Discovery of a Very Young Field L Dwarf, 2MASS J01415823-4633574Nov 15 2005While following up L dwarf candidates selected photometrically from the Two Micron All Sky Survey, we uncovered an unusual object designated 2MASS J01415823-4633574. Its optical spectrum exhibits very strong bands of vanadium oxide but abnormally weak ... More HST NICMOS Imaging of the Planetary-mass Companion to the Young Brown Dwarf 2MASS J1207334-393254Jul 21 2006Multi-band (0.9 to 1.6 um) images of the TW Hydrae Association (TWA) brown dwarf, 2MASS J1207334-393254 (also known as 2M1207), and its candidate planetary mass companion (2M1207b) were obtained on 2004 Aug 28 and 2005 Apr 26 with HST/NICMOS. The images ... More Ensemble Prediction of a Halo Coronal Mass Ejection Using Heliospheric ImagersDec 01 2017The Solar TErrestrial RElations Observatory (STEREO) and its heliospheric imagers (HI) have provided us the possibility to enhance our understanding of the interplanetary propagation of coronal mass ejections (CMEs). HI-based methods are able to forecast ... More Adaptive optics imaging survey of the Tucana-Horologium associationApr 07 2003We present the results of an adaptive optics (AO) imaging survey of the common associations of Tucana and Horologium, carried out at the ESO 3.6m telescope with the ADONIS/SHARPII system. Based on our observations of two dozen probable association members, ... More Giant Planet Companion to 2MASSW J1207334-393254Apr 29 2005We report new VLT/NACO imaging observations of the young, nearby brown dwarf 2MASSW J1207334-393254 and its suggested planetary mass companion (2M1207 b). Three epochs of VLT/NACO measurements obtained over nearly one year show that the planetary mass ... More A Giant Planet Candidate near a Young Brown DwarfSep 14 2004We present deep VLT/NACO infrared imaging and spectroscopic observations of the brown dwarf 2MASSWJ1207334-393254, obtained during our on-going adaptive optics survey of southern young, nearby associations. This 25 MJup brown dwarf, located 70 pc from ... More A Study of the Diverse T Dwarf Population Revealed by WISEJan 16 2013Feb 17 2013We report the discovery of 87 new T dwarfs uncovered with the Wide-field Infrared Survey Explorer (WISE) and three brown dwarfs with extremely red near-infrared colors that exhibit characteristics of both L and T dwarfs. Two of the new T dwarfs are likely ... More HST Rotational Spectral Mapping of Two L-Type Brown Dwarfs: Variability In and Out of Water Bands Indicates High-Altitude Haze LayersNov 11 2014We present time-resolved near-infrared spectroscopy of two L5 dwarfs, 2MASS J18212815+1414010 and 2MASS J15074759-1627386, observed with the Wide Field Camera 3 instrument on the Hubble Space Telescope (HST). We study the wavelength dependence of rotation-modulated ... More A companion to AB Pic at the planet/brown dwarf boundaryApr 29 2005We report deep imaging observations of the young, nearby star AB Pic, a member of the large Tucana-Horologium as sociation. We have detected a faint, red source 5.5" South of the star with JHK colors compatible with that of a young substellar L dwarf. ... More Cloud Atlas: Discovery of Rotational Spectral Modulations in a Low-mass, L-type Brown Dwarf Companion to a StarOct 23 2017Observations of rotational modulations of brown dwarfs and giant exoplanets allow the characterization of condensate cloud properties. As of now rotational spectral modulations have only been seen in three L-type brown dwarfs. We report here the discovery ... 
More Cloud Atlas: Rotational Spectral Modulations and potential Sulfide Clouds in the Planetary-mass, Late T-type Companion Ross 458CMar 26 2019Measurements of photometric variability at different wavelengths provide insights into the vertical cloud structure of brown dwarfs and planetary-mass objects. In seven Hubble Space Telescope consecutive orbits, spanning $\sim$10 h of observing time}, ... More Astrometric and Spectroscopic Confirmation of a Brown Dwarf Companion to GSC 08047-00232Dec 21 2004We report VLT/NACO imaging observations of the stars GSC 08047-00232 and HIP 6856, probable members of the large Tucana-Horologium association. During our previous ADONIS/SHARPII deep imaging survey, a substellar candidate companion was discovered around ... More The Circumstellar Disk of HD 141569 Imaged with NICMOSSep 06 1999Coronagraphic imaging with the Near Infrared Camera and Multi Object Spectrometer on the Hubble Space Telescope reveals a large, ~400 AU (4'') radius, circumstellar disk around the Herbig Ae/Be star HD 141569. A reflected light image at 1.1 micron shows ... More Cloud Atlas: Rotational Modulations in the L/T Transition Brown Dwarf Companion HN Peg BJan 29 2018Feb 17 2018Time-resolved observations of brown dwarfs' rotational modulations provide powerful insights into the properties of condensate clouds in ultra-cool atmospheres. Multi-wavelength light curves reveal cloud vertical structures, condensate particle sizes, ... More Cloud Atlas: Hubble Space Telescope Near-Infrared Spectral Library of Brown Dwarfs, Planetary-mass companions, and hot JupitersDec 10 2018Bayesian atmospheric retrieval tools can place constraints on the properties of brown dwarfs and hot Jupiters atmospheres. To fully exploit these methods, high signal-to-noise spectral libraries with well-understood uncertainties are essential. We present ... More Randomized Benchmarking of Clifford OperatorsNov 25 2018Randomized benchmarking is an experimental procedure intended to demonstrate control of quantum systems. The procedure extracts the average error introduced by a set of control operations. When the target set of operations is intended to be the set of ... More A numerical method for variational problems with convexity constraintsJul 26 2011Aug 23 2012We consider the problem of approximating the solution of variational problems subject to the constraint that the admissible functions must be convex. This problem is at the interface between convex analysis, convex optimization, variational problems, ... More No disk needed around HD 199143 BSep 11 2002We present new, high angular resolution images of HD 199143 in the Capricornus association, obtained with the adaptive optics system ADONIS+SHARPII at the ESO 3.6m Telescope of La Silla Observatory. HD 199143 and its neighbour star HD 358623 (separation ... More Cloud Atlas: Discovery of Patchy Clouds and High-amplitude Rotational Modulations In a Young, Extremely Red L-type Brown DwarfSep 15 2016Sep 19 2016Condensate clouds fundamentally impact the atmospheric structure and spectra of exoplanets and brown dwarfs but the connections between surface gravity, cloud structure, dust in the upper atmosphere, and the red colors of some brown dwarfs remain poorly ... More Cloud Atlas: High-Contrast Time-Resolved Observations of Planetary-Mass CompanionsJan 31 2019Feb 20 2019Directly-imaged planetary-mass companions offer unique opportunities in atmospheric studies of exoplanets. 
They share characteristics of both brown dwarfs and transiting exoplanets, therefore, are critical for connecting atmospheric characterizations ... More Infrared Views of the TW Hya DiskOct 15 2001The face-on disk around TW Hya is imaged in scattered light at wavelengths of 1.1 and 1.6 micron using the coronagraph in the Near Infrared Camera and Multi Object Spectrometer aboard the Hubble Space Telescope. Stellar light scattered from the optically ... More Finite difference methods for the Infinity Laplace and p-Laplace equationsJul 26 2011Dec 05 2012We build convergent discretizations and semi-implicit solvers for the Infinity Laplacian and the game theoretical $p$-Laplacian. The discretizations simplify and generalize earlier ones. We prove convergence of the solution of the Wide Stencil finite ... More Neutral strange particle production at mid unit rapidity in p+p collisions at sqrt(s) = 200 GeVMar 12 2004Mar 19 2004We briefly discuss the methods of analysing reconstructed neutral strange particles in p+p collision data measured at sqrt(s) = 200 GeV taken using the Solenoidal Tracker At RHIC (STAR) detector. We present spectra for K0 short, lambda and anti-lambda ... More Lambda and Antilambda polarization from deep inelastic muon scatteringNov 04 1999Nov 18 1999We report results of the first measurements of Lambda and Antilambda polarization produced in deep inelastic polarized muon scattering on the nucleon. The results are consistent with an expected trend towards positive polarization with increasing x_F. ... More Identification, Interpretability, and Bayesian Word EmbeddingsApr 02 2019Social scientists have recently turned to analyzing text using tools from natural language processing like word embeddings to measure concepts like ideology, bias, and affinity. However, word embeddings are difficult to use in the regression framework ... More The generalized Levinger transformationDec 22 2007In this paper, we present new results relating the numerical range of a matrix $A$ with generalized Levinger transformation $\mathcal{L}(A,\alpha,\beta) = \alphaH_A +\betaS_A$, where $H_A$ and $S_A$, are respectively the Hermitian and skew-hermitian parts ... More Reliability Conditions in Quadrature AlgorithmsMar 06 2003The detection of insufficiently resolved or ill-conditioned integrand structures is critical for the reliability assessment of the quadrature rule outputs. We discuss a method of analysis of the profile of the integrand at the quadrature knots which allows ... More Avoidance of Partitions of a Three-element SetMar 20 2006Feb 15 2007Klazar defined and studied a notion of pattern avoidance for set partitions, which is an analogue of pattern avoidance for permutations. Sagan considered partitions which avoid a single partition of three elements. We enumerate partitions which avoid ... More Rigidity Sequences of Power Rationally Weakly Mixing TransformationsMar 19 2015We prove that a class of infinite measure preserving transformations, satisfying a "strong" weak mixing condition, generates all rigidity sequences of all conservative ergodic invertible measure preserving transformations defined on a Lebesgue $\sigma$-finite ... More The Möbius Function of a Restricted Composition PosetJun 09 2008Apr 19 2012We study a poset of compositions restricted by part size under a partial ordering introduced by Bj\"{o}rner and Stanley. We show that our composition poset $C_{d+1}$ is isomorphic to the poset of words $A_d^*$. This allows us to use techniques developed ... 
More Mixing sets for non-mixing transformationsApr 04 2016For different classes of measure preserving transformations, we investigate collections of sets that exhibit the property of lightly mixing. Lightly mixing is a stronger property than topological mixing, and requires that a lim inf is positive. In particular, ... More VLT/NACO Deep imaging survey of young, nearby austral starsJun 16 2009Since November 2002, we have conducted the largest deep imaging survey of the young, nearby associations of the southern hemisphere. Our goal is detection and characterization of substellar companions at intermediate (10--500 AU) physical separations. ... More A Sensitive Search for Variability in Late L Dwarfs: The Quest for WeatherJul 24 2006We have conducted a photometric monitoring program of 3 field late-L brown dwarfs looking for evidence of non-axisymmetric structure or temporal variability in their photospheres. The observations were performed using Spitzer/IRAC 4.5 and 8 micron bandpasses ... More Mechanical switching of ferro-electric rubberDec 09 2008At the A to C transition, smectic elastomers have recently been observed to undergo $\sim$35% spontaneous shear strains. We first explicitly describe how strains of up to twice this value could be mechanically or electrically induced in Sm-$C$ elastomers ... More Distributions of Long-Lived Radioactive Nuclei Provided by Star Forming EnvironmentsOct 17 2015Radioactive nuclei play an important role in planetary evolution by providing an internal heat source, which affects planetary structure and helps facilitate plate tectonics. A minimum level of nuclear activity is thought to be necessary --- but not sufficient ... More Adaptive finite difference methods for nonlinear elliptic and parabolic partial differential equations with free boundariesDec 09 2014Nov 18 2015Monotone finite difference methods provide stable convergent discretizations of a class of degenerate elliptic and parabolic Partial Differential Equations (PDEs). These methods are best suited to regular rectangular grids, which leads to low accuracy ... More High energy gamma-ray properties of the FR I radio galaxy NGC 1275Jan 13 2011We report on our study of the high-energy $\gamma-$ray emission from the FR I radio galaxy NGC 1275, based on two years of observations with the Fermi-LAT detector. Previous Fermi studies of NGC 1275 had found evidence for spectral and flux variability ... More Approximate homogenization of fully nonlinear elliptic PDEs: estimates and numerical results for Pucci type equationsOct 27 2017May 14 2018We are interested in the shape of the homogenized operator $\overline F(Q)$ for PDEs which have the structure of a nonlinear Pucci operator. A typical operator is $H^{a_1,a_2}(Q,x) = a_1(x) \lambda_{\min}(Q) + a_2(x)\lambda_{\max}(Q)$. Linearization of ... More Atomic and Molecular Opacities for Brown Dwarf and Giant Planet AtmospheresJul 11 2006Sep 06 2006We present a comprehensive description of the theory and practice of opacity calculations from the infrared to the ultraviolet needed to generate models of the atmospheres of brown dwarfs and extrasolar giant planets. Methods for using existing line lists ... More A partial differential equation for the rank one convex envelopeMay 10 2016May 11 2016In this article we introduce a Partial Differential Equation (PDE) for the rank one convex envelope. Rank one convex envelopes arise in non-convex vector valued variational problems \cite{BallElasticity, kohn1986optimal1, BallJames87, chipot1988equilibrium}. ... 
More Density-functional study of oxygen adsorption on Mo(112)Nov 15 2004Atomic oxygen adsorption on the Mo(112) surface has been investigated by means of first-principles total energy calculations. Among the variety of possible adsorption sites it was found that the bridge sites between two Mo atoms of the topmost row are ... More Modified Bernoulli Equation for Use with Combined Electro-Osmotic and Pressure-Driven MicroflowsJan 05 2012In this paper we present electro-osmotic (EO) flow within a more traditional fluid mechanics framework. Specifically, the modified Bernoulli equation (viz. the energy equation, the mechanical energy equation, the pipe flow equation, etc.) is shown to ... More An algebraic construction of twin-like modelsSep 19 2011Oct 04 2011If the generalized dynamics of K field theories (i.e., field theories with a non-standard kinetic term) is taken into account, then the possibility of so-called twin-like models opens up, that is, of different field theories which share the same topological ... More A partial differential equation for the rank one convex envelopeMay 10 2016Jan 12 2017In this article we introduce a Partial Differential Equation (PDE) for the rank one convex envelope. Rank one convex envelopes arise in non-convex vector valued variational problems \cite{BallElasticity, kohn1986optimal1, BallJames87, chipot1988equilibrium}. ... More Stochastic Gradient Descent with Polyak's Learning RateMar 20 2019Stochastic gradient descent (SGD) for strongly convex functions converges at the rate O(1/k). However, achieving the rate with the optimal constant requires knowledge of parameters of the function which are usually not available. In practise, the step ... More A Candidate Substellar Companion to CoD -33 7795 (TWA 5)Dec 09 1998We present the discovery of a candidate substellar object in a survey of young stars in the solar vicinity using the sensitivity and spatial resolution afforded by the NICMOS coronagraph on the Hubble Space Telescope. The H=12.1 mag object was discovered ... More Simultaneous X-ray and Infrared Observations of Sagittarius A*'s VariabilityDec 14 2018Emission from Sgr A* is highly variable at both X-ray and infrared (IR) wavelengths. Observations over the last ~20 years have revealed X-ray flares that rise above a quiescent thermal background about once per day, while faint X-ray flares from Sgr A* ... More Spitzer Space Telescope Mid-IR Light Curves of NeptuneAug 25 2016We have used the Spitzer Space Telescope in February 2016 to obtain high cadence, high signal-to-noise, 17-hour duration light curves of Neptune at 3.6 and 4.5 $\mu$m. The light curve duration was chosen to correspond to the rotation period of Neptune. ... More Deep search for companions to probable young brown dwarfsAug 15 2012We have obtained high contrast images of four nearby, faint, and very low mass objects 2MASSJ04351455-1414468, SDSSJ044337.61+000205.1, 2MASSJ06085283-2753583 and 2MASSJ06524851-5741376 (here after 2MASS0435-14, SDSS0443+00, 2MASS0608-27 and 2MASS0652-57), ... More Numerical methods for motion of level sets by affine curvatureOct 27 2016Oct 28 2016We study numerical methods for the nonlinear partial differential equation that governs the motion of level sets by affine curvature. We show that standard finite difference schemes are nonlinearly unstable. We build convergent finite difference schemes, ... 
More Empirical confidence estimates for classification by deep neural networksMar 21 2019How well can we estimate the probability that the classification, $C(f(x))$, predicted by a deep neural network is correct (or in the Top 5)? We consider the case of a classification neural network trained with the KL divergence which is assumed to generalize, ... More Quantum families of invertible maps and related problemsMar 19 2015Aug 11 2015The notion of families of quantum invertible maps ($C^*$-algebra homomorphisms satisfying Podle\'s condition) is employed to strengthen and reinterpret several results concerning universal quantum groups acting on finite quantum spaces. In particular ... More Approximate homogenization of convex nonlinear elliptic PDEsOct 27 2017We approximate the homogenization of fully nonlinear, convex, uniformly elliptic Partial Differential Equations in the periodic setting, using a variational formula for the optimal invariant measure, which may be derived via Legendre-Fenchel duality. ... More Subspace Restricted Boltzmann MachineJul 16 2014The subspace Restricted Boltzmann Machine (subspaceRBM) is a third-order Boltzmann machine where multiplicative interactions are between one visible and two hidden units. There are two kinds of hidden units, namely, gate units and subspace units. The ... More Filtered schemes for Hamilton-Jacobi equations: a simple construction of convergent accurate difference schemesNov 12 2014We build a simple and general class of finite difference schemes for first order Hamilton-Jacobi (HJ) Partial Differential Equations. These filtered schemes are convergent to the unique viscosity solution of the equation. The schemes are accurate: we ... More Approximate Convex Hulls: sketching the convex hull using curvatureFeb 27 2017Jun 14 2017Convex hulls are fundamental objects in computational geometry. In moderate dimensions or for large numbers of vertices, computing the convex hull can be impractical due to the computational complexity of convex hull algorithms. In this article we approximate ... More Temperature dependence of the diffusive conductivity of bilayer grapheneDec 09 2009Sep 06 2010Assuming diffusive carrier transport and employing an effective medium theory, we calculate the temperature dependence of bilayer graphene conductivity due to Fermi-surface broadening as a function of carrier density. We find that the temperature dependence ... More The Interstellar Line of Sight to the Interacting Galaxy NGC 5195Jan 14 2015We present moderately-high resolution echelle observations of the nucleus of NGC 5195, the line of sight to which samples intervening interstellar material associated with the outer spiral arm of M51. Our spectra reveal absorption from interstellar Na ... More UV Radiation Fields Produced by Young Embedded Star ClustersDec 20 2007A large fraction of stars form within young embedded clusters, and these environments produce a substantial ultraviolet (UV) background radiation field, which can provide feedback on the star formation process. To assess the possible effects of young ... More Discovery of gamma-ray emission from the Broad Line Radio Galaxy Pictor ADec 29 2011We report the discovery of high-energy \gamma-ray emission from the Broad Line Radio Galaxy (BLRG) Pictor A with a significance of ~5.8\sigma (TS=33.4), based on three years of observations with the Fermi Large Area Telescope (LAT) detector. The three-year ... 
More The Dirichlet problem for the convex envelopeJul 05 2010The Convex Envelope of a given function was recently characterized as the solution of a fully nonlinear Partial Differential Equation (PDE). In this article we study a modified problem: the Dirichlet problem for the underlying PDE. The main result is ... More Lipschitz regularized Deep Neural Networks converge and generalizeAug 28 2018Oct 03 2018Generalization of deep neural networks (DNNs) is an open problem which, if solved, could impact the reliability and verification of deep neural network architectures. In this paper, we show that if the usual fidelity term used in training DNNs is augmented ... More Permutation Statistics and $q$-Fibonacci NumbersApr 02 2009Jul 07 2009In a recent paper, Goyt and Sagan studied distributions of certain set partition statistics over pattern restricted sets of set partitions that were counted by the Fibonacci numbers. Their study produced a class of $q$-Fibonacci numbers, which they related ... More Preliminary Trigonometric Parallaxes of 184 Late-T and Y Dwarfs and an Analysis of the Field Substellar Mass Function into the "Planetary" Mass RegimeDec 04 2018We present preliminary trigonometric parallaxes of 184 late-T and Y dwarfs using observations from Spitzer (143), USNO (18), NTT (14), and UKIRT (9). To complete the 20-pc census of $\ge$T6 dwarfs, we combine these measurements with previously published ... More Y dwarf Trigonometric Parallaxes from the Spitzer Space TelescopeSep 18 2018Sep 19 2018Y dwarfs provide a unique opportunity to study free-floating objects with masses $<$30 M$_{Jup}$ and atmospheric temperatures approaching those of known Jupiter-like exoplanets. Obtaining distances to these objects is an essential step towards characterizing ... More Projective limits of quantum symmetry groups and the doubling construction for Hopf algebrasMay 20 2013The quantum symmetry group of the inductive limit of C*-algebras equipped with orthogonal filtrations is shown to be the projective limit of the quantum symmetry groups of the C*-algebras appearing in the sequence. Some explicit examples of such projective ... More Computing the Spectrum of a Heterotic Flux VacuumAug 31 2009Jun 27 2012We compute the massless spectra of a set of flux vacua of the heterotic string. The vacua we study include well-known non-Kahler T^2-fibrations over K3 with SU(3) structure and intrinsic torsion. Following gauged linear sigma models of these vacua into ... More Convergence and Rates for Fixed-Interval Multiple-Track Smoothing Using $k$-Means Type OptimizationJun 08 2015May 04 2016We address the task of estimating multiple trajectories from unlabelled data. This problem arises in many settings, one could think of the construction of maps of transport networks from passive observation of travellers, or the reconstruction of the ... More Twinlike models with identical linear fluctuation spectraDec 01 2011Recently, the possibility of so-called twinlike field theories has been demonstrated, that is, of different field theories which share the same topological defect solution with the same energy density. Further, purely algebraic conditions have been derived ... More An efficient linear programming method for Optimal TransportationSep 11 2015An efficient method for computing solutions to the Optimal Transportation (OT) problem with a wide class of cost functions is presented. The standard linear programming (LP) discretization of the continuous problem becomes intractible for moderate grid ... 
More Addressing the Fundamental Tension of PCGML with Discriminative LearningSep 10 2018Procedural content generation via machine learning (PCGML) is typically framed as the task of fitting a generative model to full-scale examples of a desired content distribution. This approach presents a fundamental tension: the more design effort expended ... More Finite Sample Inference for the Maximum Score EstimandMar 04 2019We provide a finite sample inference method for the structural parameters of a semiparametric binary response model under a conditional median restriction originally studied by Manski (1975, 1985). Our inference method is valid for any sample size and ... More Percolation thresholds on 2D Voronoi networks and Delaunay triangulationsJun 24 2009Sep 01 2009The site percolation threshold for the random Voronoi network is determined numerically for the first time, with the result p_c = 0.71410 +/- 0.00002, using Monte-Carlo simulation on periodic systems of up to 40000 sites. The result is very close to the ... More Exchange and spin-fluctuation superconducting pairing in cupratesApr 12 2001Feb 06 2002We propose a microscopical theory of superconductivity in CuO$_2$ layer within the effective two-band Hubbard model in the strong correlation limit. By applying a projection technique for the matrix Green function in terms of the Hubbard operators, the ... More Baryonic Collapse within Dark Matter Halos and the Formation of Gaseous Galactic DisksSep 12 2006This paper constructs an analytic framework for calculating the assembly of galactic disks from the collapse of gas within dark matter halos, with the goal of determining the surface density profiles. Gas parcels (baryons) fall through the potentials ... More Evolution of Planetary Orbits with Stellar Mass Loss and Tidal DissipationOct 09 2013Intermediate mass stars and stellar remnants often host planets, and these dynamical systems evolve because of mass loss and tides. This paper considers the combined action of stellar mass loss and tidal dissipation on planetary orbits in order to determine ... More Tack energy and switchable adhesion of liquid crystal elastomersAug 11 2012The mechanical properties of liquid crystal elastomers (LCEs) make them suitable candidates for pressure-sensitive adhesives (PSAs). Using the nematic dumbbell constitutive model, and the block model of PSAs, we study their tack energy and the debonding ... More Exact Simulation of Noncircular or Improper Complex-Valued Stationary Gaussian Processes using Circulant EmbeddingMay 17 2016This paper provides an algorithm for simulating improper (or noncircular) complex-valued stationary Gaussian processes. The technique utilizes recently developed methods for multivariate Gaussian processes from the circulant embedding literature. The ... More Linear Models for Flux VacuaNov 08 2006We construct worldsheet descriptions of heterotic flux vacua as the IR limits of N=2 gauge theories. Spacetime torsion is incorporated via a 2d Green-Schwarz mechanism in which a doublet of axions cancels a one-loop gauge anomaly. Manifest (0,2) supersymmetry ... More Single molecule narrowfield microscopy of protein-DNA binding dynamics in glucose signal transduction of live yeast cellsMay 25 2016Single-molecule narrowfield microscopy is a versatile tool to investigate a diverse range of protein dynamics in live cells and has been extensively used in bacteria. Here, we describe how these methods can be extended to larger eukaryotic, yeast cells, ... More
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8094633221626282, "perplexity": 2825.0552170061346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257481.39/warc/CC-MAIN-20190524004222-20190524030222-00221.warc.gz"}
https://www.physicsforums.com/threads/can-an-electron-in-an-s-orbital-exist-at-the-nucleus-r-0.862932/
# I Can an electron in an s-orbital exist at the nucleus (r=0)?

1. Mar 20, 2016

### Steven Hanna

My quantum textbook says that the probability of finding an electron in a 1s orbital between r and r+dr is given by

$$P(r)\,dr = \frac{4}{a^3}\, r^2\, e^{-2r/a}\, dr.$$

In this case, P(0) = 0 because of the r^2 factor, which is part of the volume element in spherical polar coordinates. Does this mean that it is impossible to find an electron at r = 0? I have learned the opposite in several chemistry classes, so I would very much appreciate it if someone could clear this up. Thanks!

2. Mar 20, 2016

Staff Emeritus

The volume element at r = exactly zero is 0. The nucleus is somewhat bigger than that, and its volume element is non-zero, and therefore so is the probability.

3. Mar 20, 2016

### Steven Hanna

Ah, of course. Thank you!
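A short addition for clarity (not part of the original thread): the apparent conflict is between the probability density at a point and the radial distribution obtained by integrating that density over a spherical shell. For the hydrogenic ground state with Bohr radius a,

$$\psi_{1s}(r) = \frac{1}{\sqrt{\pi a^3}}\, e^{-r/a}, \qquad |\psi_{1s}(0)|^2 = \frac{1}{\pi a^3} \neq 0,$$

so the density itself is actually largest at the nucleus. The quantity quoted from the textbook is the shell-integrated distribution

$$P(r)\,dr = |\psi_{1s}(r)|^2 \, 4\pi r^2 \, dr = \frac{4}{a^3}\, r^2\, e^{-2r/a}\, dr,$$

which vanishes at r = 0 only because the shell volume $4\pi r^2\,dr$ does, not because the electron cannot be found there.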
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8994593620300293, "perplexity": 833.0223611955195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567042.50/warc/CC-MAIN-20171215060102-20171215080102-00186.warc.gz"}
http://www.physicsforums.com/showthread.php?t=374076
# Classical Physics-Modern Physics

by Wannabeagenius Tags: classical, physics, physicsmodern

P: 92 Hi All, I'm not sure exactly what is considered classical physics. I always thought it was everything before Einstein's theory of special relativity, but recently I read that it is everything other than quantum mechanics. Please clarify this for me. Thank you, Bob

Sci Advisor P: 2,194 From what I gather, there are two distinctions, depending on the context in which you are talking about classical vs. modern physics. You pinpoint both of them very nicely. For an undergraduate course in classical physics, it's probably everything before 1900. However, I often hear general relativity referred to as a classical theory, especially when it's juxtaposed with a theory of quantum gravity. So, it depends on the context.

P: 707
Quote by Wannabeagenius: Hi All, I'm not sure exactly what is considered classical physics. I always thought it was everything before Einstein's theory of special relativity, but recently I read that it is everything other than quantum mechanics. Please clarify this for me. Thank you, Bob
I think classical physics is all physics that models Nature as deterministic and in principle predicts that all outcomes of an experiment are knowable exactly: Newton's laws, Maxwell's equations, and Einstein's theory of relativity. Non-classical physics predicts only probabilistic outcomes of experiments and models Nature as probabilistic rather than deterministic. Quantum mechanics, when first proposed, said that there is no possibility of a precisely predicted outcome of an experiment and that the underlying physical processes are fundamentally probabilistic. Nowadays I am not sure that this is still true.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.928532600402832, "perplexity": 483.1937054806046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
https://or.stackexchange.com/questions/4587/how-to-correct-this-scheduling-algorithm
# How to correct this scheduling algorithm?

I have a scheduling problem to solve. It's a resource-constrained project scheduling problem with time-varying resource availabilities. The objective is minimizing tardiness. The full detailed model is given here. I implemented a heuristic based on a priority rule: at each step, the set of tasks can be divided into 3 sets: the set $$A$$ of already-scheduled tasks, the set $$B$$ of "schedulable" tasks (tasks whose predecessors are already scheduled), and the set $$C$$ of tasks that are not "schedulable" yet. At each step, we compute the priority of the tasks in $$B$$ and select the one with the highest priority. It is then scheduled at the earliest possible time at which enough resources are available.

However, I want to find a way to somehow deal with this "infeasibility" case.

Remark: the green lines are the availabilities of the resource. Task A in blue is scheduled, and task B in grey is not scheduled because it requires two units whereas only 1 unit is available. If task A is scheduled first (because it has the highest priority), there will not be enough resources for task B. Thus, by the end, not all the tasks are scheduled (task B is not scheduled). However, if I had scheduled B first, it would be OK, since task A requires only one unit, and by the end all the tasks would be scheduled.

PS: Finding a feasible solution is NP-complete in this case.

• Infeasibility really has nothing to do with the algorithm. It is a property of the problem statement. The algorithm (math programming/heuristic/etc.) may or may not be able to solve a feasible problem, but that is a separate issue. It isn't really possible to give good advice on #3 and #4 without diving into the code, but some kind of local search is probably needed and it sounds like you are on your way to writing some kind of genetic algorithm, which can be very effective on these types of problems. Jul 25, 2020 at 17:39
• @AirSquid By infeasibility I meant that the algorithm gets stuck somehow and can't schedule all the tasks, which doesn't mean that another algorithm can't find a feasible solution or that some "backtracking" can't find a solution. Jul 25, 2020 at 17:58
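Not part of the original question — below is a minimal C++ sketch of the greedy priority-rule pass described above, under simplifying assumptions: a single renewable resource, integer time periods, a precomputed priority value per task, and tasks that cannot be placed are simply skipped (no tardiness objective, no backtracking). The names (Task, greedySchedule) and the small example are invented for illustration; the example reproduces the failure mode in the question — the higher-priority task A is placed first and task B can no longer fit, whereas scheduling B first would leave room for both.

#include <algorithm>
#include <cstdio>
#include <vector>
using namespace std;

// Hypothetical illustration of the greedy pass, not the question author's implementation.
struct Task {
    int duration;
    int demand;          // units of the single resource required per period
    double priority;     // value from the chosen priority rule
    vector<int> preds;   // indices of predecessor tasks
};

// avail[t] is the (time-varying) resource availability in period t.
// Returns start times; -1 marks a task the greedy pass could not place.
vector<int> greedySchedule(const vector<Task>& tasks, vector<int> avail) {
    int n = tasks.size(), horizon = avail.size();
    vector<int> start(n, -1);
    vector<bool> done(n, false);

    for (int step = 0; step < n; ++step) {
        // Set B: unscheduled tasks whose predecessors are all handled; pick highest priority.
        int best = -1;
        for (int i = 0; i < n; ++i) {
            if (done[i]) continue;
            bool eligible = true;
            for (int p : tasks[i].preds) eligible = eligible && done[p];
            if (eligible && (best == -1 || tasks[i].priority > tasks[best].priority))
                best = i;
        }
        if (best == -1) break;   // nothing schedulable

        // Earliest start after all predecessors finish...
        int est = 0;
        for (int p : tasks[best].preds)
            est = max(est, start[p] + tasks[p].duration);

        // ...and the first window with enough resource in every period of execution.
        for (int t = est; t + tasks[best].duration <= horizon; ++t) {
            bool fits = true;
            for (int k = t; k < t + tasks[best].duration; ++k)
                fits = fits && (avail[k] >= tasks[best].demand);
            if (fits) {
                start[best] = t;
                for (int k = t; k < t + tasks[best].duration; ++k)
                    avail[k] -= tasks[best].demand;
                break;
            }
        }
        done[best] = true;   // in this sketch, an unplaced task is simply marked handled
    }
    return start;
}

int main() {
    // Availability drops from 2 units to 1 unit in the last two periods.
    vector<int> avail = {2, 2, 2, 2, 1, 1};
    vector<Task> tasks = {
        {4, 1, 5.0, {}},   // task A: long, needs 1 unit, higher priority
        {2, 2, 3.0, {}},   // task B: short, needs 2 units
    };
    vector<int> start = greedySchedule(tasks, avail);
    for (size_t i = 0; i < start.size(); ++i)
        printf("task %zu starts at %d\n", i, start[i]);   // prints 0 and -1: B is left out
}

With these numbers, placing A first leaves only 1 free unit in every period, so B (demand 2) never fits; placing B first at t = 0 would still leave periods 2-5 with at least 1 unit for A. One common way to patch exactly this failure is to re-run the pass with perturbed priorities or to do a limited local search over the selection order once a task fails to fit, which is in line with the local-search suggestion in the comments above.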
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5618683099746704, "perplexity": 488.22692584267935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00398.warc.gz"}
https://www.tutorialspoint.com/super-palindromes-in-cplusplus
# Super Palindromes in C++

Suppose we have a positive integer N; it is said to be a superpalindrome if it is a palindrome and it is also the square of a palindrome. Now, given two positive integers L and R, we have to find the number of superpalindromes in the inclusive range [L, R].

So, if the input is like L = 5 and R = 500, then the output will be 3; the superpalindromes are 9, 121 and 484.

To solve this, we will follow these steps −

• Define a function helper(), this will take x, m, M, lb, ub. Here x is a palindromic candidate square root with m digits, M is the maximum number of digits allowed, and [lb, ub] is the range of valid square roots.
  • if x > ub, then −
    • return
  • if x >= lb and (x * x) is a palindrome, then −
    • (increase ans by 1)
  • for initialize i := 1, when m + 2 * i <= M, update (increase i by 1), do −
    • W := 10^(m + 2 * i - 1) + 1
    • w := 10^i
    • for initialize z := 1, when z <= 9, update (increase z by 1), do −
      • helper(z * W + x * w, m + 2 * i, M, lb, ub); this wraps x with the digit z at both ends, padded by i - 1 zeros on each side, so the result is again a palindrome
• From the main method, do the following −
  • lb := square root of L, ub := square root of R
  • M := log of ub base 10 + 1 (the number of digits of ub)
  • for initialize z := 0, when z <= 9, update (increase z by 1), do −
    • helper(z, 1, M, lb, ub)
    • helper(11 * z, 2, M, lb, ub)
  • return ans

Let us see the following implementation to get a better understanding −

## Example

#include <bits/stdc++.h>
using namespace std;
class Solution {
   int ans = 0;
public:
   int superpalindromesInRange(string L, string R) {
      // Search over palindromic square roots in [sqrt(L), sqrt(R)]
      long double lb = sqrtl(stol(L)), ub = sqrtl(stol(R));
      int M = log10l(ub) + 1;   // maximum number of digits of a root
      for (int z = 0; z <= 9; z++) {
         helper(z, 1, M, lb, ub);        // single-digit seeds: 0..9
         helper(11 * z, 2, M, lb, ub);   // two-digit seeds: 0, 11, 22, ..., 99
      }
      return ans;
   }
private:
   // x is a palindromic candidate root with m digits; extend it up to M digits
   void helper(long x, int m, int M, long double lb, long double ub) {
      if (x > ub)
         return;
      if (x >= lb && is_palindrome(x * x))
         ans++;
      for (int i = 1; m + 2 * i <= M; i++) {
         // z * W + x * w = z*10^(m+2i-1) + x*10^i + z, i.e. x wrapped with
         // the digit z at both ends and i - 1 zeros of padding on each side
         long W = powl(10, m + 2 * i - 1) + 1;
         long w = powl(10, i);
         for (int z = 1; z <= 9; z++)
            helper(z * W + x * w, m + 2 * i, M, lb, ub);
      }
   }
   bool is_palindrome(long x) {
      if (x == 0)
         return true;
      if (x % 10 == 0)
         return false;
      long left = x, right = 0;
      // peel digits from the right and compare against the remaining left part
      while (left >= right) {
         if (left == right || left / 10 == right)
            return true;
         right = 10 * right + (left % 10), left /= 10;
      }
      return false;
   }
};
int main() {
   Solution ob;
   cout << (ob.superpalindromesInRange("5", "500"));
}

## Input

"5", "500"

## Output

3
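A quick sanity check of the extension step (added here; it is not part of the original article): with x = 3 (so m = 1) and i = 1, W = 10^2 + 1 = 101 and w = 10, so z = 2 gives 2*101 + 3*10 = 232, exactly the palindrome obtained by wrapping 3 with the digit 2. Its square 53824 is then tested by is_palindrome and rejected, since 53824 is not a palindrome.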
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32361575961112976, "perplexity": 7991.628024932111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708046.99/warc/CC-MAIN-20221126180719-20221126210719-00414.warc.gz"}
http://mathhelpforum.com/discrete-math/162295-cartesian-product-universal-set.html
# Thread: Cartesian product of universal set.

1. ## Cartesian product of universal set.

Hi, I don't know how to prove this. $U \times A^C = (U \times A)^C$ What is the result of $U \times A$? I know $A \times \emptyset = \emptyset$ but have no idea about the universal set. Thanks for any help.

2. Originally Posted by truevein
Hi, I don't know how to prove this. $U \times A^C = (U \times A^C)$ What is the result of $U \times A$? I know $A \times \emptyset = \emptyset$ but have no idea about the universal set. Thanks for any help.
I believe you're trying to prove that $U\times\left(U-A\right)=U-\left(U\times A\right)$. Is that right?

3. Really sorry about the displacement of the parenthesis, I corrected my question.

4. We know that $\left( {\forall x} \right)\left[ {x \in U} \right]$, the universe. So if $(x,y)\in U\times A^c$ then $y\in A^c$, i.e. $y\notin A$, so $(x,y)\notin U\times A$ and hence $(x,y)\in (U\times A)^c$. Conversely, if $(u,w)\in (U\times A)^c$ (the complement being taken inside $U\times U$) then $u \notin U \vee w \notin A$. Again, we know that $u\in U$, so $w\notin A$. So $(u,w)\in U\times A^c$.
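A concrete check (added for illustration, not part of the original thread), taking the complement of pairs inside $U\times U$: let $U=\{1,2\}$ and $A=\{1\}$, so $A^c=\{2\}$. Then $U\times A^c=\{(1,2),(2,2)\}$, while $U\times A=\{(1,1),(2,1)\}$ and $(U\times A)^c=(U\times U)\setminus(U\times A)=\{(1,2),(2,2)\}$. The two sides agree, as the proof above asserts.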
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9887706637382507, "perplexity": 452.65350972422215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172050.87/warc/CC-MAIN-20170219104612-00308-ip-10-171-10-108.ec2.internal.warc.gz"}
https://socratic.org/questions/584e30db7c01496a96b98f7b
Algebra Topics

# Question 98f7b

Dec 12, 2016

${x}^{2} + \left(\frac{9}{16}\right) {x}^{2}$ transforms into $\left(\frac{25}{16}\right) {x}^{2}$ as shown below.

#### Explanation:

Factoring this expression gives:

$\left(1 + \frac{9}{16}\right) {x}^{2}$

To add 1 and $\frac{9}{16}$ we must put them over a common denominator of $16$:

$\left(\frac{16}{16}\cdot 1 + \frac{9}{16}\right){x}^{2} \to \left(\frac{16}{16} + \frac{9}{16}\right) {x}^{2} \to \left(\frac{25}{16}\right) {x}^{2}$
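A quick numerical check (not in the original answer): at $x = 4$, the left side is $16 + \frac{9}{16}\cdot 16 = 16 + 9 = 25$, and the right side is $\frac{25}{16}\cdot 16 = 25$, so the two expressions agree.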
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7125487327575684, "perplexity": 6932.987625559912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677230.18/warc/CC-MAIN-20191017222820-20191018010320-00108.warc.gz"}